How'd you program the morality of auto-cars?


LaCroix
Sith Acolyte
Posts: 5193
Joined: 2004-12-21 12:14pm
Location: Sopron District, Hungary, Europe, Terra

Re: How'd you program the morality of auto-cars?

Post by LaCroix »

salm wrote:That's pretty much the point. It is unpredictable for the programmer, so they have to find a way to make the robot decide in unique situations. Simply having a standard reaction like hitting the brakes isn't a good solution because the standard reaction might be the worst possible reaction.
Well, since the common human reaction to a problem with the car is hitting the brakes, I don't see why a robot would be at fault for doing that if it has no better response available. Again - the legal term you are looking for is "average competent person", i.e. a standard driver.
salm wrote:I am interested in how a robot would declare one person to die and the other to live in a scenario that would require at least one to die.
The robot doesn't care about who dies, for dying is not a thing it could or should calculate. Its job is to drive the vehicle. It cares about avoiding an impact. If there is no way to evade the primary impact without causing another, it will simply try to reduce speed as much as possible. If it does this, it has acted exactly according to the letter of the law*, and no matter what happens, you cannot sue anyone for it.

If somebody dies due to this - well, so what, that's life. Shit happens. If there were a human driver, the number of dead/injured would be the same or even worse, for the robot will definitely have initiated the stop earlier, thus reducing the impact significantly. No liability.

*Why letter of law?
Swerving to avoid a collision and creating another one makes you the instigator of that second collision - you could be sued for it. On the other hand, you can't be sued for not driving into the ditch to avoid a collision with someone suddenly crossing into your lane. Because THAT OTHER GUY is liable for it.
In fact, even if you crash into someone else while (out of reflex) evading the collision, the other guy still shares liability with you, as you wouldn't have needed to evade if it wasn't for him.
A minute's thought suggests that the very idea of this is stupid. A more detailed examination raises the possibility that it might be an answer to the question "how could the Germans win the war after the US gets involved?" - Captain Seafort, in a thread proposing a 1942 'D-Day' in Quiberon Bay

I do archery skeet. With a Trebuchet.
salm
Rabid Monkey
Posts: 10296
Joined: 2002-09-09 08:25pm

Re: How'd you program the morality of auto-cars?

Post by salm »

LaCroix wrote: The robot doesn't care about who dies, for dying is not a thing it could or should calculate.
Why not? It should be possible to have the robot calculate the reaction that is least likely to lead to a serious accident. If there are no options besides mowing down one guy or mowing down two guys, the robot should decide to mow down the single person. If the only possible reactions are mowing down one guy or mowing down one other guy, the computer has to decide which person to mow down. Since both reactions have the same outcome - one dead person - the computer has to decide which person dies. Or replace "dying" with "least damage caused" if you want to.
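
A minimal sketch, in pseudo-Java, of the kind of calculation being proposed here. Every name in it (Option, estimatedCasualties(), and so on) is invented for illustration and comes from no real driving system:

// Hypothetical: pick whichever available reaction has the lowest estimated
// harm. When two options tie (one death either way), the comparison breaks
// the tie arbitrarily - the earlier option in the list wins - which is
// exactly the "computer has to decide" point above.
Option leastHarmfulOption(List<Option> options) {
    Option best = options.get(0);
    for (Option o : options) {
        if (o.estimatedCasualties() < best.estimatedCasualties()) {
            best = o;
        }
    }
    return best;
}
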
LaCroix wrote:Its job is to drive the vehicle. It cares about avoiding an impact. If there is no way to evade the primary impact without causing another, it will simply try to reduce speed as much as possible. If it does this, it has acted exactly according to the letter of the law*, and no matter what happens, you cannot sue anyone for it.

If somebody dies due to this - well, so what, that's life. Shit happens. If there were a human driver, the number of dead/injured would be the same or even worse, for the robot will definitely have initiated the stop earlier, thus reducing the impact significantly. No liability.

*Why letter of law?
Swerving to avoid a collision and creating another one makes you the instigator of that second collision - you could be sued for it. On the other hand, you can't be sued for not driving into the ditch to avoid a collision with someone suddenly crossing into your lane. Because THAT OTHER GUY is liable for it.
In fact, even if you crash into someone else while (out of reflex) evading the collision, the other guy still shares liability with you, as you wouldn't have needed to evade if it wasn't for him.
If it turns out that swerving is generally a lot safer than braking, it would be easy to sign that into law. I think arguing from the law isn't really that interesting, because it is law created for human drivers. Robots might require a completely different set of laws to ensure an optimal outcome.
Darth Tanner
Jedi Master
Posts: 1445
Joined: 2006-03-29 04:07pm
Location: Birmingham, UK

Re: How'd you program the morality of auto-cars?

Post by Darth Tanner »

No, it only has to emergency stop. Swerving into people or traffic is not acceptable driving, even if through some Byzantine and contrived scenario it may result in fewer casualties. You definitely should not be building in decisions to murder people - let alone making them legally required - in order to avoid an accident.
Get busy living or get busy dying... unless there’s cake.
LaCroix
Sith Acolyte
Posts: 5193
Joined: 2004-12-21 12:14pm
Location: Sopron District, Hungary, Europe, Terra

Re: How'd you program the morality of auto-cars?

Post by LaCroix »

Why should a robot driver be held to different standards? Especially since moral decisions are already hard for humans - now the car not only has to try to avoid an accident, it also has to try to determine which life is less valuable. Killing a kid instead of two old people? Or killing a genius working on a cure for cancer instead of two people currently running from the cops because they just robbed a bank?

You cannot control all these variables, and you shouldn't.

The car has one job - driving.

The routine for impact is simple.
if (impactImminent()) {
    activateBrakes(); // calls sendWarningSignalForFollowingRobocars() to help other robocars avoid a crash
    if (enoughEmptySpaceToStop(Direction.right)) {
        steerRight();
    } else if (enoughEmptySpaceToStop(Direction.left)) {
        steerLeft();
    }
    waitForFullStop();
}

Trying to inject morality into this is unnecessary.
And in my opinion immoral in itself, as you are actively trying to decide in which situations someone deserves to die, for no reason. If anyone deserves to be hit, it is the one involved in the accident - not some innocent bystander who decided to walk on the less crowded side of the road.
A minute's thought suggests that the very idea of this is stupid. A more detailed examination raises the possibility that it might be an answer to the question "how could the Germans win the war after the US gets involved?" - Captain Seafort, in a thread proposing a 1942 'D-Day' in Quiberon Bay

I do archery skeet. With a Trebuchet.
The Duchess of Zeon
Gözde
Posts: 14566
Joined: 2002-09-18 01:06am
Location: Exiled in the Pale of Settlement.

Re: How'd you program the morality of auto-cars?

Post by The Duchess of Zeon »

In this situation I'd have seen the bus start to move and slammed the accelerator to power through the intersection before I lost manoeuvring room. That is not a normal human decision, as it requires training which the average person doesn't have, and a very deliberate approach to driving, which began when I intentionally started drilling myself to drive with the same mindfulness I had as a private pilot (I was in an extremely bad car accident while I was in pilot training, and decided to start applying the same mindset to my driving to avoid future collisions -- it's worked for the subsequent 12 years, at least). I would expect that an autonomous car in the same circumstance would, in principle, be capable of scanning its surroundings adequately to do the exact same thing -- determine that at velocity x a dangerous situation will exist at position y, and that accelerating to velocity x1 will clear position y before the situation occurs; or else determine that it won't, and then go into another track of the decision loop.
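
A back-of-the-envelope version of that "power through" check is plain constant-acceleration kinematics. The sketch below is illustrative only; every name, signature, and unit in it is invented rather than taken from any real autonomous-driving stack:

// Can the car clear the conflict point (position y) before the hazard
// occupies it, under maximum acceleration? Assumes v0 < vMax.
boolean canPowerThrough(double v0,              // current speed, m/s
                        double aMax,            // max acceleration, m/s^2
                        double vMax,            // top speed, m/s
                        double distanceToClear, // metres needed to clear y
                        double tHazard) {       // seconds until hazard reaches y
    double tAccel = (vMax - v0) / aMax; // time spent reaching top speed
    double d;
    if (tHazard <= tAccel) {
        // still accelerating the whole time: d = v0*t + a*t^2/2
        d = v0 * tHazard + 0.5 * aMax * tHazard * tHazard;
    } else {
        // accelerate to vMax, then cruise for the remaining time
        double dAccel = v0 * tAccel + 0.5 * aMax * tAccel * tAccel;
        d = dAccel + vMax * (tHazard - tAccel);
    }
    return d >= distanceToClear; // if false, fall through to braking instead
}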

Conversely, I don't want self-driving cars on the road until they're sufficiently well programmed to be able to make decisions like that reliably in test situations. There's no rush in getting them out. Of course, I don't believe the technology will mature for quite a long time, because most models will have human override, and that will cause the number of crashes to increase, which will prompt social backlash against the cars. The chance of anyone licensing a self-driving car without a steering wheel for regular use on all roads is presently low, which means the driver will inevitably take over at the worst possible time and drive their car directly into bystanders as their coffee and cereal plunge into their lap at once.
The threshold for inclusion in Wikipedia is verifiability, not truth. -- Wikipedia's No Original Research policy page.

In 1966 the Soviets find something on the dark side of the Moon. In 2104 they come back. -- Red Banner / White Star, a nBSG continuation story. Updated to Chapter 4.0 -- 14 January 2013.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: How'd you program the morality of auto-cars?

Post by Simon_Jester »

The problem of figuring out how to keep users from killing themselves with a user override is... yeah, challenging.
LaCroix wrote:
salm wrote:That's pretty much the point. It is unpredictable for the programmer, so they have to find a way to make the robot decide in unique situations. Simply having a standard reaction like hitting the brakes isn't a good solution because the standard reaction might be the worst possible reaction.
Well, since the common human reaction to a problem with the car is hitting the brakes, I don't see why a robot would be at fault for doing that if it has no better response available. Again - the legal term you are looking for is "average competent person", i.e. a standard driver.
Most of the time, hitting the brakes is the appropriate solution, while doing some kind of ninja-dodging-into-hazardous-obstacle tricks is less wise - especially if you brake well ahead of time, so drivers behind you have time to slow down themselves. If nothing else, hitting the brakes will greatly reduce the amount of energy packed into whatever collision you're about to experience (kinetic energy scales with the square of speed, so braking from 50 mph to 30 mph before impact removes roughly 64% of the crash energy), which means less damage done all around.
salm wrote:
LaCroix wrote: The robot doesn't care about who dies, for dying is not a thing it could or should calculate.
Why not? It should be possible to have the robot calculate the reaction that is least likely to lead to a serious accident. If there are no options besides mowing down one guy or mowing down two guys, the robot should decide to mow down the single person. If the only possible reactions are mowing down one guy or mowing down one other guy, the computer has to decide which person to mow down. Since both reactions have the same outcome - one dead person - the computer has to decide which person dies. Or replace "dying" with "least damage caused" if you want to.
I would honestly prefer to program the robot to follow a deontological approach, not a consequentialist one, because that's more in keeping with how robots work. They're much better at following a predictable ruleset that is simple and works 90% or more of the time (deontological) than at trying to calculate all the indefinite consequences of something complex and unforeseeable like 'decide who dies', where you're doing an evil thing if you calculate it wrong (consequentialist).
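
A toy illustration of that contrast, with every name invented: the deontological version is a fixed, ordered rule list whose behaviour can be enumerated in tests, while a consequentialist version would have to rank actions by expected deaths - a quantity no current sensor stack can reliably produce.

// Deontological: the same fixed rules, in the same order, every time.
void deontologicalResponse() {
    brakeHard();                       // rule 1: always shed speed first
    if (laneClear(Direction.right)) {  // rule 2: only steer into empty space
        steerRight();
    } else if (laneClear(Direction.left)) {
        steerLeft();
    }
    // Deliberately no estimateWhoDies() step: ranking actions by expected
    // deaths is the consequentialist move, and that quantity is exactly
    // what the sensors cannot give you.
}
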
This space dedicated to Vasily Arkhipov
Arthur_Tuxedo
Sith Acolyte
Posts: 5637
Joined: 2002-07-23 03:28am
Location: San Francisco, California

Re: How'd you program the morality of auto-cars?

Post by Arthur_Tuxedo »

salm wrote:
Arthur_Tuxedo wrote:These scenarios are fairly silly. If you need to make a split-second decision with no optimal outcome in traffic, you've already violated countless safety practices.
But that isn't necessarily true. In a system as complex as traffic, it is possible for unknown and unlikely things to happen which cannot be taken into account by a computer system. Perhaps some random shit suddenly falls onto the street, like a tree. A wild boar or a deer might appear out of nowhere, get hit by a car, gain some air, and fly directly into the windshield of oncoming traffic.

Now, sure, these are very unlikely scenarios, and I am sure that robot cars are a lot safer in general than human drivers if implemented correctly, but the question remains: in a "one of two people dies" scenario, how do you program the robot to kill the "correct" person?
Even these scenarios don't force the decisions you're talking about. If a tree falls onto the road and there's no time to brake, then neither the robot nor the human driver can avoid colliding with it, although the robot can recognize the hazard and apply brakes and evasive maneuvers more quickly and precisely. If it falls far enough ahead that the hazard can be avoided or hit at low speed, then that's what the program will do. The chances of such a sudden hazard appearing in a configuration where a pedestrian is so close to the tree that they must be hit to avoid the death of the occupants, yet was somehow not already injured by the falling tree itself, are vanishingly small, especially compared to the tens of thousands of annual fatalities caused by human drivers.

Wild boars and deer running into the road are perfect examples of what I was talking about. If a deer runs into the road and you can't stop or avoid it, you were driving too fast for the conditions and/or not paying enough attention. And while a tree, thicket, or brush that obscured the deer is impenetrable to the human eye, it is not impenetrable to radar and infrared sensors, so a situation that would emerge with almost no warning for a human would occur with plenty of warning for a machine.

Even in such an unlikely scenario that a self-driving car would have to "choose" who dies, the best practice is to simply follow a set procedure of applying the brakes and attempting to avoid the collision, while avoiding moral judgments, as others have pointed out.
"I'm so fast that last night I turned off the light switch in my hotel room and was in bed before the room was dark." - Muhammad Ali

"Dating is not supposed to be easy. It's supposed to be a heart-pounding, stomach-wrenching, gut-churning exercise in pitting your fear of rejection and public humiliation against your desire to find a mate. Enjoy." - Darth Wong
bilateralrope
Sith Acolyte
Posts: 5958
Joined: 2005-06-25 06:50pm
Location: New Zealand

Re: How'd you program the morality of auto-cars?

Post by bilateralrope »

AniThyng wrote:When the human driver makes a decision and kills the 'wrong' person, it's quite possible it goes to trial with the driver in the dock to determine if he did in fact act reasonably. If the robot car does something and causes injury, who goes to trial? The project manager? The programmer? The passenger? Who gets to sit there and testify that their code made the car take the best decision that any reasonable person would also have made?
Whoever the company that made the robot car decides to send. I expect them to get very good at blaming the manual override Duchess mentions.
salm wrote:
bilateralrope wrote:Name one where a robot car would do worse than a human. Be specific.
Why?
Because the standard a robot car needs to be held to is only that of being better than a human driver. So for a robot car to be a bad idea it needs to do worse than a human driver.

Insisting on perfection is a fallacy - an attempt to convince people to stick with the worse option (human drivers) because the robot car isn't perfect, even when it's much better.
salm wrote:I am interested in how a robot would declare one person to die and the other to live in a scenario that would require at least one to die.
I assume that if the system comes to the conclusion that both people have the same chance of death, it just picks a random target. But perhaps other people would like to see other "value" categories, like age for example. I am sure that racists would rather kill the black guy than the white guy, and other people again would rather kill the man than the woman.
Perhaps if the system knew that one of the targets has cancer and is likely to die soon anyway, it would pick the cancer candidate.
The problem with those scenarios is that they are so highly specific that they are too unlikely to be specifically programmed into the robot car.

So the robot car would rely on its general purpose rules. Which is what I did with the example scenarios you gave.

Then there will be the question of imperfect information. Take two parked cars and a convoluted scenario where hitting one of those cars is better than the other options, and the robot car has to decide which one. The robot car is unlikely to spend any CPU cycles on passengers inside other vehicles, as they don't matter most of the time, so it won't be able to identify which car is unoccupied. At this point the rule of thumb to follow would be the law and/or protecting the robot car's own passengers.
The Duchess of Zeon
Gözde
Posts: 14566
Joined: 2002-09-18 01:06am
Location: Exiled in the Pale of Settlement.

Re: How'd you program the morality of auto-cars?

Post by The Duchess of Zeon »

Google realized this: self-driving cars are a massive way to give people less driving experience and make them less capable drivers, yet the cars still have a steering wheel, so drivers will glance up from their coffee/burrito/newspaper/Angry Birds, see what they think is an impending crash but isn't, hastily overcorrect/panic, grab the steering wheel, and do something really stupid. Highly trained airplane pilots have crashed airplanes with autopilots doing this. Furthermore, the human social response to this kind of accident will be to ban self-driving cars as unsafe. Google realized that eliminating the steering wheel eliminates this problem, but we have no regulatory framework for insuring automobiles if the driver cannot be at fault, and the uptake of cars with no way for you to take control is, due to human nature, likely to be limited. So I don't anticipate full self-driving cars being successful for another fifty years. They may be rolled out, but will probably suffer severe backlash as a result of this problem. What I do expect is for the car to be able to completely take over in things like low-speed city driving and stop-and-go traffic, and for steady integration of systems which subtly prevent you from killing yourself by applying limits to the behavior you have just engaged in.
The threshold for inclusion in Wikipedia is verifiability, not truth. -- Wikipedia's No Original Research policy page.

In 1966 the Soviets find something on the dark side of the Moon. In 2104 they come back. -- Red Banner / White Star, a nBSG continuation story. Updated to Chapter 4.0 -- 14 January 2013.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: How'd you program the morality of auto-cars?

Post by Starglider »

LaCroix wrote:Trying to inject morality into this is unnecessary.
...and completely impractical. Aside from the fact that contemporary sensors and pattern recognition can't extract enough information to support most of these judgements, the dramatic increase in code complexity would make testing almost impossible. It's already extremely expensive and time-consuming to test even simple kinetics-focused driving systems, without the combinatorial explosion of 'ethical' situations. Furthermore, I suspect this would create more legal challenges and bad PR than it solves.
Sky Captain
Jedi Master
Posts: 1267
Joined: 2008-11-14 12:47pm
Location: Latvia

Re: How'd you program the morality of auto-cars?

Post by Sky Captain »

The Duchess of Zeon wrote:In this situation I'd have seen the bus start to move and slammed the accelerator to power through the intersection before I lost manoeuvring room.
A computer armed with data from various sensors would be very good at this. It would nearly instantly know whether increasing speed would avoid the collision, and initiate the action before a human even realized what was happening.
The Duchess of Zeon wrote:Google realized this: self-driving cars are a massive way to give people less driving experience and make them less capable drivers, yet the cars still have a steering wheel, so drivers will glance up from their coffee/burrito/newspaper/Angry Birds, see what they think is an impending crash but isn't, hastily overcorrect/panic, grab the steering wheel, and do something really stupid. Highly trained airplane pilots have crashed airplanes with autopilots doing this.
That could happen. Humans often misjudge speed and distance, so a maneuver that is perfectly safe from the computer's point of view - because everything is measured and calculated with an appropriate safety margin - may at first glance look like an imminent crash to a human, who then grabs the wheel and tries to correct the situation. Going without any kind of manual override may also be problematic. You want some override capability in case there is a software glitch, a problem with the sensors, or something else the computer can't deal with. Most of the time it would probably be better to have no override, but have one freak accident caused by some malfunction, with the passengers unable to do anything to stop the runaway car, and the company that made the car will face extreme public backlash and a monster lawsuit.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: How'd you program the morality of auto-cars?

Post by Simon_Jester »

The most obvious solution to me (which probably won't work from a legal or political perspective) is to put the manual override on a time delay. You can't take over the wheel unless you've actually been watching the road and continuously depressing a button on the steering wheel for five or ten seconds - which means you're hopefully paying attention to road conditions and not just screwing up the computer's carefully calculated maneuver with your own knee-jerk response.
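
A sketch of that interlock - essentially a dead-man timer in front of the override. All names and the five-second constant are illustrative only:

// Manual control unlocks only once the attention button has been held
// continuously for HOLD_MILLIS; releasing the button resets the timer.
static final long HOLD_MILLIS = 5_000;
long heldSince = -1;  // -1 means the button is not currently held

void onAttentionButton(boolean pressed, long nowMillis) {
    if (!pressed) {
        heldSince = -1;           // released: start over
    } else if (heldSince < 0) {
        heldSince = nowMillis;    // just pressed: start timing
    }
}

boolean manualOverrideAllowed(long nowMillis) {
    return heldSince >= 0 && (nowMillis - heldSince) >= HOLD_MILLIS;
}
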
This space dedicated to Vasily Arkhipov
salm
Rabid Monkey
Posts: 10296
Joined: 2002-09-09 08:25pm

Re: How'd you program the morality of auto-cars?

Post by salm »

So, since the experts on the matter are saying that at the moment, and in the not-so-distant future, a simple procedure like braking is the most reasonable one, do you also think that will stay that way in the more distant future? Or is it probable that once computers, sensors, and perhaps the possibility of linking your car to some sort of network - where every car knows exactly what the other cars are doing - come along, some other procedures will become better?

Every now and then we see tests with small flying and driving drones which can cooperate in a swarm and react based on the information the other drones send them. Wouldn't it make sense to link all cars together into some sort of swarm?
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

Re: How'd you program the morality of auto-cars?

Post by madd0ct0r »

The whole point of a swarm is that individuals are NOT linked; they simply follow deterministic rules based on available information about other members of the swarm.
We already see this with traffic and congestion updates resulting in the human car swarm reflowing onto alternative road routes.
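
A sketch of the kind of local, deterministic rule meant here, where each car reacts only to what it can observe about the vehicle ahead, with no car-to-car link; all names and thresholds are invented:

// Every car running the same rule produces swarm-like flow without any link.
static final double MIN_GAP = 20, MAX_GAP = 60; // metres, illustrative only

void adjustSpeed(Car ahead) {
    double gap = ahead.position() - position();  // clear road ahead, metres
    if (gap < MIN_GAP) {
        decelerate();   // too close: open the gap (damps stop-and-go waves)
    } else if (gap > MAX_GAP && speed() < speedLimit()) {
        accelerate();   // lagging: close up so flow is preserved
    }
    // otherwise: hold current speed
}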

On the small scale - i.e. an unexpected accident - I'd be very surprised if doing anything other than stopping turns out to be the most useful behaviour in all but fringe cases. Even manoeuvring to clear a path for emergency services is best done at low speed AFTER stopping.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Arthur_Tuxedo
Sith Acolyte
Posts: 5637
Joined: 2002-07-23 03:28am
Location: San Francisco, California

Re: How'd you program the morality of auto-cars?

Post by Arthur_Tuxedo »

I look forward to the lack of rubber-necking in the future. You should hear me holler when I've been stuck in bumper-to-bumper traffic for 30 minutes and it turns out to have been caused by something like a spare tire on the grass next to the road. People who don't see the point of self-driving cars probably don't know that almost 100% of all traffic jams are caused by idiots pointlessly changing lanes, causing the whole traffic stack to slinky back for miles, which an AI car would never do. Or imagine the impact of a lane closure if all the drivers knew miles ahead of time that it was closed, and cars in the open lanes left enough space for cars in the closed lane to merge seamlessly with no macho bullshit, vs. today, where a closed lane causes a complete standstill for miles.

Also, once legislatures wake up and realize that the manual overrides they've been mandating are responsible for nearly 100% of all accidents, vehicle design can be radically improved. Removing the steering column, shifter, and pedals, and all linkages between the cabin and controls, would eliminate a lot of weight by itself, saving on cost and increasing efficiency. The central motor and transmission could be replaced by a small motor near each wheel, further increasing efficiency. The cabin could be redesigned to be more like an entertainment room with large screens and surround sound, and seats could be turned to face each other or, for families sick of looking at each other, rotated to face away from the others. Once the human-driven cars are banned from the freeways (or perhaps given a lane to share with Amish horse-carts), cars can stack up for a slipstream effect, greatly reducing wind resistance and increasing efficiency, and allowing speeds of 100+ MPH with only modestly powerful engines. Clearance on the sides will also be unnecessary, so lanes can be narrower and population increases can be accommodated without the need to build massive amounts of new highway infrastructure. It truly boggles my mind when someone says they don't see the point of self-driving cars, given all the potential benefits.
"I'm so fast that last night I turned off the light switch in my hotel room and was in bed before the room was dark." - Muhammad Ali

"Dating is not supposed to be easy. It's supposed to be a heart-pounding, stomach-wrenching, gut-churning exercise in pitting your fear of rejection and public humiliation against your desire to find a mate. Enjoy." - Darth Wong
The Duchess of Zeon
Gözde
Posts: 14566
Joined: 2002-09-18 01:06am
Location: Exiled in the Pale of Settlement.

Re: How'd you program the morality of auto-cars?

Post by The Duchess of Zeon »

Arthur_Tuxedo wrote:Once the human-driven cars are banned from the freeways (or perhaps given a lane to share with Amish horse-carts), cars can stack up for a slipstream effect, greatly reducing wind resistance and increasing efficiency, and allowing speeds of 100+ MPH with only modestly powerful engines. Clearance on the sides will also be unnecessary, so lanes can be narrower and population increases can be accommodated without the need to build massive amounts of new highway infrastructure. It truly boggles my mind when someone says they don't see the point of self-driving cars, given all the potential benefits.
And this part is where you drop off a cliff into la-la land. The slipstream effect is a 5% fuel economy gain, which translates into a pathetic reduction in horsepower, so the engine can't be called modest. Lane clearances are needed for these things called CARGO TRUCKS, so they're not going away either; even if the trucks themselves are autonomous, they need those 13 feet for their PAYLOAD. And a 5% fuel economy gain justifies building dedicated highways and stuffing it to the working poor by forcing them all to buy new vehicles that cost more than simple manually operated ones? Awesome! I hate autonomous car boosterism, and I have some good reasons to.
The threshold for inclusion in Wikipedia is verifiability, not truth. -- Wikipedia's No Original Research policy page.

In 1966 the Soviets find something on the dark side of the Moon. In 2104 they come back. -- Red Banner / White Star, a nBSG continuation story. Updated to Chapter 4.0 -- 14 January 2013.
salm
Rabid Monkey
Posts: 10296
Joined: 2002-09-09 08:25pm

Re: How'd you program the morality of auto-cars?

Post by salm »

The Duchess of Zeon wrote: And this part is where you drop off a cliff into la-la land. The slipstream effect is a 5% fuel economy gain, which translates into a pathetic reduction in horsepower, so the engine can't be called modest. Lane clearances are needed for these things called CARGO TRUCKS, so they're not going away either; even if the trucks themselves are autonomous, they need those 13 feet for their PAYLOAD. And a 5% fuel economy gain justifies building dedicated highways and stuffing it to the working poor by forcing them all to buy new vehicles that cost more than simple manually operated ones? Awesome! I hate autonomous car boosterism, and I have some good reasons to.
It would only put an unreasonable burden on the poor if the laws were implemented extremely badly. Not that this is impossible - lawmakers can be rather stupid - but that would just be the lawmakers' fault and nothing specific to autonomous cars.
I doubt the transition will be abrupt, though. Usually changes like this are given a decade or so of shifting from the old system to the new one, and I see no reason why this shouldn't be done in the case of robot cars. I mean, they managed it with light bulbs, so why not cars?

As for cargo trucks, I always thought they would be the first to be replaced, because it is easier due to their static routes, and very desirable because they could be operated around the clock without having to wait until the driver has finished sleeping.
The Duchess of Zeon
Gözde
Posts: 14566
Joined: 2002-09-18 01:06am
Location: Exiled in the Pale of Settlement.

Re: How'd you program the morality of auto-cars?

Post by The Duchess of Zeon »

Try replaced last, because of the increased safety concerns around semi-trucks and the higher standard of training for their drivers. People who are okay with driverless cars will, somewhat justly, be far more concerned about driverless trucks. And if you think the routes of American trucking are predictable, there's some utterly beautiful lakefront property I have in western Utah that totally isn't carpeted in rotting brine shrimp. It is a charter, on-demand, highly responsive business with numerous delays, reorientations, and detours that regular cars will not have to deal with.

Also, you ignored what I actually said! I said that the lanes could not get smaller because of the need to haul a certain volumetric payload, which has a certain loading gauge which will not magically get smaller if the truck is autonomous. This is especially true for oversized loads.

Next up, how does the autonomous car handle chaining up to go over mountain passes in winter? Let alone the autonomous truck, because chain-up is required for trucks much more often than for cars? It's one thing to automate city buses, but long haul trucking? If I'm still alive in 2100 maybe I'll see some of those on the road.

Note: Drone trucking is an entirely different story.
The threshold for inclusion in Wikipedia is verifiability, not truth. -- Wikipedia's No Original Research policy page.

In 1966 the Soviets find something on the dark side of the Moon. In 2104 they come back. -- Red Banner / White Star, a nBSG continuation story. Updated to Chapter 4.0 -- 14 January 2013.
salm
Rabid Monkey
Posts: 10296
Joined: 2002-09-09 08:25pm

Re: How'd you program the morality of auto-cars?

Post by salm »

Eh, I'm just going by what I read in the media, and there have been plenty of claims from companies like Mercedes that robot trucks will arrive before robot cars. No matter which arrives first, they are also claiming that robot trucks could save up to 15% of fuel due to the improved aerodynamics of traveling in convoys.

Here's an article about robot trucks:

http://www.bbc.com/future/story/2014101 ... bie-trucks

I didn't address your comment about smaller streets because I didn't want to. There is no reason to. I don't think streets will get smaller any time soon. Trucks have to carry containers which are standardized and adapted for things other than trucking, like cargo shipping and cranes. Changing their size would cause lots of trouble.
I was mainly commenting on your statement that the poor would get fucked over.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: How'd you program the morality of auto-cars?

Post by Simon_Jester »

The poor will get fucked over if automated cars become a requirement or if it becomes impossible to find a high-speed highway that you can legally travel without an automated car. Because no matter what happens, an automated car will add some thousands of dollars of extra cost to the vehicle, and I doubt that anyone is going to step up and subsidize the extra cost for people who otherwise can't afford such a car.

If you can think of a way to finesse this problem, my compliments, and I'd love to hear about it.
This space dedicated to Vasily Arkhipov
bilateralrope
Sith Acolyte
Posts: 5958
Joined: 2005-06-25 06:50pm
Location: New Zealand

Re: How'd you program the morality of auto-cars?

Post by bilateralrope »

The Duchess of Zeon wrote:Next up, how does the autonomous car handle chaining up to go over mountain passes in winter? Let alone the autonomous truck, because chain-up is required for trucks much more often than for cars? It's one thing to automate city buses, but long haul trucking? If I'm still alive in 2100 maybe I'll see some of those on the road.
Just because the vehicle is driving itself doesn't remove the possibility of a human being on board to handle problems that the automated vehicle can't. For example, I can't see robot cars being able to change flat tires for a while.
Simon_Jester wrote:The poor will get fucked over if automated cars become a requirement or if it becomes impossible to find a high-speed highway that you can legally travel without an automated car. Because no matter what happens, an automated car will add some thousands of dollars of extra cost to the vehicle, and I doubt that anyone is going to step up and subsidize the extra cost for people who otherwise can't afford such a car.
That depends on how long it takes to make automated cars mandatory. If it's done quickly, that is a problem. If it takes long enough, second-hand automated cars should become cheap enough to be affordable to everybody.
Arthur_Tuxedo
Sith Acolyte
Posts: 5637
Joined: 2002-07-23 03:28am
Location: San Francisco, California

Re: How'd you program the morality of auto-cars?

Post by Arthur_Tuxedo »

The Duchess of Zeon wrote: And this part is where you drop off a cliff into la-la land. The slipstream effect is a 5% fuel economy gain, which translates into a pathetic reduction in horsepower, so the engine can't be called modest. Lane clearances are needed for these things called CARGO TRUCKS, so they're not going away either; even if the trucks themselves are autonomous, they need those 13 feet for their PAYLOAD. And a 5% fuel economy gain justifies building dedicated highways and stuffing it to the working poor by forcing them all to buy new vehicles that cost more than simple manually operated ones? Awesome! I hate autonomous car boosterism, and I have some good reasons to.
A quick Google search shows more like 20%, and that's behind a single truck at bumper distances safe for a human driver. I know from experience that the amount of effort required to keep race pace in cycling goes from trivial to impossible when I pop off the back and get the wind in my face. Anyway, wide vehicles and oversized loads should be trivial for a V2V network (which could handle dynamic number and width of lanes) to accommodate.

As far as hurting the poor, the fully-automated highway I'm talking about is 20 years away at the bare minimum, and probably more like 30-50. There's simply no way that in 20 years the necessary sensors and processing power would add more cost than the amount saved by eliminating all cabin controls and linkages, all but the most basic safety equipment, the transmission and most suspension components, and most of the horsepower (partly because of slipstreaming, partly weight-savings, but mostly because people won't demand hundreds of HP when they aren't pushing the pedal). The way I see it, a bare-bones fully autonomous vehicle could cost half or less of a current economy car.

I'm curious to understand your animus toward self-driving car boosterism, because from where I sit, it seems like a strong early candidate for the most transformative technology of the 21st century.
"I'm so fast that last night I turned off the light switch in my hotel room and was in bed before the room was dark." - Muhammad Ali

"Dating is not supposed to be easy. It's supposed to be a heart-pounding, stomach-wrenching, gut-churning exercise in pitting your fear of rejection and public humiliation against your desire to find a mate. Enjoy." - Darth Wong
The Duchess of Zeon
Gözde
Posts: 14566
Joined: 2002-09-18 01:06am
Location: Exiled in the Pale of Settlement.

Re: How'd you program the morality of auto-cars?

Post by The Duchess of Zeon »

It's a cheap band-aid on a profoundly unsustainable and energy-intensive automotive lifestyle. I'd rather that public transportation was good enough that cars were what they should be: an excursion vehicle for the middle class to go on trips to obscure areas, and a mode of transportation for the 3% of the population left on farms. Automated cars increase consumption of resource-limited rare-earth metals with their extensive electronics, produce more hideously polluting electronic waste, and provide a band-aid that prolongs the existence of suburbs, massive houses, and commuting distances that are unsustainable in terms of fuel consumption. I would rather the technology never be implemented and we instead build lots of rail and lots of buses. Automated cars solve a problem which doesn't exist, and convince people that a 5% reduction in CO2, with increased toxic waste, is better than a 100% reduction in CO2 and greenhouse gases for commuting transportation - 90% overall - with reduced toxic waste.

I will also be honest and say that beyond these legitimate complaints, driving is by far and away the thing that makes me happiest in all of the world. There is a beauty and precision to controlling eighteen hundred pounds of metal traveling at thirty meters per second, anticipating every curve and gear shift and seeing the perfect union of human and machine as a responsive set. Perhaps I grew up listening to too much Rush, but I'm appalled at the fact that others salivate over banning cars when for a profoundly long time it was the only pleasure in my life, and even as I've journeyed to healthier places it remains so very important to my happiness. At 43mpg I can arbitrarily check out of my house and my life and eight hours later be in vastly different climatic zones, empty places where the din of humanity is a muted and distant thing instead of an omnipresent drone. The idea of replacing that with a world where I have to put my life in the hands of the machine, which may be safer but which guarantees that if I die it was not by my own failure and responsibility, but as the hapless creature of another will, is sort of like the future sinking into a dystopian madhouse. So yes, while I think my critique has validity and that we won't realize this technology to the level you dream of for most of this century, I will also admit that I dread its realization and profoundly hope I'm dead first.

Also it's another step forward into a world with fewer jobs for average, normal sorts of people, and I think ultimately this is going to create an underclass which is going to cause vast social problems, so I favour the replacement of capitalism with a system, perhaps based on the arts and crafts movement, which voluntarily supports large economic inefficiency to preserve employment, as that creates a rewarding life for the average person that being on the dole will never give. This position can more or less be summarized by Stan Rogers' The Idiot, and, if you think I'm an idiot for holding that view, that people obtain so much objective satisfaction from a job well done that they deserve the right to work one even if their job could be better done by a machine, well, you're welcome to think me an idiot, surely. I'm proud of it.
The threshold for inclusion in Wikipedia is verifiability, not truth. -- Wikipedia's No Original Research policy page.

In 1966 the Soviets find something on the dark side of the Moon. In 2104 they come back. -- Red Banner / White Star, a nBSG continuation story. Updated to Chapter 4.0 -- 14 January 2013.
edaw1982
Padawan Learner
Posts: 181
Joined: 2011-09-23 03:53am
Location: Orkland, New Zealand

Re: How'd you program the morality of auto-cars?

Post by edaw1982 »

The AI of your car will make the best decision relevant to you. "If I do X then Y happens and my occupants die. This parses as unacceptable, as my primary programming is to prevent death or excessive damage to my occupants. Therefore I will attempt the action that will cause the least damage to my occupants."

If that happens to be 'run into the pedestrians' then that's what the computer will choose if it deems it to be the 'mathematically logical answer'.
You can't have a machine in charge of your vehicle worrying about anyone else's safety but your own.
"Put book front and center. He's our friend, we should honour him. Kaylee, find that kid who's taking a dirt-nap with baby Jesus. We need a hood ornment. Jayne! Try not to steal too much of their sh*t!"
AniThyng
Sith Devotee
Posts: 2760
Joined: 2003-09-08 12:47pm
Location: Took an arrow in the knee.

Re: How'd you program the morality of auto-cars?

Post by AniThyng »

edaw1982 wrote:The AI of your car will make the best decision relevant to you. "If I do X then Y happens and my occupants die. This parses as unacceptable, as my primary programming is to prevent death or excessive damage to my occupants. Therefore I will attempt the action that will cause the least damage to my occupants."

If that happens to be 'run into the pedestrians' then that's what the computer will choose if it deems it to be the 'mathematically logical answer'.
You can't have a machine in charge of your vehicle worrying about anyone else's safety but your own.
This is precisely the sort of choice the programmer or PM will have to defend in court if the pedestrian or the state prosecutes for vehicular manslaughter...

edit: even if you pin it on the party that caused the car to take that course of action, it's still going to have to be addressed at some point whether the car should be programmed to put the safety of the occupants above others.
I do know how to spell
AniThyng is merely the name I gave to what became my favourite Baldur's Gate II mage character :P