How'd you program the morality of auto-cars?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

How'd you program the morality of auto-cars?

Post by madd0ct0r »

http://www.highwaysindustry.com/driverl ... ot-swerve/
Picture the scene: it's 2030, and a driverless car is taking its owner to work one sunny morning. Seemingly from nowhere, a bus pulls out of a junction in front of it. It's a school bus. To the left of the vehicle, pedestrians occupy the pavement, and a cyclist is making his or her own way to the office for the day. To the right, a moving lane of traffic heading in the opposite direction.

The question is then simple to ask, but virtually impossible to answer: what on earth does our autonomous vehicle do? Where does it turn? Should it turn at all? Does it stop? If so, does it risk killing its own passengers?

The possible introduction of such vehicles to our roads as an everyday mode of transport will spark debate and conversation that could go on for years, and probably will.

The ethical dilemmas that someone, somewhere, somehow has to program these vehicles to resolve could cause uproar, as an algorithm for sacrificing some before others must surely at some point be incorporated into the vehicle's system to handle such an occurrence. The cases of unavoidable damage and chaos may be rare, but they are there, creating a huge problem for companies looking to lead the driverless generation. After all, none of these companies wants a high-profile, costly court case questioning its morality. But surely someone has to be held accountable?

So if and when an accident occurs, who will be to blame? It's probably fair to say that every single one of us believes, and accepts, that humans make mistakes from time to time. But would we accept the same from a robot? A robot that has been invested in heavily, in terms of both labour and finance, no less. The expectation of perfection would certainly be higher.

When an accident does occur, the question will always be asked: "Would that have happened had a person been driving?" Some will simply make up their minds with a statement rather than the question and bat for 'team human' with something along the lines of "A person wouldn't have done that."

So the question remains: who's to blame in an accident? Is it the government, for even letting these things on our roads? Is it the company, for building a machine incapable of such complex decisions? Is it down to an individual programmer or developer, for not getting the algorithm 100% perfect? Even if the vehicle makes the statistically least damaging decision, how do we justify it if it causes the loss of even one human life?

There's no doubt that all autonomous vehicle manufacturers must face these dilemmas before they can truly master the driverless car and its implementation into our lives as a 'norm'. The cases of unavoidable damage in situations with only negative outcomes are surely the biggest issue surrounding this technology.

Just for a second, put yourself in the position of having to implement the programme in a vehicle that makes the choice between hitting a small car potentially containing a family, or hitting a large truck and potentially sacrificing everyone in your own vehicle. How do you make the moral justification for that decision when you're held accountable?

To swerve? Or not to swerve? Those are the very real, very serious questions.
Bearing in mind that complex risk-to-passenger calculations à la Asimov are probably impossible in the timeframe, what would you pick out as the key things to form the basis for the decision?
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Jub
Sith Marauder
Posts: 4396
Joined: 2012-08-06 07:58pm
Location: British Columbia, Canada

Re: How'd you program the morality of auto-cars?

Post by Jub »

No human can say they really made a conscious and informed choice in a split-second life-or-death situation, no matter what they might claim. So you should program the car first and foremost to avoid those situations, and then to protect its passengers as best it is able. If anybody complains, make the case that the accident wasn't your car's fault at all, and that the programming used saves more lives on average than will be lost in the rare unavoidable accidents.
LaCroix
Sith Acolyte
Posts: 5193
Joined: 2004-12-21 12:14pm
Location: Sopron District, Hungary, Europe, Terra

Re: How'd you program the morality of auto-cars?

Post by LaCroix »

Injury probability:

Hitting the pedestrians: near-certain death for all of them.
Going into opposing traffic? A mass pile-up, which will cause a random number of deaths, depending on the number of people in the cars I hit, how many cars crash into them, and whether some of them choose to swerve into the pedestrians as well... Too random for me.

Hitting the brakes and trying to hit the bus at the front (where the engine is) will cause the least damage.

Even if you can't hit the front (which I doubt, because if you have time to swerve to avoid it, you certainly have time to choose a hit location), hitting the bus where you see the fewest children will only affect a few kids. Aiming for the wheels is also a safer option, as it will stop you better. Also, as you ram the bus well below seat level, there is still some metal between the kids and your car.
A minute's thought suggests that the very idea of this is stupid. A more detailed examination raises the possibility that it might be an answer to the question "how could the Germans win the war after the US gets involved?" - Captain Seafort, in a thread proposing a 1942 'D-Day' in Quiberon Bay

I do archery skeet. With a Trebuchet.
Darth Tanner
Jedi Master
Posts: 1445
Joined: 2006-03-29 04:07pm
Location: Birmingham, UK

Re: How'd you program the morality of auto-cars?

Post by Darth Tanner »

I always see these scenarios drawn up against driverless cars, and it strikes me as obvious that the course of action should be an emergency stop, with an impact into the bus if that is inevitable. You should not be swerving into oncoming traffic or pedestrians to avoid a motor vehicle, as you're increasing the chance of a fatality and spreading the incident to non-participants. If you have the fine control needed to choose an impact zone, then obviously the front of the bus or the wheels would be preferred, but I seriously doubt any human would have the time to make such a decision. Also, a bus pulling out on you implies you're going to hit the front anyway; if you are hitting the rear of the bus, then you are likely speeding for the road conditions and contributing to the accident, which a driverless car should not be doing.

The fault for the incident lies solely with the bus driver for pulling out; legal responsibility would lie solely with them.

The problem of where legal responsibility lies when the driverless car is truly at fault (say it drives smack into a wall because it's painted a colour that confuses its camera) is the legal battle that must be fought at present. I'd argue such software or hardware faults would be the manufacturer's responsibility, as long as the vehicle was maintained properly; no different from the manufacturer being responsible for a crash if its engine exploded due to a design fault.
Get busy living or get busy dying... unless there’s cake.
User avatar
Lagmonster
Master Control Program
Posts: 7719
Joined: 2002-07-04 09:53am
Location: Ottawa, Canada

Re: How'd you program the morality of auto-cars?

Post by Lagmonster »

What pisses me off is that these articles put pressure on the reader to assume that robots have to be flawless, or make these kinds of complex moral or philosophical fat-man-onto-the-rail-tracks decisions at all.

The reality is that humans kill and maim so many people with cars that the baseline for "good enough that they should replace humans" is already pretty goddamned high. I don't know off-hand how many people die due to human error in automobile accidents, but if robots kill even half that many, you're still a wild idiot if you claim that robots should be kept off the road until they're better. Sure, you personally might feel that you could have prevented an accident you end up in due to a robot, but the odds are still against you.
TheFeniX
Sith Marauder
Posts: 4869
Joined: 2003-06-26 04:24pm
Location: Texas

Re: How'd you program the morality of auto-cars?

Post by TheFeniX »

I can usually tell when someone is about to cut me off. It's not exactly rocket science: look at their wheels. I would assume a robot cam could predict the likelihood of a person pulling out and make judgements like... I don't know... slowing down in advance? Like normal people do?

What kind of idiot do you have to be to consider swerving into another car to possibly avoid a wreck? That's the dumbest shit I've read today (though it is only 10am). You not only risk killing someone and yourself, but also both of you losing control of your vehicles and causing another wreck.

The biggest killer in driving is people not paying attention: getting lost in their heads, messing with the radio, etc. So when something does happen, they have no idea of their current surroundings and have to make a split-second decision based on next to no information. One would hope a computer wouldn't have this issue.
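For what it's worth, that "look at their wheels" heuristic is simple enough to sketch. A toy Python version, where the 15-degree threshold and the perception inputs are invented for illustration, not taken from any real system:

```python
# Toy sketch: infer "about to pull out" from wheel angle at a junction,
# and ease off pre-emptively instead of waiting for the cut-off.
def adjust_speed(current_speed_ms, nearby_vehicles):
    for v in nearby_vehicles:
        # Front wheels turned toward our lane while waiting at a junction
        # is a strong signal that the driver is about to pull out.
        if v["at_junction"] and abs(v["wheel_angle_deg"]) > 15:
            return current_speed_ms * 0.7  # slow down in advance
    return current_speed_ms

print(adjust_speed(15.0, [{"at_junction": True, "wheel_angle_deg": 25}]))  # 10.5
```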
StandingInFire
Redshirt
Posts: 16
Joined: 2013-08-31 06:56pm

Re: How'd you program the morality of auto-cars?

Post by StandingInFire »

The conditions:
A) From nowhere, a bus pulls out of a junction in front of it. (large object)
B) To the left, pedestrians occupy the pavement, along with a cyclist. (people)
C) To the right, a moving lane of traffic heading in the opposite direction. (medium-sized, fast-moving objects)
For A to be true, the bus would either have to be traveling at very high speed (meaning it will collide with the opposing lane regardless), or have had its line of sight to the self-driving car blocked and then slammed on its brakes so that it blocks only half the intersection. Otherwise the car should have been able to slow before the collision, as these vehicles tend to have 360-degree vision.

Even ignoring the silly combination of circumstances needed for the example, a self-driving car facing an imminent collision would likely be programmed to:
1) Find a path to avoid collision.
2) Slow down as much as possible.
The example given means #1 is impossible, so it would likely just do #2; a sketch of both steps follows below. It will be a long time before the sensors, and the computing power to parse their output, allow much more than that.

A simplified view of the situation gives one large stationary object to collide with, people (near stationary), and fast-moving objects. As the severity of a collision scales with the kinetic energy involved (0.5*mass*velocity^2), the lowest-energy collision is with the stationary bus; you don't know whether people are in it, but you certainly won't program the car to hit the people you can see.
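In rough Python, that decision order plus the energy comparison might look like the sketch below. Everything here (masses, speeds, the obstacle fields) is an assumption for illustration, not any real vehicle API:

```python
# Sketch of the priority above: avoid if possible, else brake toward the
# lowest-energy, non-person target. Masses and speeds are illustrative only.
def kinetic_energy(mass_kg, closing_speed_ms):
    return 0.5 * mass_kg * closing_speed_ms ** 2  # joules

def choose_action(obstacles, own_speed_ms, own_mass_kg=1500):
    # 1) Any path that avoids a collision entirely wins outright.
    for o in obstacles:
        if o["avoidable_path"]:
            return ("steer", o["avoidable_path"])
    # 2) Otherwise brake hard and, if impact is unavoidable, pick the
    #    non-person obstacle with the lowest closing-speed energy.
    candidates = [o for o in obstacles if not o["is_person"]]
    target = min(candidates,
                 key=lambda o: kinetic_energy(own_mass_kg,
                                              own_speed_ms - o["speed_ms"]))
    return ("brake_toward", target["id"])

obstacles = [
    {"id": "bus", "is_person": False, "speed_ms": 0.0, "avoidable_path": None},
    {"id": "oncoming", "is_person": False, "speed_ms": -15.0, "avoidable_path": None},
    {"id": "pedestrians", "is_person": True, "speed_ms": 0.0, "avoidable_path": None},
]
# Stationary bus: closing speed 15 m/s. Oncoming car: 30 m/s, i.e. four
# times the energy. The bus is the least-bad unavoidable impact.
print(choose_action(obstacles, own_speed_ms=15.0))  # ('brake_toward', 'bus')
```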
bilateralrope
Sith Acolyte
Posts: 5938
Joined: 2005-06-25 06:50pm
Location: New Zealand

Re: How'd you program the morality of auto-cars?

Post by bilateralrope »

So if and when an accident occurs, who will be to blame?
In the scenario given in the article, I'd blame the bus driver, unless the bus driver can blame something else, like a mechanical fault. Is there anyone here who disagrees?
Lagmonster wrote:What pisses me off is that these articles put pressure on the reader to assume that robots have to be flawless, or make these kinds of complex moral or philosophical fat-man-onto-the-rail-tracks decisions at all.
Agreed. Even if someone can point to some highly specific scenarios where a robot would do worse than a human every time (which the article does not do), if the robot car would do better the vast majority of the time, then the robot car still wins. Fortunately, insurance companies understand statistics, so I expect to see robot-car adoption driven, at least in part, by reduced insurance premiums.
LaCroix wrote:Hitting the brakes and trying to hit the bus at the front (where the engine is) will cause the least damage.
The sooner you hit the brakes, the better you will do. Which comes down to reaction time. So the computer wins.
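A back-of-the-envelope illustration with assumed round numbers (50 km/h, 8 m/s^2 of braking, 1.5 s human reaction time versus 0.1 s for a computer):

```python
# Stopping distance = reaction distance + braking distance:
#   d = v * t_react + v**2 / (2 * a)
v = 50 / 3.6   # 50 km/h in m/s
a = 8.0        # assumed braking deceleration, m/s^2

for driver, t_react in [("computer", 0.1), ("human", 1.5)]:
    d = v * t_react + v ** 2 / (2 * a)
    print(f"{driver}: {d:.1f} m")
# computer: 13.4 m, human: 32.9 m. Reacting sooner alone cuts the
# stopping distance by more than half.
```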
Even if you can't hit the front (which I doubt, because if you have time to swerve to avoid it, you certainly have time to choose a hit location), hitting the bus where you see the fewest children will only affect a few kids. Aiming for the wheels is also a safer option, as it will stop you better. Also, as you ram the bus well below seat level, there is still some metal between the kids and your car.
Would a human driver have the reaction time to make that decision?

Would a robot car be paying attention to the bus passengers?
AniThyng
Sith Devotee
Posts: 2760
Joined: 2003-09-08 12:47pm
Location: Took an arrow in the knee.

Re: How'd you program the morality of auto-cars?

Post by AniThyng »

The scenario is rather contrived, but the core question of who is ultimately liable when an unexpected situation results in injuries or fatalities is still there. I suppose it's not new, insofar as autopilots have basically been doing the same thing, but it needs to be made very, very clear that if a robot car which I own and am travelling in is involved in an accident, it cannot be me who is liable if the fault lies in a decision made by the robot car's logic.

Come to that, what if a software update were released that changed the logic for the better, but I did not upgrade, and an accident occurred that the update would have prevented? Am I therefore liable?
I do know how to spell
AniThyng is merely the name I gave to what became my favourite Baldur's Gate II mage character :P
Sky Captain
Jedi Master
Posts: 1267
Joined: 2008-11-14 12:47pm
Location: Latvia

Re: How'd you program the morality of auto-cars?

Post by Sky Captain »

TheFeniX wrote:What kind of idiot do you have to be to consider swerving into another car to possibly avoid a wreck? That's the dumbest shit I've read today (though it is only 10am). You not only risk killing someone and yourself, but also both of you losing control of your vehicles and causing another wreck.
This happens all the time with human drivers. It can be as simple as a dog or cat running in front of the car: the human sees the obstacle, panics, tries to swerve to avoid the collision, and ends up hitting opposing traffic or pedestrians on the sidewalk, even though running over the animal would have caused the least damage. The human brain just does not work well when it has to make split-second decisions.
LaCroix
Sith Acolyte
Posts: 5193
Joined: 2004-12-21 12:14pm
Location: Sopron District, Hungary, Europe, Terra

Re: How'd you program the morality of auto-cars?

Post by LaCroix »

bilateralrope wrote:
LaCroix wrote:Hitting the brakes and trying to hit the bus at the front (where the engine is) will cause the least damage.
The sooner you hit the brakes, the better you will do. Which comes down to reaction time. So the computer wins.
Even if you can't hit the front (which I doubt, because if you have time to swerve to avoid it, you certainly have time to choose a hit location), hitting the bus where you see the fewest children will only affect a few kids. Aiming for the wheels is also a safer option, as it will stop you better. Also, as you ram the bus well below seat level, there is still some metal between the kids and your car.
Would a human driver have the reaction time to make that decision?

Would a robot car be paying attention to the bus passengers?
The question was how to program a robot car for such situations.
Since I've programmed a lot of robotics myself, I can definitely say that:
a) A computer will always react faster than a human. It will most likely have gone through all the calculations and initiated its actions by the time the picture has even registered in our brain.
b) A computer will always have the option of finding the best impact point, in situations where a human can do so only if he's a really, really fast thinker.
c) Any robot able to navigate a car sufficiently well will most probably be able to detect the people inside another vehicle, if you want it to. After all, it needs to detect pedestrians (by shape, image, radar cross-section, temperature, whatnot; you don't want such a system relying on a single sensor, so there is most likely a multitude of sensors working at any time). Programming it to aim for the sections most likely to be uninhabited in an unavoidable crash (like the wheels or the engine compartment of a bus) would be only a short step from there.
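To make (c) concrete, the final step could be sketched like this, assuming an idealized perception layer that already labels bus sections with estimated occupancy (all names and numbers here are invented):

```python
# Pick the least-occupied, best-shielded section of the bus to hit.
BUS_SECTIONS = [
    # (section, estimated occupants behind it, structural shielding score)
    ("engine_compartment", 0, 2.0),  # metal between impact and the seats
    ("front_axle",         0, 1.5),  # wheels stop the car more effectively
    ("mid_cabin",          6, 0.0),
    ("rear_cabin",         4, 0.0),
]

def best_impact_section(sections):
    # Fewest occupants first; more shielding breaks ties.
    return min(sections, key=lambda s: (s[1], -s[2]))[0]

print(best_impact_section(BUS_SECTIONS))  # -> engine_compartment
```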
A minute's thought suggests that the very idea of this is stupid. A more detailed examination raises the possibility that it might be an answer to the question "how could the Germans win the war after the US gets involved?" - Captain Seafort, in a thread proposing a 1942 'D-Day' in Quiberon Bay

I do archery skeet. With a Trebuchet.
salm
Rabid Monkey
Posts: 10296
Joined: 2002-09-09 08:25pm

Re: How'd you program the morality of auto-cars?

Post by salm »

Sky Captain wrote: This happens all the time with human drivers. It can be as simple as a dog or cat running in front of the car: the human sees the obstacle, panics, tries to swerve to avoid the collision, and ends up hitting opposing traffic or pedestrians on the sidewalk, even though running over the animal would have caused the least damage. The human brain just does not work well when it has to make split-second decisions.
I think one great possibility for avoiding even more damage is that robot cars could communicate with each other. So if one car, for whatever reason, has absolutely no option other than swerving into oncoming traffic, the oncoming traffic would know, and could react accordingly by swerving onto the breakdown lane or into a meadow, or at least brake as hard as possible to reduce the impact.
Or perhaps the oncoming traffic could swerve into the first car's lane and still avoid the obstacle the first car would have faced had it not swerved into oncoming traffic.

Cars coordinating among themselves would be even more effective than cars simply relying on sensor input. Obviously this would require high robot-car coverage: with only 10 percent or so it wouldn't be worth much, but if 90 or so percent of cars could communicate, it could be very effective.
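A minimal sketch of that coordination, assuming a made-up message format (real V2V work exists, e.g. DSRC, but the details here are illustrative only):

```python
# One car broadcasts its forced emergency manoeuvre; receivers that are
# in the claimed lane yield. Transport layer omitted; format is invented.
import json, time

def emergency_broadcast(car_id, claimed_lane):
    return json.dumps({"type": "EMERGENCY_SWERVE",
                       "car": car_id,
                       "claimed_lane": claimed_lane,
                       "timestamp": time.time()})

def on_receive(own_lane, message):
    data = json.loads(message)
    if data["type"] == "EMERGENCY_SWERVE" and data["claimed_lane"] == own_lane:
        return "brake_hard_and_pull_over"  # clear the lane being claimed
    return "continue"

msg = emergency_broadcast("car_42", claimed_lane="oncoming_1")
print(on_receive("oncoming_1", msg))  # -> brake_hard_and_pull_over
```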
Lagmonster
Master Control Program
Posts: 7719
Joined: 2002-07-04 09:53am
Location: Ottawa, Canada

Re: How'd you program the morality of auto-cars?

Post by Lagmonster »

I recall The Oatmeal's review of Google's self-driving car, where he described an event in which the robot, thanks to its sensors, was able to detect a person THROUGH an opaque obstacle and stop. Arguably, an out-of-control bus would have been spotted by the robot half a block away and accounted for.
AniThyng
Sith Devotee
Posts: 2760
Joined: 2003-09-08 12:47pm
Location: Took an arrow in the knee.

Re: How'd you program the morality of auto-cars?

Post by AniThyng »

LaCroix wrote:
bilateralrope wrote:
LaCroix wrote:Hitting the brakes and trying to hit the bus at the front (where the engine is) will cause the least damage.
The sooner you hit the brakes, the better you will do. Which comes down to reaction time. So the computer wins.
Even if you can't hit the front (which I doubt, because if you have time to swerve to avoid it, you certainly have time to choose a hit location), hitting the bus where you see the fewest children will only affect a few kids. Aiming for the wheels is also a safer option, as it will stop you better. Also, as you ram the bus well below seat level, there is still some metal between the kids and your car.
Would a human driver have the reaction time to make that decision?

Would a robot car be paying attention to the bus passengers?
The question was how to program a robot car for such situations.
Since I've programmed a lot of robotics myself, I can definitely say that:
a) A computer will always react faster than a human. It will most likely have gone through all the calculations and initiated its actions by the time the picture has even registered in our brain.
b) A computer will always have the option of finding the best impact point, in situations where a human can do so only if he's a really, really fast thinker.
c) Any robot able to navigate a car sufficiently well will most probably be able to detect the people inside another vehicle, if you want it to. After all, it needs to detect pedestrians (by shape, image, radar cross-section, temperature, whatnot; you don't want such a system relying on a single sensor, so there is most likely a multitude of sensors working at any time). Programming it to aim for the sections most likely to be uninhabited in an unavoidable crash (like the wheels or the engine compartment of a bus) would be only a short step from there.
Surely it's not too hard to think of a realistic situation where there are multiple possible actions, and where someone could realistically sue the programmer or company for using the wrong logic tree?
I do know how to spell
AniThyng is merely the name I gave to what became my favourite Baldur's Gate II mage character :P
LaCroix
Sith Acolyte
Posts: 5193
Joined: 2004-12-21 12:14pm
Location: Sopron District, Hungary, Europe, Terra

Re: How'd you program the morality of auto-cars?

Post by LaCroix »

AniThyng wrote:Surely it's not too hard to think of a realistic situation where there are multiple possible actions, and where someone could realistically sue the programmer or company for using the wrong logic tree?
Only if the robot never hit the brakes and ran into something at full speed, while you can prove that an average human would have done better. That means a failure of the system to detect the imminent crash.

But then, a lot of people crash into things each day without having time to react, so it's quite a high burden of proof.

You would need to prove that:

the danger was obvious to an average human
an average human would have had enough time to react properly
the average human would have had enough time to take a better option than what the robot did (e.g. swerving instead of full braking when there is safe room to do so)
- caveat: you would need to prove beyond doubt that the alternate option would have been safer (you cannot just say the oncoming traffic would have stopped in time; you must prove they were far enough away that they would have), and that it would have seemed reasonable to a human


Edit: you must remember that the robot car is not obliged to do whatever is needed to cause the least harm overall, as in driving over a cliff instead of crashing into a car because that way only its single passenger gets hurt instead of the additional four people in the car that just cut it off...

The car is liable according to the same rules a human driver would be. Also, it has a duty to keep its passengers reasonably safe. If it sacrifices its very own passengers in order to save others, THEN you'd have reason to sue, as it obviously did not act as the average driver would have.
A minute's thought suggests that the very idea of this is stupid. A more detailed examination raises the possibility that it might be an answer to the question "how could the Germans win the war after the US gets involved?" - Captain Seafort, in a thread proposing a 1942 'D-Day' in Quiberon Bay

I do archery skeet. With a Trebuchet.
Sky Captain
Jedi Master
Posts: 1267
Joined: 2008-11-14 12:47pm
Location: Latvia

Re: How'd you program the morality of auto-cars?

Post by Sky Captain »

LaCroix wrote:The car is liable according to the same rules a human driver would be. Also, it has a duty to keep its passengers reasonably safe. If it sacrifices its very own passengers in order to save others, THEN you'd have reason to sue, as it obviously did not act as the average driver would have.
How would a competently programmed robot act in a no-win situation where keeping its passengers alive would mean killing or seriously injuring other people? For example, a heavy truck from the opposite lane suddenly invades the lane the robot is driving in; there is not enough room to stop, and the only way to avoid the truck is to swerve right into the bicycle lane, hitting the people on bicycles.
Crashing into the truck would mean serious injury or death for the people riding in the robot car, while hitting the bicycles would keep the passengers alive but seriously injure or kill other people. A potential lawsuit in both cases.
An average human driver in a similar situation would likely panic and try to avoid the truck, probably without even noticing that there are other people to the right.
Darth Tanner
Jedi Master
Posts: 1445
Joined: 2006-03-29 04:07pm
Location: Birmingham, UK

Re: How'd you program the morality of auto-cars?

Post by Darth Tanner »

The only correct response is to emergency brake! Why are people obsessed with swerving into innocent pedestrians and suing driverless car manufacturers for not making their cars mass murderers by default!
Get busy living or get busy dying... unless there’s cake.
Arthur_Tuxedo
Sith Acolyte
Posts: 5637
Joined: 2002-07-23 03:28am
Location: San Francisco, California

Re: How'd you program the morality of auto-cars?

Post by Arthur_Tuxedo »

These scenarios are fairly silly. If you need to make a split-second decision with no optimal outcome in traffic, you've already violated countless safety practices. A self-driving car isn't going to blissfully barrel past a blind spot at full speed like so many morons and then be forced to slam on the brakes or swerve out of the way when a vehicle/person/object presents itself. Between its optical sensors, radar and lidar systems, and V2V relays from other vehicles, it would take a lot to create a hazard so sudden that a self-driving car would be forced to crash, and it's a virtually 100% guarantee that said hazard would be caused by gross human stupidity, not by another self-driving car.
"I'm so fast that last night I turned off the light switch in my hotel room and was in bed before the room was dark." - Muhammad Ali

"Dating is not supposed to be easy. It's supposed to be a heart-pounding, stomach-wrenching, gut-churning exercise in pitting your fear of rejection and public humiliation against your desire to find a mate. Enjoy." - Darth Wong
salm
Rabid Monkey
Posts: 10296
Joined: 2002-09-09 08:25pm

Re: How'd you program the morality of auto-cars?

Post by salm »

Arthur_Tuxedo wrote:These scenarios are fairly silly. If you need to make a split-second decision with no optimal outcome in traffic, you've already violated countless safety practices.
But that isn't necessarily true. In a system as complex as traffic, it is possible for unknown and unlikely things to happen which cannot be taken into account by a computer system. Perhaps some random shit suddenly falls onto the street, like a tree. A wild boar or a deer might appear out of nowhere, get hit by a car, gain some air, and fly directly into oncoming traffic's windshield.

Now, sure, these are very unlikely scenarios, and I am sure that robot cars are a lot safer in general than human drivers if implemented correctly. But the question is: in a "one of two people dies" scenario, how do you program the robot to kill the "correct" person?
Sky Captain
Jedi Master
Posts: 1267
Joined: 2008-11-14 12:47pm
Location: Latvia

Re: How'd you program the morality of auto-cars?

Post by Sky Captain »

Arthur_Tuxedo wrote:These scenarios are fairly silly. If you need to make a split-second decision with no optimal outcome in traffic, you've already violated countless safety practices. A self-driving car isn't going to blissfully barrel past a blind spot at full speed like so many morons and then be forced to slam on the brakes or swerve out of the way when a vehicle/person/object presents itself. Between its optical sensors, radar and lidar systems, and V2V relays from other vehicles, it would take a lot to create a hazard so sudden that a self-driving car would be forced to crash, and it's a virtually 100% guarantee that said hazard would be caused by gross human stupidity, not by another self-driving car.
That's the whole point: a robot car could end up forced into a no-win situation by the actions of careless human drivers, not by other robot cars. If most cars on the streets were robotic, I would expect serious accidents to be very rare, simply because robots would obey speed limits and traffic rules and would always be 100% focused on driving.
While most cars are still driven by humans, I can easily imagine some human drivers acting even more aggressively around robotic cars, precisely because the robot will do its best to avoid accidents. For example, a human driver aggressively changing lanes and forcing the robot to brake to avoid a collision, or cutting across a robot car in an intersection because the robot will brake to let him pass, and so on.
bilateralrope
Sith Acolyte
Posts: 5938
Joined: 2005-06-25 06:50pm
Location: New Zealand

Re: How'd you program the morality of auto-cars?

Post by bilateralrope »

salm wrote: the question is: in a "one of two people dies" scenario, how do you program the robot to kill the "correct" person?
How do you train human drivers to reliably kill the correct person in the same scenario?


Those situations are so highly specific and unpredictable that I can't see the programmers being able to predict them in advance. So the only option I can see is to give the robot car programming to handle as many situations as it can, then accept that it might do badly in some unforeseen circumstances (halting the vehicle should deal with most unforeseen circumstances), or even in some foreseeable circumstances, because they look identical to a more common situation until it's too late.

Remember, the standard robot cars should be held to is not one of making the correct choice every time. It's merely one of performing better than the average human driver would do in the same situation.
In a system that is as complex as traffic it is possible that unknown and unlikely things happen which can not be taken into account by a computer system.
Name one where a robot car would do worse than a human. Be specific.
Perhaps some random shit suddenly falls onto the street, like a tree.
Assumption: tree is far enough away that the car can be stopped before collision.
Robot car: Sudden obstruction on road. Reduce speed. Attempt to stop without hitting anything.
Human driver: Slower reaction time before hitting the brakes. High chance of panic slowing reactions further, or of evasive action taking the car outside its lane.

If the collision is unavoidable, the robot car might be able to do something to reduce injury. Like deploying airbags before impact (if that would reduce injury, which I do not know).
A wild boar or a deer might appear out of nowhere, get hit by a car, gain some air, and fly directly into oncoming traffic's windshield.
Assumption: The robot car's CPU and control systems are not damaged by the initial impact. Damage to external sensors occurs.
Robot car: Collision detected. Sensors damaged. Use remaining sensors to avoid collisions while halting the vehicle. Turn off the engine. Request a response from the passengers; if no response occurs* within 30 seconds, contact emergency services.
Human driver: Possibility of panic. Possibility of injury from the impact with the animal.

*The request for a response could be an annoying noise. The response would be pushing the button to turn it off.
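That recovery sequence is mechanical enough to sketch directly. A toy Python version, where every vehicle hook is a hypothetical stand-in:

```python
import time

class CarStub:
    """Hypothetical hooks so the routine below actually runs."""
    def __init__(self, passenger_responds):
        self.passenger_responds = passenger_responds
    def limp_to_stop(self): print("halting on remaining sensors")
    def engine_off(self): print("engine off")
    def sound_alert(self): print("alert sounding (button press cancels)")
    def alert_acknowledged(self): return self.passenger_responds
    def call_emergency_services(self): print("contacting emergency services")

def post_collision_routine(car, timeout_s=30):
    car.limp_to_stop()
    car.engine_off()
    car.sound_alert()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if car.alert_acknowledged():
            return "passenger_ok"
        time.sleep(0.5)
    car.call_emergency_services()
    return "help_requested"

print(post_collision_routine(CarStub(passenger_responds=True), timeout_s=1))
```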
Sky Captain wrote:While most cars are still driven by humans, I can easily imagine some human drivers acting even more aggressively around robotic cars, precisely because the robot will do its best to avoid accidents. For example, a human driver aggressively changing lanes and forcing the robot to brake to avoid a collision, or cutting across a robot car in an intersection because the robot will brake to let him pass, and so on.
The robot car would have a video record of each time that happens, so I'm sure the police could figure out a way to discourage those drivers.
AniThyng
Sith Devotee
Posts: 2760
Joined: 2003-09-08 12:47pm
Location: Took an arrow in the knee.

Re: How'd you program the morality of auto-cars?

Post by AniThyng »

When a human driver makes a decision and kills the 'wrong' person, it quite possibly goes to trial, with the driver in the dock, to determine whether he did in fact act properly. If the robot car does something and causes injury, who goes to trial? The project manager? The programmer? The passenger? Who gets to sit there and testify that their code made the car take the optimal decision that any reasonable person would also have taken?
I do know how to spell
AniThyng is merely the name I gave to what became my favourite Baldur's Gate II mage character :P
Darth Tanner
Jedi Master
Posts: 1445
Joined: 2006-03-29 04:07pm
Location: Birmingham, UK

Re: How'd you program the morality of auto-cars?

Post by Darth Tanner »

While most cars are still driven by humans, I can easily imagine some human drivers acting even more aggressively around robotic cars, precisely because the robot will do its best to avoid accidents. For example, a human driver aggressively changing lanes and forcing the robot to brake to avoid a collision, or cutting across a robot car in an intersection because the robot will brake to let him pass, and so on.
I remember once, in a discussion on the Guardian, someone arguing that children would run in front of automated cars to force them to stop. I did wonder if he thought human drivers should not stop if a child runs in front of them, and should instead run them over?

Are you arguing that if someone cuts across an intersection and forces a human driver to brake or hit them, a robot car automatically braking is somehow worse than a human having the option not to brake and cause an accident?
Get busy living or get busy dying... unless there’s cake.
salm
Rabid Monkey
Posts: 10296
Joined: 2002-09-09 08:25pm

Re: How'd you program the morality of auto-cars?

Post by salm »

bilateralrope wrote: How do you train human drivers to reliably kill the correct person in the same scenario?
Umm... you don't?
Those situations are so highly specific and unpredictable that I can't see the programmers being able to predict them in advance. So the only option I can see is to give the robot car programming to handle as many situations as it can, then accept that it might do badly in some unforeseen circumstances (halting the vehicle should deal with most unforeseen circumstances), or even in some foreseeable circumstances, because they look identical to a more common situation until it's too late.

Remember, the standard robot cars should be held to is not one of making the correct choice every time. It's merely one of performing better than the average human driver would do in the same situation.
That's pretty much the point. It is unpredictable for the programmer, so they have to find a way to make the robot decide in unique situations. Simply having a standard reaction, like hitting the brakes, isn't a good solution, because the standard reaction might be the worst possible reaction.
Name one where a robot car would do worse than a human. Be specific.
Why?
Assumption: tree is far enough away that the car can be stopped before collision.
Robot car: Sudden obstruction on road. Reduce speed. Attempt to stop without hitting anything.
Human driver: Slower reaction time before hitting the brakes. High chance of panic slowing reactions further, or of evasive action taking the car outside its lane.

If the collision is unavoidable, the robot car might be able to do something to reduce injury. Like deploying airbags before impact (if that would reduce injury, which I do not know).

Assumption: The robot car's CPU and control systems are not damaged by the initial impact. Damage to external sensors occurs.
Robot car: Collision detected. Sensors damaged. Use remaining sensors to avoid collisions while halting the vehicle. Turn off the engine. Request a response from the passengers; if no response occurs* within 30 seconds, contact emergency services.
Human driver: Possibility of panic. Possibility of injury from the impact with the animal.

*The request for a response could be an annoying noise. The response would be pushing the button to turn it off.
That is obvious.
The tree and the wild boar were examples of unpredictable factors in traffic that can lead to the system having to decide which person dies. I am sure one can come up with other realistic examples in which such a decision needs to be made.
Comparing humans to robot drivers is rather trivial and not very interesting, because robots can be made to perform a lot better than humans anyway.
I am interested in how a robot would choose one person to die and another to live in a scenario that requires at least one death.
I assume that if the system comes to the conclusion that both people have the same chance of death, it just picks a random target. But perhaps other people would like to see other "value" categories, like age, for example. I am sure that racists would rather kill the black guy than the white guy, and again other people would rather kill the man than the woman.
Perhaps if the system knew that one of the targets had cancer and was likely to die soon anyway, it would pick the cancer candidate.
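The "equal odds, pick at random" assumption is about the only version that is simple to write down. A purely illustrative sketch, deliberately weighting every person identically (any demographic weighting would be a policy choice, not an engineering one):

```python
import random

def pick_option(options):
    """options: list of (action, estimated fatality probability)."""
    lowest = min(p for _, p in options)
    tied = [action for action, p in options if p == lowest]
    return random.choice(tied)  # random tie-break between equal-harm options

print(pick_option([("swerve_left", 0.9), ("swerve_right", 0.9)]))    # either one
print(pick_option([("brake_straight", 0.3), ("swerve_right", 0.9)])) # brake_straight
```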
Sky Captain
Jedi Master
Posts: 1267
Joined: 2008-11-14 12:47pm
Location: Latvia

Re: How'd you program the morality of auto-cars?

Post by Sky Captain »

Darth Tanner wrote:I remember once, in a discussion on the Guardian, someone arguing that children would run in front of automated cars to force them to stop. I did wonder if he thought human drivers should not stop if a child runs in front of them, and should instead run them over?

Are you arguing that if someone cuts across an intersection and forces a human driver to brake or hit them, a robot car automatically braking is somehow worse than a human having the option not to brake and cause an accident?
The difference is that the robot will be programmed to do its best to avoid accidents and will always be alert. Your average human driver, not so much: he is probably speeding, talking on a mobile phone, texting, checking emails, fiddling with the radio, and in general not 100% focused on driving.
If you are an asshole driver, you may respect human drivers more precisely because they are not paying full attention to traffic and may not let you pass. You may be more willing to exploit a robot car, because the robot is always alert and programmed to avoid accidents, so it will always let you pass.
bilateralrope wrote:The robot car would have a video record of each time that happens, so I'm sure the police could figure out a way to discourage those drivers.
That could be a potential solution.