bilateralrope wrote:It occurs to me that the 'morality' programmed into a robot car will not be utilitarian ethics. It will be a rule based ethical system, because that's easier to implement in software.
It won't be *any* ethical system.
This entire notion that an automated car makes 'ethical' decisions is based on a false concept, an anthropomorphization, of the vehicle. It's like asking what the ethics of a horse are; the horse hasn't got any. The horse will, of its own accord, seek to avoid collisions and situations where it might damage itself. But there isn't *anything* in the horse's brain that could even begin to say "running smack into a tree is less wrong than running smack into a human being."
It's like asking what ethics a Vroomba would use if it were in a position where making the 'wrong' vacuuming decision could cost lives. The Vroomba doesn't use any ethics; it's a subsentient mechanism following preprogrammed directives. Even if you *did* program it to "act ethically," what you'd really be doing is deciding in advance which outcomes are ethical, and trying to program the machine to act in ways that lead to those outcomes. Such as steering to avoid anything that looks like a pedestrian, automatically.
In borderline cases where all options lead to hitting a pedestrian, or something similarly disastrous, there is a good chance of the computer's programming locking up or acting counterintuitively, precisely because our intuitive brain models the computer as having a self-aware ethical 'center' that makes these decisions based on the conventional rules of society. And the computer has no such thing.
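To make the point concrete: "programmed ethics" in practice is just a priority list the engineers decided on in advance. A minimal hypothetical sketch (every name here is illustrative, not any real vehicle's software), including the degenerate case where all options rank the same and the choice is arbitrary rather than 'ethical':

```python
# A fixed avoidance-priority table, chosen by engineers ahead of time.
# The car performs no moral reasoning; it just looks things up here.
AVOIDANCE_PRIORITY = [
    "pedestrian",    # steer away from these first
    "cyclist",
    "vehicle",
    "fixed_object",  # trees, barriers: lowest priority to avoid
]

def choose_swerve(options):
    """Pick the maneuver whose obstacle ranks lowest on the priority list.

    `options` maps a maneuver name to the obstacle class it would hit
    (None if that path is clear). If every path hits the same class,
    min() just returns the first one - an arbitrary tie-break, which is
    exactly the 'lock-up or act counterintuitively' borderline case.
    """
    def cost(obstacle):
        if obstacle is None:
            return -1  # a clear path beats hitting anything
        return len(AVOIDANCE_PRIORITY) - AVOIDANCE_PRIORITY.index(obstacle)

    return min(options, key=lambda move: cost(options[move]))

print(choose_swerve({"brake": "vehicle",
                     "swerve_left": "fixed_object",
                     "swerve_right": "pedestrian"}))  # prints swerve_left
```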
Arthur_Tuxedo wrote:True, my experience may not be representative here, so if anyone has better figures I'm ready to be corrected.
Duchess presented some figures earlier, I believe.
You can eliminate the transmission by placing an electric motor on each wheel axle. I doubt it would work with a gas motor, though.
It is non-obvious that this would constitute a weight savings.
Also... I mean, we could build cars like that *now*; there is nothing about a human-driven car that requires that it have a single engine with a geared transmission running torque to the wheels. You could even still have a human operating a 'gas pedal'; the details of what instructions that passes to the motors are a black box from the point of view of the end user.
So the idea that future cars will save weight by having a computerized control system where each axle is powered independently by an electric motor is *totally* separate from the idea that the car will be self-driving. And that's even assuming it would save weight to have multiple motors rather than one motor plus one transmission.
Likewise the suspension: the car needs a suspension in case it hits something, and to avoid damage to delicate items in the passenger compartment. Like the passengers. You can't skimp on that just because there's a robot making the driving decisions.
Not in the near future, but on a fully automated road a crash would carry similar odds to being struck by lightning, and the other vehicles that would be crashed into would also be similarly constructed.
"In case it hits something" in this case includes, oh,
potholes.
Do you have a non-joking proposal for how switching to automated cars would somehow prevent roads from forming potholes? If not, a car with a weaker suspension is going to be far more vulnerable to hitting potholes and damaging itself. And, possibly, swerving as a result, into this ultra-low-tolerance ultra-close-together tangle of cars moving with blind disregard for safe following distances, triggering massive hundred-car accidents.
Clever.
This is starting to sound like you're treating the robot car as a panacea, rather than as an actual piece of technology that can do some things but not other things.
Arthur_Tuxedo wrote:Well, sure, but that's an argument against passenger cars and trucks in general regardless of who's driving. Besides, the marketplace has already thought up some solutions for these issues. Here in SF, I can take an Uber or Lyft downtown and share it with other people who are also going the same way, cutting the price by up to 60% and making it only slightly pricier than the bus.
Although that works in large part because Uber and Lyft are bypassing the normal regulatory regime on taxis; that can't possibly last.
The archaic taxi rules needed a mercy bullet to the back of the head anyway...
Why?
but regardless of how the legalities shake out, the point is that algorithms that are already being deployed will be refined even further in the age of autonomous vehicles and allow the passenger vehicle fleet to achieve similar cost and efficiency to bus and rail systems without the upfront cost and politicking. People can still hog a vehicle all to themselves, of course, but they'll be paying a lot more for the privilege and far fewer will do so.
All the means you've suggested by which the autonomous vehicle fleet could achieve this are... highly dubious, to say the least.
Also, I point out that this reinforces the idea that forcibly making everyone go over to robot cars involves a major sacrifice of individual autonomy. If you're relying on robot cars using brilliant algorithms to figure out where to go, and using carpooling to make it efficient, you're also relying on *me* being willing to get in a car with whatever random strangers the algorithm says are going to the same general location as I am.
While people are sometimes willing to make that tradeoff, the ability to travel independently of the whims of others, with a degree of privacy and control, is something that 20th century people in the developed world tend to place at a premium. Even services like Zipcar don't drop this concept, because you're basically just renting a car for *yourself*.
Slipstreaming cars a few inches apart isn't going to work because any unexpected stimulus or mechanical failure will cause a massive pileup. Stopping distance isn't just about how long it takes the driver to notice the car in front of him slowing down; it's how long it takes to stop.
Right, but with proper V2V communication, any vehicle that develops a mechanical problem will inform the others before immediate action is required.
Uh... don't you realize that mechanical defects can occur *abruptly* and cause entirely unpredictable changes in a car's speed or direction? Like, say, blowing out a tire.
Some of these things will render the car's autopilot *physically* unable to keep the car on course; it will *have* to steer for the side of the road or otherwise make unexpected maneuvers.
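The stopping-distance point above is simple kinematics, and it's worth spelling out: even with zero reaction time, the braking distance v²/2a doesn't go away. A minimal sketch (the speeds, deceleration, and reaction times below are illustrative assumptions, not measurements of any real vehicle):

```python
def stopping_distance(speed_ms, reaction_s, decel_ms2=8.0):
    """Reaction distance (v * t) plus braking distance (v^2 / 2a)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

# At 30 m/s (~108 km/h), assuming ~0.8 g (8 m/s^2) of braking:
human = stopping_distance(30, 1.5)  # ~1.5 s human reaction time
robot = stopping_distance(30, 0.1)  # ~0.1 s sensor/actuator latency
print(human, robot)  # 101.25 vs 59.25 metres
```

Faster reactions shave off the reaction distance, but the 56-odd metres of pure braking distance remain; cars slipstreaming inches apart have no margin for that regardless of who, or what, is driving.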
Cars could also actually touch bumpers (perhaps linked together magnetically), which would prevent impacts in the event of a sudden velocity change.
By that argument, when a train derails people shouldn't suffer severe physical injury, because the cars are all connected to each other. Instead, the reverse is true: being on a train when it hits an obstacle large enough to derail it is *very* unsafe, precisely because if one car goes off the rail, they all do.
The need for sudden stops is caused by our limited access to information, a problem that AI cars would not suffer.
Because AI cars will never hit a pothole and have a major drive train component break. Riiight.
I get that, and I've been known to enjoy a nice scenic drive myself, but people said the same thing about horseback riding, and the thing is that you can still ride horses if it catches your fancy, just not on the public roadway. Even if human-driven cars are banned from the roads (which I think they eventually will be), I would expect to see a surge in private roads and racetracks where people can drive fast and free without all the distractions of traffic, pedestrians, stop lights, etc.
Except that in that case you aren't *going* anywhere; you've effectively banned people from autonomously traveling anywhere outside walking distance.
Replacing a horse with a car is at least an upgrade from the user's point of view. No autonomy is lost. Replacing a car with a robot car is only an upgrade if you don't value your own control over the vehicle. Or, for that matter, the route you're taking, in all probability...
Frankly, I don't think many people are going to care about that when the time comes, and the ones who do will be drowned out by those who are annoyed that their commute is twice as long because old geezers are stopping up traffic.
I think you genuinely might be surprised by the number of people who are uncomfortable with ceding control over their lives.
If you are *not* surprised and this number is indeed small... well, I'm not saying you're wrong, but I don't think you should act like that's an unalloyed good thing.
Also, horses don't need fancy roads so a significant amount of movement autonomy actually was lost in the transition.
This does not align with the experience of actual horse users, who overwhelmingly sought out roads to travel along...
Millions of horses that need food, poop everywhere, and travel slowly were not sustainable given the new technological alternatives (internal combustion engines). People driving themselves in large vehicles with hundreds of horsepower and no passengers, guzzling billions of gallons of gas annually and smashing into each other, causing tens of thousands of deaths every year, is just as unsustainable, and whether the solution is automated roads or a return to mass transit, autonomy is lost either way.
There is a fundamental difference between a situation where people, by individual economic choices, seek to live closer together and rely more heavily on mass transit (possibly including robot taxis) and a situation where the law basically mandates that everyone stop driving their own vehicles in exchange for marginal savings on fuel efficiency.
You've made reasonably good arguments for why the former might happen. But not so much the latter. Your arguments for why the legal system might mandate *only* robot vehicles on the roads are... rather full of holes.