Categories: Morality

Self-driving cars prove that morality is objectively difficult

You’re probably already familiar with the trolley problem: should you act to save five people at the cost of killing one person, or should you choose not to act, allowing five people to die? In the original problem, I’d answer that the moral thing to do would be to act—if there are only two options, and if I’m the only person who can act, then I should act.

That’s settled, then: when you can act to save five people at the cost of killing one, you should. And if that’s a solid rule, we should always apply it. You’re a surgeon, and five of your patients desperately need organ transplants, or they will all die within the week—and they all need different organs. You can choose not to act, thus allowing all five people to die. Or you can act to save all five by killing a healthy person walking down the street and harvesting their organs. For some reason, this time it’s rather difficult to argue that you should act to save five people at the cost of killing one.

As intriguing as that might be, it’s just a thought experiment. But Melanie Mitchell’s wonderful Artificial Intelligence: A Guide for Thinking Humans cites an actual study showing that moral choices are just as ambiguous in real life. Consider an autonomous vehicle, a.k.a. self-driving car, having to choose between running over ten pedestrians in an alley and killing the single passenger in the vehicle by veering into a building. In one survey, 76% of the respondents answered that it would be morally preferable for the AI in the self-driving car to sacrifice the single passenger.

I’d answer the same, and I expect you would as well. The surprise came later: the very same respondents were asked whether they’d buy a self-driving car programmed to sacrifice its passengers in order to save pedestrians by those same rules. They overwhelmingly said no.