Simon Beard in Quartz:
Imagine you’re driving a trolley car. Suddenly the brakes fail, and on the track ahead of you are five workers you’ll run over. Now, you can steer onto another track, but on that track is one person whom you will kill instead of the five: It’s the difference between unintentionally killing five people and intentionally killing one. What should you do?
Philosophers call this the “trolley problem,” and it seems to be getting a lot of attention these days—especially how it relates to autonomous vehicles. A lot of people seem to think that solving this thorny dilemma is necessary before we allow self-driving cars onto our roads. How else will they be able to decide who lives or dies when their algorithms make split-second decisions?
I can see how this happened. The trolley problem is part of almost every introductory course on ethics, and it’s about a vehicle killing people. How could an “ethical” self-driving car not take a view on it, right?
However, there’s just one problem: The trolley problem doesn’t really have anything to do with the ethics of AI—or even driving.
More here.