AI Is Already Making Moral Choices for Us. Now What?

Jim Davies in Nautilus:

Do we need artificial intelligence to tell us what’s right and wrong? The idea might strike you as repulsive. Many regard their morals, whatever the source, as central to who they are. This isn’t something to outsource to a machine. But all of us face morally uncertain situations, and on occasion we seek the input of others. We might turn to someone we think of as a moral authority, or imagine what they might do in a similar situation. We might also turn to structured ways of thinking—ethical theories—to help us resolve the problem. Perhaps an artificial intelligence could serve as the same sort of guide, if we were to trust it enough.

Even if we don’t seek out an AI’s moral counsel, it’s simply a fact that more and more AIs have to make moral choices of their own. Or, at least, choices that have significant consequences for human welfare, such as sorting through resumes to narrow down a list of candidates for a job, or deciding whether to give someone a loan. For this reason alone, it’s important to design AIs that make sound ethical judgments.

More here.