Algorithms Don’t Care

by Rachel Robison-Greene

The Trolley Problem, once a thought experiment students encountered for the first time in an Introduction to Philosophy classroom, has become a well-known cultural meme. It is depicted in various iterations in cartoons across the internet. The standard story involves a person choosing to pull a lever so that a trolley runs over one person tied to the tracks rather than five. People have imagined all sorts of identities for the individuals involved, up to and including a billionaire at the lever choosing to direct the trolley to run over many people in order to protect bags of money. All of these scenarios share a key feature: they encourage those contemplating them to consider the consequences that might result from the decision the lever-puller makes.

Much emerging technology encourages people to reason along similar lines. Consider the case of autonomous vehicles. Critics and ethically inclined collaborators have been quick to point out that these vehicles will not be naturally inclined to make decisions guided by moral considerations. If we want these vehicles to make defensible moral decisions in tricky circumstances, we’ll need to program them to do so. Discussion of these issues often proceeds along broadly consequentialist lines: identify things that are valuable and maximize them, and identify things that have disvalue and minimize them. If we think that the life of a human is more valuable than, say, the life of an ill-fated duck crossing the street, then the life of the human should be prioritized when there is a choice between them. If we value saving a greater number of lives over a smaller number, then perhaps we should program autonomous vehicles to sacrifice the driver when the choice is between the death of the driver and the deaths of a greater number of people. In any case, the decision-making metric involves weighing consequences. The same is true of algorithmic decisions made by other forms of AI, including AI used in our most important institutions: education, health care, the criminal justice system, and the military.
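
To make "weighing consequences" concrete, here is a deliberately toy sketch of what such a decision procedure looks like once it is reduced to calculation. Nothing in it is drawn from any real vehicle's software: the scenario, the numeric values, and the names (OUTCOME_VALUES, choose_action) are invented purely for illustration.

```python
# A toy illustration of consequentialist "blind calculation."
# The scenario, the outcome values, and the function names are hypothetical;
# they are not drawn from any actual autonomous-vehicle system.

# Values assigned in advance by human designers.
OUTCOME_VALUES = {
    "human_life": 1_000_000,
    "duck_life": 10,
    "bumper": 100,
}

def utility(preserved):
    """Add up the pre-assigned worth of everything an action would preserve."""
    return sum(OUTCOME_VALUES[item] for item in preserved)

def choose_action(options):
    """Return the action that preserves the most pre-assigned value.

    `options` maps each available action to the list of things it would spare.
    The machine feels nothing about the choice; it simply compares numbers
    that someone else decided on ahead of time.
    """
    return max(options, key=lambda action: utility(options[action]))

# A trolley-style dilemma rendered as arithmetic.
dilemma = {
    "swerve_into_duck": ["human_life"],          # spares the pedestrian, not the duck or the bumper
    "stay_on_course": ["duck_life", "bumper"],   # spares the duck and the car, not the pedestrian
}

print(choose_action(dilemma))  # -> "swerve_into_duck"
```

The sketch is meant to show structure, not sophistication: however elaborate the real systems become, a design of this kind can do nothing but compare numbers that were assigned in advance.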

It is likely that we think about ethics and algorithms in this way because of the way that algorithms work. AI-powered chatbots may use first-personal pronouns and behave in ways that are very similar to the way that humans behave. Nevertheless, the differences between humans and AI are profound. AI lacks the features that give rise to morality in the first place. AI is not self-aware. It does not have qualia; there is nothing it is like to be it from its own first-personal, subjective perspective. Indeed, AI does not have a subjective perspective. It cannot feel pain, nor can it experience humiliation in response to violations of its dignity. Its responses to others are correspondingly deficient: it cannot feel sympathy, empathy, or compassion, and it cannot practice love or care.

Our decisions with regard to technology pose deep metaethical questions to which the relevant corporate decision makers don’t tend to give much thought. There is a live debate within metaethics on the question of whether ethics should be impartial. Should it matter who is experiencing the suffering, or what relationship we stand in to the sufferer? Consequentialists tend to argue that these features of a moral dilemma do not make a genuine moral difference. Care ethicists, by contrast, argue that we cannot engage in caring relationships with others without recognizing the nature of their needs. We can only know their needs if we treat them as specific persons rather than as abstract others. We can satisfy their needs not by blind calculation, but by care driven by compassion.

Artificial intelligence dominates most industries, and we can expect that to continue into the future. To the extent that we are equipping algorithms with moral principles, we are proceeding in the only way we think an algorithm could proceed, through blind calculation. In doing so, we are treating the metaethical debate about the role of emotion in ethical decision making as settled. Even a mindless machine can make ethical decisions because there is nothing more to such decisions than the weighing of values that we have assigned in advance. This rules out care as the core of morality by stipulation.

In his book What We Owe the Future, philosopher William MacAskill warns readers about the danger of what he calls “value lock-in.” In the aftermath of major historical events, humans are, at first, willing to consider a range of possible responses. After a period of time, however, rigidity kicks in and we become locked into very particular ways of thinking about and responding to similar events in the future. He says,

In this chapter I discuss value lock-in: an event that causes a single value system, or set of value systems, to persist for an extremely long time. Value lock-in would end or severely curtail the moral diversity and upheaval that we are used to. If value lock-in occurred globally, then how well or poorly the future goes would be determined in significant part by the nature of those locked-in values. Some changes in values might still occur, but the broad moral contours of society would have been set, and the world would enact one of only a small number of futures compared to all those that were possible.

The concern here is that value lock-in might encourage us to reason in consequentialist terms for many years, perhaps forever. This should matter to us for several reasons. First, this shift distances us from what we have in common with other social beings. Intelligent creatures who live in social groups experience moral emotions, such as concern for fairness, reciprocity, trust, empathy, and love, that govern their interactions with one another. This behavior is described in detail by Jane Goodall in Through a Window: My Thirty Years with the Chimpanzees of Gombe, by Marc Bekoff and Jessica Pierce in Wild Justice: The Moral Lives of Animals, and by Frans de Waal in Primates and Philosophers. We have long ignored the lessons we could potentially learn from the social behavior of other-than-human animals. Locking consequentialist reasoning into our dominant decision-making systems may minimize the value of the caring features that we share with our closest living sentient relatives.

Second, creating a future in which consequentialist AI dominates our institutions prevents us from reimagining those very institutions as institutions dedicated to care. We have been encouraged to think about institutions such as education or health care as concerned with maximizing various outcomes. These systems could be revolutionized if we thought of them instead as dedicated to fostering caring relations based on the recognition of compelling need. Care for others is a practice that involves compassion, empathy, and attention. In Moral Boundaries: A Political Argument for an Ethic of Care, philosopher Joan Tronto makes a similar point. She says, “the need to rethink appropriate forms of caring raises the broadest questions about the shape of political and social institutions in our society.” She argues that if we emphasized the importance of care not just in the private, domestic domain but in the public domain, it would “change our sense of political goals and provide us with additional ways to think politically and strategically.”

The ability to reassess social and political goals, to see shortcomings in our institutions, and to imagine how those institutions might better serve the needs of sentient beings is critical to human moral engagement with the world. If consequentialist ways of thinking become locked in as a result of the emergence of AI, it may be decades or much longer before we consider the potential for crafting genuinely caring institutions.

Tronto says that to strive for a world in which “all people are adequately cared for is not a utopian question, but one which immediately suggests answers about employment policies, non-discrimination, equalizing expenditures for schools, providing adequate access to health care, and so on.”

If seeing to it that people are adequately cared for requires compassionate attention to need, these projects can’t be completed by AI. That said, AI is here to stay. As we design human futures, we need people who recognize the value of care to be involved in the conversations before moral decision-making by empty calculation gets locked in.