Bots, Beasts, and Beliefs about Consciousness

by Rachel Robison-Greene

In Discourse on the Method, philosopher René Descartes reflects on the nature of mind. He identifies what he takes to be a unique feature of human beings: in each case, the presence of a rational soul in union with a material body. In particular, he points to the human ability to think—a characteristic that sets the species apart from mere “automata, or moving machines fabricated by human industry.” Machines, he argues, can execute tasks with precision, but their motions do not come about as a result of intellect. Nearly four hundred years before the rise of large language models, Descartes raised the question of how we should think of the distinction between human thought and behavior performed by machines. This is a question that continues to perplex people today, and as a species we rarely employ consistent standards when thinking about it.

For example, a recent study revealed that most people believe that artificial intelligence systems such as ChatGPT have conscious experiences just as humans do. Perhaps this is not surprising; when questioned, these systems use first-person pronouns and express themselves in language in a way that resembles human speech. That said, when explicitly asked, ChatGPT denies having internal states, reporting, “No, I don’t have internal states or consciousness like humans do. I operate by processing and generating text based on patterns in the data I was trained on.”

The human tendency to anthropomorphize AI may seem innocuous, but it has serious consequences for users and for society more generally. Many people are responding to the loneliness epidemic by “befriending” chatbots. All too frequently, users become addicted to their new “friends,” which impacts their emotional wellbeing, sleeping habits, personal hygiene, and ability to make connections with other human beings. Users may be so taken in by the behavior of chatbots that they cannot convince themselves that they are not speaking with another person. It makes users feel good to believe that their conversational partners empathize with them, find their company enjoyable and interesting, experience sexual attraction in response to their conversations, and perhaps even fall in love.

Chatbots can also short-circuit our ability to follow epistemic norms and best practices. In particular, they impact our norms for establishing trust. It is tempting to believe that machines can’t get things wrong. If a person establishes what they take to be an emotional connection, they may be more likely to trust a chatbot and to believe targeted disinformation, with alarming implications for the stability of democracy.

Machines crafted by humans are not the only unthinking “automata” that Descartes discusses. He also considers non-human animals to be kinds of machines: flesh-based machines that carry out tasks but do not engage in reasoning. Many humans today seem to share this belief—their behavior suggests that they do not believe that non-human animals have internal states, or, in any event, states that are significant enough to be taken seriously. On this view, animal actions are not motivated by reason. Though their actions may be purposeful, those actions are not freely performed; non-human animals are not persons. Again, this is unsurprising; this position is robustly grounded in the history of Western philosophy. For example, in his Politics, Aristotle argues that humans are the only living things with rational souls. This gives rise to a natural hierarchy. He says,

After the birth of animals, plants exist for their sake, and the other animals exist for the sake of man, the tame for use and food, the wild, if not all at least the greater part of them, for food, and for the provision of clothing and various instruments. Now if nature makes nothing incomplete, and nothing in vain, the inference must be that she has made all animals for the sake of man.

Aquinas makes a similar argument, in this case connecting the intellect with the Divine,

Of all the parts of the Universe, intellectual creatures hold the highest place, because they approach nearest to the divine likeness. Therefore, divine providence provides for the intellectual nature for its own sake, and all others for its sake.

Here, again, we have Descartes on the topic of non-human animals,

They have no reason at all, and that it is nature which acts in them according to the disposition of their organs, just as a clock, which is only composed of wheels and weights is able to tell the hours and measure the time more correctly than we can do with all of our wisdom.

When Descartes raises the idea that some animal actions come about purely “according to the disposition of their organs,” he foreshadows contemporary discourse about the motivations of non-human animals. If we can explain animal action by appeal to mere “instinct,” then we don’t need to bother ourselves considering motivations like emotion, social norms and interests, familial bonds and connections, and so forth.

There is an asymmetry between what people believe about artificial intelligence systems and what they believe about non-human animals. Even when people are explicitly told, perhaps by the systems themselves, that AI systems do not have mental states, people persist in believing that they do. On the other hand, even when faced with compelling evidence that non-human animals have internal states, people persist in believing that they do not, insisting instead that their actions are performed as a result of “mere instinct.” This may be because humans have self-interested reasons for believing in the mental lives of AI and also for rejecting such lives in the case of non-human animals.

Much of this asymmetry is potentially explainable by appeal to language, both when it comes to AI systems and when it comes to non-human animals. AI systems are excellent at using language in the same way that a person would. Animals are at a disadvantage when it comes to language in at least two ways. First, they are disadvantaged by the language we use or refuse to use when discussing them. Second, our perceptions about the extent to which animals can use language affect how we feel about them.

Consider, for example, the rule of engagement in research on animals known as “Morgan’s canon.” In his book, When Animals Dream: The Hidden World of Animal Consciousness, David M. Peña-Guzmán describes it as “an obligation to opt for the simplest explanation of animal behavior.” In particular, Morgan’s canon recognizes an obligation to refrain from positing cognitive explanations for behavior when physiological explanations are sufficient.

This methodological canon has its roots in the Verification Principle made popular by the philosophical school known as Logical Positivism and thinkers like A. J. Ayer. According to this principle, a statement is meaningful only if it can be empirically verified. The argument is that physiological states are verifiable by empirical observation; mental states are not. By the mid-to-late 20th century, the Verification Principle was no longer viewed as an adequate basis for studying human behavior. One dominant reason for this is that the Verification Principle is not itself empirically verifiable. What’s more, positing the existence of mental states such as belief and desire has explanatory power that purely physiological explanations lack. We recognize, for instance, that my belief that the chocolate cake is in the refrigerator, taken together with my desire to eat it, is a better explanation for my trip to the kitchen than a purely physiological explanation that denies that appeal to such states is meaningful. For these reasons, among others, we now readily acknowledge the existence of mental states when talking about human psychology and motivation, yet we stubbornly cling to the verificationist restriction when talking about animal minds.

When it serves our interests, we also refrain from naming animals, preferring instead to refer to them by assigned numbers. Naming is a powerful practice—when we name something, we put ourselves in a position to recognize it for what it is. When we give something a number, we relegate it to the status of “thing.” In her book, Hope for Animals and Their World, Jane Goodall describes the reasons that a group tasked with reintroducing wolves into the wild largely refrained from giving those animals names. She says,

They did not fully realize the dangers these naïve wolves would face. They did not guess that 60 to 80 percent of those wolves would not make it—would get sick or collide with a car as they tried to cross the road that bisected their new home. And the field team felt devastated by each loss. “We had to learn to keep some distance, try not to get too emotionally involved”…that was one of the reasons why the wolves, for the most part, were not given names.

These researchers recognized the connection between language, naming in particular, and how we respond to others with whom we engage. The chatbots to which people become addicted express themselves in language. They use “I” pronouns. They have names. Users view them as persons. Non-human animals are frequently referred to with the “it” of an object. We give names to our companion animals but withhold them when discussing animals used for research or food. Language matters for how we view minds.

A second way in which language frames how we view animal minds has to do with an animal’s ability to use language. Descartes, for example, offered a test for use in determining whether one is dealing with a “mind” or a “machine.” He says,

…if there were machines bearing the image of our bodies, and capable of imitating our actions as far as it is morally possible, there would still remain two most certain tests whereby to know that they were not therefore really men. Of these the first is that they could never use words or other signs arranged in such a manner as is competent to us in order to declare our thoughts to others…

Here, Descartes expresses an idea that remains popular to this day—for a being’s actions to be motivated by reason rather than something like instinct or “the disposition of their organs,” that being must be capable of expressing a thought in a language. On this view, belief formation involves accepting propositions. Practical reasoning involves recognizing relationships between propositions. If a being can’t articulate beliefs and desires in a language, that being can’t engage in the kind of reasoning that is characteristic of mind and of the free action that establishes personhood.

There are at least two ways of responding to such an argument. First, one could deny that language is a necessary condition for the existence of mental states. Second, one could accept that language is necessary but argue that many non-human animals do make use of language. Empirical evidence in support of this assertion has been provided by, ironically, AI. AI systems are now being used to recognize patterns in the noises animals make. Scientists have recently suggested, on the basis of such models, that elephants use unique names for one another that play important roles in their social interactions. The future is likely to hold many more such discoveries.

Just as our beliefs about AI systems matter, so do our beliefs about animal consciousness. We engage in cruel behaviors toward non-human animals and excuse such behavior by rejecting the idea that animals have meaningful inner states. We prevent ourselves from knowing the range of things we could possibly know about our fellow creatures, which is both intrinsically bad and instrumentally bad insofar as it has implications for policy decisions. Finally, and crucially, when we assume that animals function in ways that are totally different from human beings, we are likely weakening our theories about human minds rather than making them stronger. When we know more about other animals, we come to better know ourselves.

We inhabit a world populated by minds. This is a remarkable time to gain a better understanding of the lines that demarcate important philosophical concepts related to mind, concepts such as belief, desire, intentionality, consciousness, personhood, and freedom. We’ll only arrive at compelling answers if we frame our questions carefully and pose them consistently. We can’t let existing bias and psychological need govern the conditions under which we are willing to ask them.
