by Rachel Robison-Greene
In Discourse on the Method, philosopher René Descartes reflects on the nature of mind. He identifies what he takes to be a unique feature of human beings: in each case, the presence of a rational soul in union with a material body. In particular, he points to the human ability to think, a characteristic that sets the species apart from mere “automata, or moving machines fabricated by human industry.” Machines, he argues, can execute tasks with precision, but their motions do not come about as a result of intellect. Nearly four hundred years before the rise of large language models, Descartes raised the question of how we should understand the distinction between human thought and behavior performed by machines. That question continues to perplex people today, and as a species we rarely apply consistent standards when thinking about it.
For example, a recent study revealed that most people believe that artificial intelligence systems such as ChatGPT have conscious experiences just as humans do. Perhaps this is not surprising; when questioned, these systems use first-person pronouns and express themselves in language that resembles human speech. That said, when explicitly asked, ChatGPT denies having internal states, reporting, “No, I don’t have internal states or consciousness like humans do. I operate by processing and generating text based on patterns in the data I was trained on.”
The human tendency to anthropomorphize AI may seem innocuous, but it has serious consequences for users and for society more generally. Many people are responding to the loneliness epidemic by “befriending” chatbots. All too frequently, users become addicted to their new “friends,” which impacts their emotional well-being, sleeping habits, personal hygiene, and ability to form connections with other human beings. Users may be so taken in by the behavior of chatbots that they cannot convince themselves that they are not speaking with another person. It makes users feel good to believe that their conversational partners empathize with them, find their company enjoyable and interesting, experience sexual attraction in response to their conversations, and perhaps even fall in love.
Chatbots can also short-circuit our ability to follow epistemic norms and best practices. In particular, they distort our norms for establishing trust. It is tempting to believe that machines can’t get things wrong, and if a person establishes what they take to be an emotional connection, they may be more likely to trust a chatbot and to believe targeted disinformation, with alarming implications for the stability of democracy.