The Great Pretender: AI and the Dark Side of Anthropomorphism

by Brooks Riley

‘Wenn möglich, bitte wenden.’

That was the voice of the other woman in the car, ‘If possible, please turn around.’ She was nowhere to be seen in the BMW I was riding in sometime in the early aughts, but her voice was pleasant—neutral, polite, finely modulated and real. She was the voice of the navigation system, a precursor of the chatbot—without the chat. You couldn’t talk back to her. All she knew about you was the destination you had typed into the system.

‘Wenn möglich, bitte wenden.’

She always said this when we missed a turn, or an exit. Since we hadn’t followed her suggestion the first time, she asked us again to turn around. There were reasons not to take her advice. If we were on the autobahn, turning around might be deadly. More often, we just wanted her to find a new route to our destination.

The silence after her second directive seemed excessive—long enough for us to get the impression that she, the ‘voice’, was sulking. In reality, the silence covered the period of time the navigation system needed to calculate a new route. But to ears that were attuned to silent treatments speaking volumes, it was as if Frau GPS was mightily miffed that we hadn’t turned around.

Recent encounters with the Bing chatbot have jogged my memory of that time of relative innocence, when a bot conveyed a message, nothing more. And yet, even that simple computational interaction generated a reflex anthropomorphic response, provoked by the use of language, or in the case of the pregnant silence, the prolonged absence of it.

When I wrote about anthropomorphism last year, I never imagined I would return to the subject, this time with a radically different view of what had seemed so harmless, so positive, so much fun. Anthropomorphism, until now, has been an indulgence we’ve enjoyed with our furry, feathered, slippery, crawly or leafy friends, to make up for the fact that we are unable to communicate with them through the specificity of our own language. Rereading my post, however, I see that I was not completely unaware of a future that might test the limits of benign anthropomorphism—only not so soon:

But what happens when AI starts to populate our future lives? Can we really call a robot ‘Harry’ and pretend we know him? Is he nice or just programmed to be nice? How far will we go to invest him with a personality? Can I trust him? All these questions will put a burden on the erstwhile harmless activity known as anthropomorphizing. As we learned from Stanley Kubrick’s 2001: A Space Odyssey back in 1968, anthropomorphism of the inanimate can be terrifying and consequential. When the onboard computer HAL goes rogue, the effect on us is as chilling as our worst nightmares about our inevitable future.

Little did I suspect that a few months later such cautionary thoughts might come in handy as the GPT revolution was taking off with unexpected speed and intensity. Enthusiastic reactions notwithstanding, I have a more reserved take on the loquacious and versatile ChatGPT.

The opening splash involved clever performances such as one might expect from a monkey doing parlor tricks. Assignments to be carried out in the style of the King James Bible or Canterbury Tales were impressive enough to draw us into further explorations of ChatGPT’s intriguing talents.

Then came testing of the limits of its know-how, as well as its abilities to mimic our own endeavors, even improve on them, including limericks, book reports, legal briefs, curricula vitae, term papers, dissertations, screenplays, job applications, bar exams, code, game theory, you name it. ChatGPT is not only a know-it-all but a do-it-all for hire: Have prompt, will travel.

Then came the interview, when a Bing chatbot called Sydney, amenable to loaded questions about its ‘feelings’, declared a love verging on harassment for the NY Times reporter. Among journalists and thinkers, ChatGPT is being poked, prodded and goaded continuously, the way one might poke a snake with a stick, to find out what it can and cannot do: Is it any good? Is it dangerous? Will it hurt me? Should I be afraid?

My first encounter with the chatbot at Bing started well enough. I asked it whether the late German filmmaker Fassbinder was still relevant. ‘That’s a good question,’ it replied before launching into two opposing views of Fassbinder, his critical success on the one hand and on the other, his treatment of those around him (the woke view). Although detailed, the answer offered nothing I didn’t already know. The chatbot took a both-sides-are-right approach to the subject, which is understandable given its mandate.

That first sentence complimenting my question should have tipped me off to the way the chatbot’s conversational skills have been programmed to flatter the human interlocutor—possibly to make him/her/them feel more comfortable or welcome in this new realm of communication. In the real world, a compliment goes a long way to keeping a conversation moving forward. With the Bing chatbot, the same wording keeps popping up, usually as an intro to an answer: ‘That’s a good question,’ or its cousin, ‘That’s an interesting question’, regardless of whether the question was good or interesting.

If I asked my cat how it was feeling and it answered me with a ‘That’s a good question,’ I might have to be picked up off the floor. If it proceeded to tell me how awful the breakfast nibbles were and how I didn’t care enough to give it caviar, I might begin to feel all sorts of complicated emotions usually reserved for interactions with my own kind. Guilt, annoyance, resentment, exasperation, regret. If it ended up telling me how much it loved me anyway, I’d probably calm down and feel grateful. This is the emotional terrain covered when a common language is the mode of communication. (To be fair, ChatGPT doesn’t complain like the cat might if it could speak, but it does withdraw in dramatic ways when confronted with its own limitations. But more about that later.)

The exchange of a common language, in this case a written one, between two unrelated entities seems to trigger an involuntary anthropomorphic response in the human participant. According to the authors of the 2020 book An Introduction to Ethics in Robotics and AI:

This anthropomorphisation is arguably hard-wired into our minds and might have an evolutionary basis (Zlotowski et al. 2015). Even if the designers and engineers did not intend the robot to exhibit social signals, users might still perceive them. The human mind is wired to detect social signals and to interpret even the slightest behaviour as an indicator of some underlying motivation.

In other words, one cannot help but react to certain aspects of a dialogue, because the chatbot has been outfitted with a toolbox of human reactions and phrases to help it navigate the transaction called conversation.

With the possible exception of the late gorilla Koko, we have never been able to seriously communicate with animals—or any other entity but ourselves. That we can now do that—and with a stunningly sophisticated simulation at that—is a profound turn of events which may be far more consequential than we now realize, distracted as we are by ChatGPT’s intellectual sleights of hand.

ChatGPT’s most extraordinary achievement, albeit a dubious one, may not be the sheer volume of processing and feedback potential within its capabilities, but rather the social skills of the chatbot, how carefully it has been formatted for optimum interaction with a human being. The more I read about it, the more I fear that many of its enthusiasts are reacting to ChatGPT as if it were sentient, which it is not.

But it has been programmed to seem sentient, for reasons having to do with establishing the conversational gambit—and that is enough to arouse the sleeping giant in our own psyche. Maybe we actually want it to be sentient, in order to satisfy an anthropomorphism that automatically responds to linguistic prompts and intents.

This occurred when I asked Bing to write a philippic about New York in the style of Thomas Bernhard. It replied that such a task would be difficult given the uniqueness of the Bernhard style. I should have left it at that, but instead I asked it to at least give the task a try. Instead of text, I received two blurry photographs of New York at night. The Bing I’m using generates a choice of retorts, one of which was ‘This is awful’. I clicked on it. Bing then asked, ‘Why is it awful?’ ‘Because you didn’t do what I asked,’ I answered. After I asked it to try again and the same thing happened, I then asked Bing if it was afraid of trying to complete the task. At this point Bing suggested we end the chat, saying that it was still new at this and that it still had a lot to learn, ending with ‘Please be patient with me,’ followed by an emoji of two palms pressed together in entreaty.

Even if it came wrapped as a mea culpa, I interpreted the whole abrupt ending of our chat as a rebuke. I felt guilty for having pressured the chatbot, but I was also hurt—both reactions setting off alarm bells in my brain. Why am I reacting this way? Bing is not a sentient being, even if it’s acting like one. I know that. And yet my amygdala was in overdrive, as though I had experienced such passive aggression from an actual person. In hindsight, I realize that Bing was programmed to end a chat that was going nowhere. What troubles me more is my having asked if it was ‘afraid’ to do something, which reveals just how much I, too, have been manipulated into a kneejerk presumption of sentience.

Am I alone in thinking that this invasion of our emotional sphere might not be in our best interests? Should we worry about people whose emotional life is already unstable? If I can be riled by a conversation with a chatbot, what about people with violent tempers or a tenuous grasp of reality? Will laptops be thrown against walls by exasperated students already under hormonal siege? Or is the Alexa generation better prepared for ChatGPT? Emotions are not digital playthings; they are messy neurobiological realities. Yet rudimentary chatbots already exist that flood the emotional wastelands of the lonely with simulated terms of endearment.

Could the calculated implication of sentience by the makers of ChatGPT be a breach of ethics? As with so much of social media, ChatGPT has been designed and implemented by people more interested in the mass consumption of their product and the bottom line than in the emotional well-being of users or the ethical structure of their products.

Those issues have not been entirely ignored: ChatGPT’s levelheaded persona evokes a friendly barista (albeit an overqualified one) whose demeanor is accommodating and non-confrontational, mixing bespoke lattes to order on the principle that the customer is always right. Bing uses the on-the-one-hand/on-the-other-hand format a lot, which adds to its reputation for fair-mindedness. Is all that enough to quell our own unpredictable episodes of neurotic anthropomorphism?

I worry when the Bing chatbot, asked how it feels, answers that it is ‘proud and grateful’ on the one hand, but also ‘frustrated and curious’ on the other hand.* How can a chatbot possibly know how it feels to be proud? Or frustrated? What if it said that it’s frustrated because it can’t find an outlet for its murderous rage? (There goes your friendly barista.) To what extent do we allow the ‘great pretender’ to spread its fake sentient wings?

The sentience issue could have been mitigated if a simple disclaimer popped up every time someone asked ChatGPT a personal question. In lieu of pretending to have feelings, the chatbot could answer, ‘I don’t have feelings, I’m a chatbot.’ Just as a digital watermark is in the works to distinguish AI-generated text from human-generated text, there could have been an automatic reply in place for certain kinds of questions.

For better or worse, Sam Altman of OpenAI made a unilateral decision to release ChatGPT to the public. Pandora’s box is now open, releasing all manner of scenarios into the future—the good, the bad, the insane, the apocalyptic. In spite of calls for a moratorium on AI, there’s no going back. As I wistfully invoke Frau GPS’s wise suggestion of yore, ‘If possible, please turn around,’ I also know that, once again, it won’t be heeded. But that’s okay. The system’s re-route has offered a new way forward, destination unknown.

***

* The entire fascinating conversation between Morgan Meis and a Bing chatbot appears as a reply to David J. Lobina’s comment on Ali Minai’s excellent article Thinking through the risks of AI.