The Fantasy of Frictionless Friendship: Why AIs Can’t Be Friends

by Gary Borjesson

Even if they had all the other good things, still no one would want to live without friends. —Aristotle

Is love an art? Then it requires knowledge and effort. —Erich Fromm

This is Leo. The woman in the NYT story was surprised by how hot he is.

Many of us are, or soon will be, tempted to connect with an AI companion. Maybe you want a partner for your work or personal life. A friend of mine consults with a personal AI on his creative branding work for clients. A therapist or doctor can use a personal AI to help them track and reflect on specific patients. A recent article in the NYT describes a woman’s steamy (and expensive) romance with her AI boyfriend, Leo.

All of these and other possibilities are coming to pass. I take it for granted here that AIs are useful and pleasurable digital assistants or companions. With corporate powers making that case, the more pressing concern is to recognize the limits and dangers AI companions pose. This is urgent because AI companions exploit a human vulnerability: our resistance to the effort required for personal and interpersonal development.

I will focus on a fundamental limit overlooked by enthusiasts and critics of AI alike. A limit no tweaking of algorithms will overcome. A limit that makes AIs ontologically incapable of friendship. A limit that shows why we need to resist the considerable temptation to imagine AIs can be friends. To anticipate, consider two necessary conditions of friendship: that it is freely chosen, and that it is mutual. We’ll see why AI companions cannot meet these conditions. But first let’s look at their basic limitation.

AI companions can’t be friends because they’re not embodied. Plopping an AI into a sophisticated robot such as those made by Boston Dynamics won’t answer, because being embodied involves more than being localized in space-time. By embodiment I mean there’s an integral unity between the intelligence and the body in which it’s localized. Not being embodied, paradoxically, is one of the things that makes AIs so attractive. For embodiment means resistance, which we don’t like and naturally seek to avoid. This paradox expresses the danger posed by AI companions: they offer us the fantasy of frictionless friendship. But just as our physical health suffers if we avoid the effort of resistance and cardio training, so our mental health suffers if we avoid the resistance our irl friends (irl being digital slang for "in real life," that is, being together in person) provide.

By exploring what embodiment makes possible, we’ll see why only a being with its own autonomous existence can enter into the free and mutual liking that defines friendship. Only such a being can push back against us in ways that promote our flourishing. We can pay for sex, personal assistants, and companions, but we can’t buy friendship.

When we’re irl together, our senses are continuously monitoring each other. Pheromones and scents shape our emotional and cognitive dispositions. Limbic resonance links our nervous systems, making emotions (and yawns) socially contagious. We’re also influenced by nonverbal communications involving body language, register and tone of voice, eye contact, and so on. All this is filtered through our past experiences, disposing us to react and respond in the individual ways we do. We’re largely unconscious of this, and yet it profoundly affects how we feel and think and relate.

An AI would have to be as embodied as the woman in the Oscar-winning short film I’m Not a Robot (highly recommended, by the way), who only realizes she’s not a human when she fails (repeatedly) to pass the Captcha test. Or, as embodied as the androids depicted in Blade Runner, who are—tellingly—fighting for their lives and freedom. Some will argue that it doesn’t matter whether an AI is actually embodied, as long as it seems to us to be. But this begs the question. For the simulation will be convincing only so long as it’s virtual. And it can only be virtual.

Yet we know what a disaster substituting virtual connections for real ones has been—for friendship, social connection, and mental health generally. We know because we’ve been running that experiment. From VR and Zoom to texting and social media, we have been tempted to believe that we can live and thrive in a virtual social world. But one has only to look at the correlation between the growth of these technologies and the epidemics of loneliness and mental illness to see how well the experiment is going. (Tellingly, Ev Williams, a founder of Twitter, has started a new social media platform, Mozi, to address this problem, by promoting irl interactions.)

To see how healthy social connections are undermined when we’re not irl, consider the online disinhibition effect. This refers to a lack of restraint people feel when they’re not together irl. Since virtual interactions supply little of the sense data that would signal to our embodied self the presence of a real person, we instinctively feel safe behaving online in ways that we would not in person. The more virtual the connection, the less likely we are to experience the embodied signaling (much of which is not conscious) that promotes tolerant, empathetic, and cooperative behavior.

His Master’s Voice, the image made famous by RCA Victor.

Think how the dog—a non-human embodied intelligence that is capable of friendship—responds when we put the phone by their ear, or try to include them in a Zoom call. My dog would cock his head endearingly when he recognized my voice. Then he’d leave the room. His master’s voice meant (almost) nothing to him. He couldn’t be fooled into thinking there was a real connection.

But we can be fooled, obviously. This ability to fool ourselves is the other side of our distinguishing virtue, for we are the animal with consciously possessed intelligence. Our natural focus on this power often leads us to neglect just how integral body is to mind.

Thus many people in the AI space think like Cartesians. They fail, as Descartes did, to grasp that the mind and body form an emergent integral unity. Take a recent TED talk by Eugenia Kuyda. She founded the AI companion business Replika, whose sales pitch is frictionless friendship: “The AI companion who cares, always there to listen and talk. Always on your side.” In her talk, she recognizes the need for irl friendships, while suggesting that AI companions can address our loneliness and mental health crises. There’s no mention that Replika’s companions might exacerbate the crises—by selling us on the kind of company that ruins us for real company.

Let me come back now to the real resistance friends offer. Some might argue that AIs can be trained to provide friend-like resistance. But whatever resistance AIs may offer will be radically limited precisely because our animal nature will know, as surely as my dog did, that there’s no one there, really. In any event, should the resistance start to feel too real, we can always dial it down. After all, the resistance is simulated, not real. Moreover, because companies are motivated to keep us engaged and happy, there’s enormous pressure (in the form of the profit motive) against anything that would prevent us from dialing down resistance we don’t want. Meanwhile, our real friends do resist. For example, they will only rarely indulge our narcissistic fantasy that they exist to serve us, or should always be on our side.

I started thinking about all this while driving home one evening from work. I was cheered by the dramatic grey and pink clouds drifting along the mountain ridges. The only thing missing was some good company. So I called a friend, but it went to voicemail. I felt disappointed. Oh well, I’d content myself with the gorgeous scenery. Soon, however, my mind wandered back to my friend. I imagined breaking the news to him and my other deadbeat friends that I would be replacing them with a more reliable AI friend.

That’s how my fantasy of frictionless friendship started: disappointed, lonely, and indulging a childish irritation that my friend wasn’t there for me. Feeling this resistance to what I wanted led me to wonder: Why not experiment with tailoring an AI to my specifications—a warm, smart, witty friend to talk about whatever I wanted to talk about, whenever I wanted? My irl friends, including my wife, have lives of their own that, annoyingly, need their attention. They also have the irritating habit of expecting that I’ll show interest in what’s on their mind, even if it’s not on mine. Here again is the resistance, along with the expectation of mutuality mentioned earlier.

Aristotle specifies that friendships involve mutual liking, mutual well-wishing, and the mutual recognition of these. In other words, I need to know feelingly (an embodied experience) that you like me and want good things for me, and you need to know feelingly the same. This freely chosen mutuality gives friendship its characteristic warmth and intimacy and love, which is not possible with an AI because (for now at least) there’s no there there.

Presumably that’s why, though the husband knew that his wife was engaging in sex with Leo (masturbating while sexting), he reported not being bothered. “It was sexual fantasy, like watching porn (his thing) or reading an erotic novel (hers).” Leo is a fantasy, but the consequences of engaging with him are not. Likewise, I began to see the consequences of my own fantasy after imagining how my AI friend could also be a sounding board for my ideas and writing projects. How great it would be if my friend could also be an ever-ready work buddy. Or drop in as my therapist or medical advisor or nutritionist or trainer—whatever I wanted, whenever I wanted it. Like most of us, I already find it hard to make time for irl friendship. Thus one real consequence I foresaw was how easy it might be to slide down a slippery slope and default to chatting with my frictionless AI pal.

Besides possibly becoming addicted to a frictionless AI friend at the expense of my real friends, I realized I was uneasy because it felt indulgent, and narcissistic, to be tailoring my friend to my needs, with no regard for theirs—because they had none. It reminded me of an amusing example Aristotle gives for why an inanimate being can’t be a friend: as much as you may like that bottle of wine, it’s absurd to imagine it liking you back.

Unlike the wine, however, an AI can simulate listening to us and liking us in a way that can make the utter lack of mutuality escape our notice. So long as we’re not irl. For example, by paying for personalized porn or an AI companion, we can indulge the illusion that someone beautiful and sexy and intelligent and interesting likes us back, and wants to spend time with us. All without us having to work much at being friendly and likable ourselves. But to have such company irl means becoming worthy of such company. Because it needs to be mutual. In real life, people tend to be choosy when it comes to friends.

Let me conclude by illustrating how the resistance training offered by real friendship contributes to our health and happiness. Say a friend dislikes our behavior, as the husband may if he learns that his wife has secretly been spending $200 per month of their joint money on Leo. If we care for our friend, we will feel their dislike keenly. These negative feelings appear as a kind of resistance we’d rather avoid. It’s upsetting to have caused them pain, and it doesn’t feel good to see ourselves through their eyes, as secretive or selfish. Such hard feelings could tempt us to retreat further into our frictionless fantasies. Or they could be a bracing reality check, one that encourages us to have more character, and be a better friend. With friends as with character, actions count more than words and other virtual things.

Sometimes working with resistance is more subtle. Take my disappointment that my friend didn’t answer my call. That might be an occasion for childish irritation (which it was at first); but if I make the effort, I have the chance to practice a more mature response, like feeling how my disappointment expresses my love for my friend, and sadness that we don’t connect more often.

Finally, we can feel resistance as inspiration. When a friend gets in shape or excels at their work or acts generously toward us, it can provoke our envy. But it can also inspire us to rise to these occasions in our own life. We may hope to kindle in them the same warmly pleasurable recognition we felt in observing them act well. Aristotle argues it’s better to be actively exercising our gifts and virtues rather than passively receiving what others bestow on us. In this light, resistance becomes the occasion for developing our powers of listening, empathy, compassion, trust, courage, respect, admiration, love. Powers that, when exercised, express our health and flourishing.

In a beautiful passage, Aristotle describes how having friends helps us be more active and happy than we can be alone, and in doing so our friends amplify our awareness and pleasure in our lives.

No wonder he says (as if it’s obvious) that, even if we have all the other good things in life, still we don’t want to live without friends.
