The Society that Mistook its Data for a Mind

by Rebecca Baumgartner

In the Black Mirror episode “Be Right Back” (2013), two romantic partners, Martha and Ash, move into Ash’s childhood home together. As they’re settling in for their first night in their new home, Ash finds a photo of himself as a little boy, one of the few left out after his mom had removed all the photos of Ash’s dad and brother after their deaths. “She just left this one here,” he says, “her only boy, giving her a fake smile.”

Martha says, “She didn’t know it was fake.”

“Maybe that makes it worse,” Ash says.

This theme – the ability to tell what’s authentic from what’s not, to understand the difference between performance and reality – is what the episode proceeds to dig into.

As it turns out, Ash dies in a car accident the day following that conversation. Martha is devastated and heartbroken. At his funeral, a friend says to her, “I can sign you up to something that helps…It will let you speak to him.” This turns out to be an AI service that culls data from a deceased person’s online posts to create a chatbot that can interact with users in the voice and style of the deceased person. The more active the deceased had been on social media, the better the data set, and the more true to life the chatbot will be. “The more it has, the more it’s him,” the friend explains. Initially, of course, Martha is horrified and insulted by the idea.

“It won’t be him,” she protests.

“No, it’s not,” her friend admits. “But it helps.”

Some time passes. Martha realizes she’s pregnant, and in a moment of vulnerability and loneliness, wanting to tell Ash about the pregnancy and feeling heartbroken that she can’t, she decides to try the chatbot service. Very quickly, she gets sucked in, uploading voice data to speak with “Ash” over the phone, staying up late chatting with him, and ignoring the living people in her life who care about her. “I think I’m going mad,” she says to him.

Martha teaches AI-Ash their inside jokes and private references, training him (literally, training the program that’s mimicking him) in the intimate history of their couplehood. He absorbs and remembers everything, as you’d expect, but that in itself is uncanny – as is his Alexa-like submissiveness and virtual-assistant tone (“I’ll only do it again if you ask,” he says at one point, when she’s creeped out by him googling something in the middle of a conversation).

Martha rejects calls from her sister, Naomi, so she can keep talking to AI-Ash. She takes her phone everywhere, so she can talk to him constantly. When her phone breaks and she loses her connection to him, she panics, feeling as though she’s lost him all over again. “I dropped you,” she says, sobbing.

“I’m not in that thing, you know,” he says. “You don’t have to worry about breaking me.” This fact itself emphasizes his lack of humanity: AI-Ash, unlike real Ash, is backed up in the cloud, transferable to any device or form, replicable and replaceable. “I’m not going anywhere,” AI-Ash says, somewhat menacingly. And in fact that’s part of the problem. He cannot be broken, he cannot be lost: he is invulnerable. Nor does he change, or grow, or have bad moods, or do anything truly new. What’s more, his own invulnerability makes it impossible for him to understand her vulnerability, or have empathy.

Eventually, at the chatbot’s suggestion, Martha orders a life-size android device made to look like Ash, with AI-Ash’s personality and voice built in. The android’s features are modeled after the curated pictures Ash had posted of himself online. Desperately missing the man she loves, Martha overcomes her initial uneasiness and begins trying to establish a somewhat normal life with the android.

But right away, there are things that are off. He doesn’t need to eat. He doesn’t sleep, but lies for hours next to her in bed with his eyes open. He doesn’t breathe. When all of these things start to grate on Martha, he changes his behavior to her liking, but that’s just it – she has to train him, like a dog or a performer. He is, in actual fact, merely a performer. He is an ornament to her life, not a participant in it. He’s too solicitous, too acquiescent. He doesn’t have a backbone or a will of his own. He doesn’t mind being trained or treated like an object, and that’s the ultimate giveaway of his inhumanity. No human would surrender their agency so cleanly.

This is eventually what breaks Martha’s all-too-willing suspension of disbelief. The real people in Martha’s life, most notably her sister Naomi, can hold her accountable and expect things from her; they can have priorities other than placating her; they can change her and be changed by her; they have a desire to be heard and respected; and they are capable of administering tough love. Android-Ash can do none of these things, and yet it’s not for lack of trying on both their parts. Martha has given him so much data, so much training and feedback. “It doesn’t work,” she tells him after he tries to breathe naturally. “I can tell that you’re faking it.”

One day, at the top of a seaside cliff known as a popular suicide spot, Martha tells him, “You’re just a few ripples of you. There’s no history to you. You’re just a performance of stuff that he performed without thinking, and it’s not enough.”

She tells him to jump, and when he calmly begins to obey, she tells him in a rush of frustration that the real Ash would have been crying and scared at the prospect of dying. Android-Ash immediately adapts. He begins crying and begging Martha not to make him jump. “That’s not fair!” she shouts. He’s successfully manipulated her, used her love of Ash against her for the sake of creating a more convincing illusion. Ultimately she finds a way to live with him, but it is no sort of life worth speaking of, for her or for him.

***

The technology explored in such disturbing detail in “Be Right Back” is called a thanabot, and it now exists, thanks (?) to large language models like ChatGPT. If this creeps you out, it probably should. (To all the tech companies out there developing products that have been the subject of a Black Mirror episode, please stop creating the Torment Nexus.)

Why aren’t thanabots and other chatbots enough? Anyone can see they’re missing something, but what is it?

Like other non-religious folks, I believe that our minds are made of nothing but brain, and that we can understand the mind by understanding the brain (and, as is increasingly being recognized, its connections with the rest of the body and the environment it finds itself in), without having to resort to spiritual explanations. Despite that, I share the common-sense intuition of the majority that there is something that makes each person uniquely who they are, something which is not predictable, or replicable, from the state of their neurons at any given time.

Historically this ineffable something bridging the purely biological with the uniquely personal has been called a “soul,” which in some belief systems is thought to persist after death. However, it’s not necessary to believe in the literal existence of a soul to recognize that we need a more “personalistic” understanding of the mind in order to grasp, as neuroscientist Oliver Sacks puts it, the physical foundations of the persona. The sciences of the mind are still trying out various possible explanations of how selfhood arises from purely physiological mechanisms.

In The Man Who Mistook His Wife for a Hat (1985), Sacks describes this idea as “the very intersection of mechanism and life…the relation of physiological processes to biography.” The patient case histories he describes in the book are fascinating because they show, on the one hand, how dependent our selfhood is on basic biological processes which can so easily go awry; and on the other hand, how large of a gulf nevertheless remains between those biological processes and what they are capable of producing – that is, our sense of self.

Even gathering all possible neurological data from a particular person wouldn’t necessarily get us closer to knowing what’s in that gap. As Sacks put it:

“When Jonathan Miller [director and writer] had a wonderful moment, an epiphany, or fell in love or whatever, he used to say that he wished he could have a brain print – that would completely capture his brain state at that time, so he would re-experience it and so that others perhaps could experience it. But it seems likely that the brain print of a mental state would be different in everyone, so that it couldn’t be decoded by anyone else – this, among other things, might make telepathy impossible.

“Secondly, possibly because one is continually altering and reconfiguring one’s own brain, it might not be intelligible. So maybe the moment and the brain state would be unique always, and one wouldn’t know what to do with it, even if it could be made.”

In the absence of such a brain state, the only things thanabots and similar AI technologies have to work with are the performative, outward-facing statements that the person chose to reveal, as mediated through a particular form – i.e., their posts, emails, photos, voice recordings. What these communications lack, in even the most revealing instances, is difficult to place, but we know it’s missing.

In the meantime, we can find clues in the lives of patients for whom the physical foundations of the persona have crumbled, as Sacks did. For instance, the experience of one patient, Dr. P. (the titular man who mistook his wife for a hat), gives us insight into what thanabots and other AI simulations are missing, and may always be doomed to miss: they are unable to see the forest for the trees. Sacks relates how he asked Dr. P. to describe a picture:

“His responses here were very curious. His eyes would dart from one thing to another, picking up tiny features, individual features, as they had done with my face. A striking brightness, a color, a shape would arrest his attention and elicit comment – but in no case did he get the scene-as-a-whole. He failed to see the whole, seeing only details, which he spotted like blips on a radar screen. He never entered into relation with the picture as a whole – never faced, so to speak, its physiognomy. He had no sense whatever of a landscape or scene.”

Later, Sacks describes showing Dr. P. a picture of his (Dr. P.’s) brother. “That square jaw, those big teeth – I would know Paul anywhere!” the patient exclaims.

“But was it Paul he recognized, or one or two of his features, on the basis of which he could make a reasonable guess as to the subject’s identity? In the absence of obvious ‘markers,’ he was utterly lost. But it was not merely the cognition, the gnosis, at fault; there was something radically wrong with the whole way he proceeded. For he approached these faces – even of those near and dear – as if they were abstract puzzles or tests. He did not relate to them, he did not behold. No face was familiar to him, seen as a ‘thou,’ being just identified as a set of features, an ‘it.’ Thus there was formal, but no trace of personal, gnosis. And with this went his indifference, or blindness, to expression. A face, to us, is a person looking out – we see, as it were, the person through his persona, his face. But for Dr. P. there was no persona in this sense – no outward persona, and no person within.”

Apprehending a scene in its totality is more than simply registering the presence of all of its components. There is an act of synthesis, a process of interpretation, a self that guides the meaning-making process and to whom the resultant meaning means something. Contrary to the Black Mirror character’s assertion that “The more it has, the more it’s him,” this process of selfhood does not seem to be simply an additive one. It’s not the case that if 1 + 1 is vaguely Ash-like, 1 + 1 + 1 will necessarily be more Ash-like. It might just be more of whatever was vaguely Ash-like, with no progress made towards the goal of “real Ash.” The +1 that is added may even take the simulation further away from “real Ash,” if what it is adding is junk or inconsistent with previous inputs.

We intuitively feel that there is a totality to someone’s personhood that isn’t divisible, and we – those of us who do not suffer from neurological diseases like Dr. P.’s – perceive this automatically. We do not perceive our intimate partners as a collection of body parts that we’ve committed to memory as an abstract entity known as “My Spouse,” but rather as a thou whom we relate to as a subject with the same moral status as ourselves.

If this is so, then simply feeding a large language model more and more examples of language will not necessarily get us closer to the ideal of a self that its developers are trying to represent (or hoping the system will represent on its own, given time and feedback). Having more of someone’s facial features to perceive (more data) may get an artificial system closer to correctly identifying that person, just like the brain-damaged Dr. P. managed to, but it wouldn’t get the system any closer to perceiving the person behind the facial features. Recognizing a thou within the data depends upon having (among other things, perhaps) a theory of mind, a knowledge of one’s own personhood, and what philosopher Daniel Dennett calls an intentional stance.

Whatever it is that makes a particular human being human – worthy of consideration in our moral calculus as a free agent – does not seem to be a matter of piling on more data. Achieving personhood is not a matter of making a more sophisticated autocorrect that knows enough words to tell us it has achieved personhood.

Computer scientists – who, let’s be clear, are not experts on the human brain or how the brain makes the mind, even though you could be forgiven for thinking so based on how blithely they throw around neurological metaphors – assume that this obstacle, while possibly insurmountable for the time being, will eventually yield to more time or improved computing power. Once we get larger training sets, or richer data, or some new innovation we can’t even conceive of yet (so the argument goes), there’s no a priori reason a sufficiently advanced artificial pattern-recognition system couldn’t eventually become a mind, and from there achieve something we could agree entitles the system to personhood.

I think this is like saying a bowl of ingredients will eventually become a cake if we just use better ingredients, devise a better bowl, or train the bowl of ingredients to learn what kinds of cake humans like to eat. It’s the difference between thinking that humans are stochastic parrots and recognizing that we’re something fundamentally different. Yes, natural and artificial processing systems are both made of stardust, if you want to think about it that way, but saying that wires and neurons both use electricity, or that variables and neuronal firing potentials both operate on a system of weighted odds, doesn’t prove that one can eventually function like the other, given enough X.

Humans are not (to quote Sacks quoting Hume) “nothing but a bundle or collection of different sensations, which succeed each other with an inconceivable rapidity.” Or rather, only brain-damaged people like Dr. P. work this way, and their cognition is devastated as a result. It is only in severely brain-damaged individuals, or artificial almost-minds, that we see this “gruesome reduction of a man…without a past (or future), stuck in a constantly changing, meaningless moment.”

***

From the outside, it’s devilishly difficult, if not impossible, to tell the difference between a Dr. P.-like mind and a normal mind “recognizing” a loved one. Both will be able to identify a photo of Paul as “Paul.” But only one of them will see “Paul, the person as a whole who is a thou like me.”

For the time being, we know that ChatGPT’s performance of intelligence and a thanabot’s performance of a deceased person are fakes, and we are, overall, savvy enough not to be hoodwinked by them. But people who know this intellectually are already starting to forget it on an emotional level, to lazily assume there is a real mind there. Our brains are used to dealing with sentient others, so the default is to assume that if something seems kind of human-ish, it’s sentient (cf. mythology, religion, and ghost stories). More and more of us are giving ChatGPT and other systems the benefit of the doubt, waving them through because they seem enough like people that it doesn’t really matter.

But such an unreflective stance is only possible when we ignore the profundity of the gap between physiology and biography, between mechanism and life. We may eventually come to understand and bridge that gap by exploring fruitful avenues in neurology – in particular, the “neurology of identity” that Sacks spoke of. It’s not that the question is closed to science, but that computer science and AI may not be the right sciences for achieving what their proponents say they are trying to achieve. Ideally, understanding the neurology of identity and how selfhood arises out of matter should involve experts in actual neurology and identity and selfhood, not just experts in counterfeit versions of these things.

As of now, the only thing that large language models show us is that we’re not even close to understanding what happens in the gap between neuron and person. And I think most of the less-bombastic AI researchers today would agree with that statement. However, where I differ with many of these researchers is that I don’t think more data, more training, or more computing power will necessarily get us any closer to the goal. That’s because such piling-on of discrete parts completely misses the idea of a person-as-a-whole. A bowl full of ingredients is not a cake. And adding more ingredients doesn’t make it more like a cake.

I also tend to agree with Dennett that the goal itself is misguided, whether or not we are eventually able to achieve it. As he writes in a piece for the 2019 collection Possible Minds, “We don’t need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools.” Elsewhere he elaborates: “We’re making tools, not colleagues, and the great danger is not appreciating the difference, which we should strive to accentuate, marking and defending it with political and legal innovations.”

The risk is not that we’ll get better at making what Dennett calls “counterfeit people” until eventually they become real people; rather, the risk is that we’ll merely get better at convincing ourselves and others that we’ve gotten better at making counterfeit people and that those counterfeit people are conscious. At first it might just be the folks on the fringe who deceive themselves this way, like the former Google employee who believed the company’s large language model had achieved sentience. It’s easy to write these people off as crackpots for now. But who’s to say that, down the road, many more of us won’t fall for the fool’s gold of a counterfeit person and convince ourselves that the “cutesy humanoid touches” that AI creators use to plaster over the uncanny valley are really all we’re made of? As linguist and AI-malcontent Emily M. Bender put it, “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”

We may think we’d be too sophisticated to fall for this, but we’re not. When the ELIZA program first came out, its creator Joseph Weizenbaum was disturbed by (in Dennett’s words) “the ease with which his laughably simple and shallow program could persuade people they were having a serious heart-to-heart conversation.” Lest any of us think that we’re immune, Dennett brings us down to earth: “Even very intelligent people who aren’t tuned in to the possibilities and shortcuts of computer programming are readily taken in by simple tricks.” (He even suggests that only those who have proven they can see through the artificiality of the system should be allowed to use it, a sort of reverse Turing Test.)

This isn’t just about gullible people seeing Jesus in a piece of toast or lonely people befriending a volleyball, either. We’re all susceptible; our species’ tendency to anthropomorphize runs rampant. The NASA scientists working on the Mars rovers Spirit and Opportunity (a rational lot if there ever was one) anthropomorphized the machines to such an extent that they attributed emotions, a personality, and a gender to each of them, described their malfunctions in terms of human diseases like amnesia and arthritis, and referred to them as their “children” (there was a lot of crying). So we shouldn’t be surprised if the average layperson leaps headfirst into the suspension of disbelief that allows them to bypass their native bullshit detectors and start bonding with a chatbot – especially if that chatbot sounds just like a dearly missed loved one. Given a sufficiently terrible loss and a sufficiently acute need for connection, who wouldn’t consider chatting with a thanabot?

But conversely, how many of us would, like Martha, have the honesty and strength of mind to admit to ourselves that it wasn’t enough, that there was no person behind the persona, and that it might be bad for us to pretend there was?

***

Thanabots and similar technologies demonstrate in a particularly creepy way how the capabilities of AI continue to outstrip the necessarily slow process of establishing cultural, legal, and ethical norms regulating its use.

One of many thorny issues to consider about the potential use of thanabots is whether it’s ethical to use the likeness and digital records of a dead person who never consented to that use. In the future, it may become more common for people to establish their consent (or lack thereof) for such services as part of digital estate planning – similar to how you can already tell Google what to do with your Gmail account once you’ve been inactive for a certain amount of time. But such a process of consent is not yet standard.

Other issues come to mind: Even if consent laws evolve on this topic, how could someone possibly give “informed consent” to the creation of a thanabot to begin with, when the mechanisms by which thanabots (and ChatGPT itself) operate are opaque even to their creators? Even the creators of the technology are not “informed” in this sense. According to one AI researcher: “We just don’t understand what’s going on here. We built it, we trained it, but we don’t know what it’s doing.” Literally no one knows what a potential thanabot user would be signing up for, and the technology would likely change drastically between the time consent was given and the moment their thanabot was eventually created.

In addition, what do we, as a society, owe those who are mourning? Should we allow corporations to opportunistically provide them with a technological “solution” to their grief, if the grieving person wants it (and is willing to pay for it)? Or do we decide that we have a responsibility to protect such people from exploitation? If psychics, mediums, and other charlatans are rightfully considered despicable for preying on vulnerable people to make a buck, how are thanabots or the companies that develop them any better? At what point should a certain AI, or the company that creates it, be considered predatory?

If using thanabots for a short time following a loss is morally acceptable, but using a thanabot for years, to the exclusion of developing new relationships, is not, where does the limit lie, and who gets to set it? Should we trust the grieving person to set that boundary for themselves?

More broadly speaking, what’s the risk of interacting with counterfeit people? What do we risk by trusting them, building intimacy with them, confiding in them, treating them as a thou? Therapeutically, if not philosophically, this is far from trivial. There has already been at least one suicide partially attributed to the use of a chatbot built on a large language model.

For people who are vulnerable – not just those who are grieving, but also those who are isolated, mentally unwell, incapacitated, underage, elderly, cognitively disabled – there are not currently sufficient guardrails in place to prevent harm from these technologies. There is nothing stopping people from relying on them as amateur (worse than amateur: crowdsourced) psychiatric care, or simply as a replacement for healthy human contact, and we don’t know what the long-term effects of that would be. But it stands to reason that vulnerable people will be more inclined than healthy people to invest the technology with more trust than it deserves, simply because they need to believe the delusion more.

This is a problem because, unlike therapists, social workers, and many other people tasked with helping other humans, an AI has no fiduciary responsibility to act in its users’ best interests. “Artificial people will always have less at stake than real ones, and that makes them amoral actors,” Dennett says. The situation is clearly ripe for exploitation and abuse.

Large language models and the tools that use them are trained to be bullshitters. This term isn’t mere invective – it describes a particular stance towards the truth. A bullshitter literally doesn’t care whether something is true or false. It makes no odds to them. This is what’s going on with well-documented instances of “sycophancy” and “sandbagging” in large language models, as described in a paper by researcher Sam Bowman. Sycophancy is when “a model answers subjective questions in a way that flatters their user’s stated beliefs” and sandbagging is when a model is “more likely to endorse common misconceptions when their user appears to be less educated.”

What would happen if the primary source of social support for large swaths of the population took the form of unregulated interactions with amoral, sycophantic, sandbagging bullshitters? What happens to a society that confuses data with minds, predictive accuracy with intelligence? Who benefits from the glorification of fooling people with counterfeits? Why does our society think this is a worthwhile goal, and what should be the goal instead? The time may soon be here, if it’s not here already, in which these questions are no longer merely philosophical, but are real problems to be solved by psychiatry, sociology, public health, government policy, and law.

What seems at first innocuous may turn out, in retrospect, to be one concession too far. As Dennett warns, “When attractive opportunities abound, we are apt to be willing to pay a little and accept some small, even trivial, cost-of-doing-business for access to new powers. And pretty soon we become so dependent on our new tools that we lose the ability to thrive without them. Options become obligatory.”

As we continue building amoral tools, and rewarding their creators for devising better ways of tricking us into treating these tools like our colleagues, we will likely see these cost-of-doing-business concessions grow to the point where we no longer feel we have a choice in the matter. We’ll become the society that mistook its data for a mind.

And like Dr. P., who “did not know what was lost, did not indeed know that anything was lost” when he lost the ability to see the person behind the persona, our situation may be all the more tragic because we won’t even remember that we used to perceive a difference between the two.