Not Your Parents’ AI (Especially If Your Parents Are Functionalists)

by Tim Sommers

The Theory of Mind That Says Artificial Intelligence Is Possible

Does your dog feel pain? Or your cat? Surely, nonhuman great apes do. Dolphins feel pain, right? What about octopuses? (That’s right, “octopuses” not “octopi.”) They seem to be surprisingly intelligent and to exhibit pain-like behavior – even though the last common ancestor we shared was a worm 600 million years ago.

Given that all these animals (and we) experience pain, it seems exceedingly unlikely that there would be only a single kind of brain or neurological architecture or synapse that could provide the sole material basis for pain across all the possible beings that can feel pain. Octopuses, for example, have a separate small brain in each tentacle. This implies that pain, and other features of our psychology or mentality, can be “multiply realized.” That is, a single mental kind or property can be “realized,” or implemented (as the computer scientists prefer), in many different ways and supervene on many distinct kinds of physical things.

We don’t have direct access to the phenomenal properties of pain (what it feels like) in octopuses – or in fellow humans for that matter. I can’t feel your pain, in other words, much less my pet octopuses’. So, when we say an octopus feels pain like ours, what can we mean? What makes something an example (or token) of the mental kind (or type) “pain”? The dominant answer to that question in late twentieth-century philosophy was functionalism (though many think functionalism goes all the way back to Aristotle).

Functionalism is the theory that what makes something pain does not depend on its internal constitution or phenomenal properties, but rather on the role or function it plays in the overall system. Pain might be, for example, a warning or a signal of bodily damage. What does functionalism say about the quest for Artificial General Intelligence (AGI)?

It suggests not only that AGI is possible, but that there are no in-principle constraints on what we could make an AGI out of, since neither the physical basis nor the phenomenal properties (and by extension, maybe, sentience and consciousness) are necessary to create one. Though there are many possible objections to functionalism, for the reasons just mentioned it has long been viewed as the philosophy of mind that most clearly underwrites the possibility of AGI. After all, on this account, a mind can be made out of anything, even the nation of China, as long as the right functional relations obtain.

Yet functionalism seems to have disappeared from the discourse. Maybe that’s because the functionalist justification for thinking that AGI was possible also offers a very clear way of seeing why generative AI, Large Language Models, ChatETC, are not examples of AGI. To oversimplify a little, LLMs and the like don’t even clear the low bar of functionalist accounts. You need multiple functions related to each other in appropriate ways to make a mind. There’s no knock-down proof, but it seems implausible that there is enough structure, or enough moving parts (so to speak), to make a text predictor into a thinking being.

Other Reasons to Think AGI is Possible

I am not saying that AGI stands or falls with functionalism. There are other reasons to think AGI is a possibility. Here are two.

(1) We ourselves exhibit natural General Intelligence.

Unless we are products of the divine will, or some other kind of miracle or magic, GI is a thing that can happen – because it happened to us. So, why can’t we make a new GI? “We each carry on our shoulders a small box of thinking meat,” as Maciej Ceglowski puts it, “So we know that in principle, this is possible.”

(2) Digital computers seem the closest analog to a mind currently available.

Almost 400 years ago, Thomas Hobbes asked, “What is the heart but a spring, and the nerves but so many strings, and the joints but so many wheels, giving motion to the whole body?” Descartes imagined people on the model of the hydraulic fountains he saw in the Royal Gardens, which gave motion to various automatons. Freud modeled the mind as a steam engine with competing pressures and valves. The mind is invariably modeled after whatever is the most advanced technology of the time. So, for us, computers.

However, it’s important to remember that we did not model our brains as digital computers, or intelligence as a kind of computation, because it fit the experimental data or was independently plausible. We came to think of the brain as hardware or software or both because of the invention and widespread use of computers. These are the springs, strings, and “so many wheels” that we understand.

Nonetheless, I grant you that digital computing is the most likely technological route to AGI. So, how are we doing? I can’t tell. But, again, the functionalism that once underwrote the quest for AGI seems, to me, to undermine current claims, albeit by very smart people, that we are there or almost there.

Geoffrey Hinton and Scott Aaronson

A while back, I talked about remarks by Geoffrey “the Godfather of AI” Hinton (quoted in a recent issue of The New Yorker, in an article linked to on 3 Quarks Daily), including his claim that “by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete.” The suggestion is that large language models, such as GPT, which powers OpenAI’s chatbots, must be able to comprehend the meanings of words and ideas.

There’s no support for the claim that the only way to be good at predicting the next word in a sentence is to understand what is being said. LLMs are evidence, in fact, that it is not true. Further, if anything, prior experience suggests the opposite. Calculators are not better at math than most people because they “comprehend” numbers.

Most of these new generative AIs are text generators, as you probably know. So, while it’s fine to say that intelligence can be multiply realized, as functionalists do, or that different underlying physical processes of the right sort might lead to an outcome that constitutes true intelligence, there’s just not enough mystery here about how these systems work to make that plausible.

Does anyone think there are enough functional components in a model that predicts the next word in a text document to approach AGI? While we may not know what’s going on in an LLM from moment to moment, we know what, in general, is going on. And we have no reason to believe that that very focused process could give rise to understanding, no matter how well the chatbot functions or how much data it is fed.
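For concreteness, here is a toy sketch of that general picture – not anyone’s actual system, and with a trivial word-count table standing in for the billions of learned neural-network weights that do the real work in an LLM – just to make the shape of the process plain: look at the context, score candidate next words, append the likeliest, repeat.

from collections import Counter, defaultdict

# Toy "training" corpus; a real model is trained on trillions of subword tokens.
corpus = "the dog feels pain and the octopus feels pain and the dog sleeps".split()

# "Training": count which word follows which. A real LLM learns a neural scoring
# function instead of a count table, but the table plays the same role here.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_words=8):
    words = [start]
    for _ in range(n_words):
        candidates = follows[words[-1]]                # score possible next words
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # greedily append the likeliest
    return " ".join(words)

print(generate("the"))  # prints: the dog feels pain and the dog feels pain

Everything impressive about an LLM lives inside the scoring step; the point is only that, however sophisticated that step gets, the overall process is still this single function applied over and over.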

Underwriting such a view requires more than functionalism. It requires a super- or meta-functionalism: not only is the mind essentially a matter of functional organization, but any single function can somehow spontaneously instantiate the full range of functionalities needed for AGI if it gets really good at one task, e.g., generating text.

There’s also no evidence that LLMs ever get smarter; they just get more data. But, again, generating text is not even in principle the kind of thing that could be the basis of the entire neural architecture of an intelligent being. And while functionalists set aside phenomenal properties, to the extent that we might believe phenomenal properties arise from the right kinds of functional relations, surely calculating the likelihood of the next word in a sentence, no matter how proficient the AI is at it or how useful that AI is, does not plausibly give rise to sentience.

The often-brilliant Scott Aaronson (in a video posted here on 3 Quarks Daily) makes a case against what he calls “the religion of justism.” Here are the kinds of things that he says “justa” critics say about LLMs.

They are justa stochastic parrot. They are justa next-token predictor. They are justa function approximator. They are (see Hinton) justa gigantic autocomplete.

Aaronson says it never occurs to such critics to ask what we – you and I – are justa. Instead, these critics keep moving the goalposts. ChatGPT and friends are doing more than almost anyone ever expected so soon. So, what must an AI do to catch a break?

One last time: functionalism may be the philosophy of mind most amenable to the AGI project, but here it tells us what goes wrong with thinking of an LLM as AGI. There are not enough functionalities related in enough of the right ways for current systems to realize anything remotely like intelligence. They are still justa way of generating text from other text. Whereas we are, at the very least, justa whole bunch of other functionalities as well.

Well, but LLMs are “getting better,” Aaronson keeps insisting. But getting better is still getting better at the one single thing they do – generate text. There is zero evidence that they are getting more intelligent in any sense relevant to AGI, much less, of course, sentient or conscious. It’s just that text is so seductive and so invites anthropomorphizing. No matter how good your calculator gets at math, you are not likely to think it must really be thinking.

If Aaronson (or anyone else) has a plausible theory of mind to replace functionalism – one that makes it remotely likely that any current AI exhibits any of the standard features of a thinking person – I wish they would stop keeping it a secret.

AI Costs

A small final aside on the cost of AI and Aaronson’s assumption that it will change civilization. AI is way more costly than you might think.

(1) AI is not profitable now.

For Google, Microsoft, and OpenAI, AI is a big money loser so far. They are pre-revenue on AI, as they say. Meanwhile, ChatGPT costs three-quarters of a million dollars a day to keep up and running.

(2) AI is not free.

While ChatGPT is free for you and me to play with, it’s not free free. If you ask Google’s AI between 10 and 100 questions, it will use up half a liter of water answering them. In 2022, Google’s data centers consumed about 20 billion liters of fresh water for cooling. Currently, 2-3% of global greenhouse gas emissions are AI-related. Then there’s energy consumption.

“Training an AI…uses more electricity than 100 US homes consume in an entire year.” Sam Altman, the CEO of OpenAI, told Davos last year that generative AI will require a “breakthrough” in nuclear fusion to continue to be feasible: “There’s no way to get there without a breakthrough.” In conclusion, all we need now is fusion. Great.

(From the Journal of Fusion Energy (2023): “In conclusion, according to the collective remarks by scientists, the popular phrase ‘fusion is always 30 years away’ is proven wrong, technically speaking. To be precise, we should now say ‘fusion was said to be 19.3 years away 30 years ago; it was 28.3 years away 20 years ago; 27.8 years away 10 years ago.’ And now, scientists believe fusion energy is only 17.8 years away. So there is a progress…”)

___________________________________________________________________

My view is that the standard features of personhood map pretty well onto the criteria for being a thinking being.

(1) Sentience – able to feel, at a minimum, pleasure and pain;

(2) Consciousness – in both the everyday (they are awake) sense and the “hard problem” sense (the lights are on/they are not a philosophical zombie);

(3) Self-Consciousness – might be the same as (2), but I mean in the sense that you are aware of yourself as a “self” and the world as separate from you;

(4) Language ability or the ability to communicate in some way;

(5) Rationality – capable of using reason and giving and responding to reasons;

(6) Morality – able to understand and follow moral rules.

These criteria are debatable. But it looks like LLMs meet only (4) and not even (5).