Artificial General What?

by Tim Sommers

One thing that Elon Musk and Bill Gates have in common, besides being two of the five richest people in the world, is that they both believe that there is a very serious risk that an AI more intelligent than they are – and, so, more intelligent than you and me, obviously – will one day take over, or destroy, the world. This makes sense because in our society how smart you are is well-known to be the best predictor of how successful and powerful you will become. But, you may have noticed, it’s not only the richest people in the world who worry about an AI apocalypse. One of the “Godfathers of AI,” Geoff Hinton, recently said “It’s not inconceivable” that AI will wipe out humanity. In a response linked to by 3 Quarks Daily, Gary Marcus, a neuroscientist and founder of a machine learning company, asked whether the advantages of AI were sufficient for us to accept a 1% chance of extinction. This question struck me as eerily familiar.

Do you remember who offered this advice? “Even if there’s a 1% chance of the unimaginable coming due, act as if it is a certainty.”

That would be Dick Cheney, as quoted by Ron Suskind in “The One Percent Doctrine.” Many regard this as the line of thinking that led to the Iraq invasion. If anything, that’s an insufficiently cynical interpretation of the motives behind an invasion that killed between three hundred thousand and a million people and found no weapons of mass destruction. But there is a lesson there. Besides the fact that “inconceivable” need not mean 1% – but might mean a one following a googolplex of zeroes [.0….01%] – trying to react to every one-percent-probable threat may not be a good idea. Therapists have a word for this. It’s called “catastrophizing.” I know, I know, even if you are catastrophizing, we still might be on the brink of catastrophe. “The decline and fall of everything is our daily dread,” Saul Bellow said. So, let’s look at the basic story that AI doomsayers tell.

Eventually someone will create an artificial general intelligence (AGI) more intelligent than the smartest humans. This AGI will be recursively self-improving on computer time-scales and so will quickly make itself much more intelligent until it achieves (what Nick Bostrom calls) “superintelligence.” Then it will either take over the world or immediately wipe us out as a means of self-defense (so we can’t turn it off, for example) or due to a too literal interpretation of its programming (the AGI assigned to make paper clips, in an oft-used example, makes the whole planet, and everything on it, into paper clips before we can stop it).

I doubt this. I don’t doubt that AI will be more and more disruptive, transformative, partly good, partly bad, and that things will definitely change. I only doubt the apocalypse story in which an AGI wipes us out or takes over. For one thing, the standard AGI apocalypse story bears a suspicious resemblance to a marriage of Frankenstein’s Monster with the “Monkey’s Paw.” For another, the logic of the AGI apocalypse resembles the seemingly implacable logic of so many prior, hotly anticipated apocalypses, including neo-Malthusian predictions, like Paul Ehrlich’s in his best-selling 1968 book The Population Bomb, that “In the 1970s, hundreds of millions of people will starve to death” because of overpopulation. But I will leave that line of argument to others.

Since I am a philosopher and not a neuroscientist, computer scientist, or coder, I don’t want to focus on what neuroscientists and coders know, which is more than I do. I want to focus on what they don’t know. Specifically, they don’t know what intelligence is. Neither do I. But just as the Oracle of Delphi called Socrates the wisest man in Athens not because of what he knew, but because of what he knew he didn’t know, so too, while I am no Socrates, I think I know that we don’t know some things we need to know before we can know what’s possible for AGI. You know?

(1) We don’t know whether intelligence involves sentience, consciousness, self-consciousness, or more.

Here’s a pretty standard list of things that make a person a person. (a) Sentience: able to feel, at a minimum, pleasure and pain. (b) Consciousness 1: awake/aware. (c) Consciousness 2: “hard problem”/“there is something it is like to be me”/“not a philosophical zombie” consciousness (this may be the same as (b)). (d) Self-Consciousness: aware of oneself as a “self,” and of the world as separate. (e) Language: the ability to communicate in some way by manipulating symbols. (f) Rationality: capable of using reason and/or giving and responding to reasons. (g) Morality: able to understand and follow moral rules.

I am not saying that I know these are the requirements for being a person and/or intelligent. These are just some oft-mentioned possibilities. What we don’t know is which of these are essential to intelligence. (f) seems to be what the AGI debate is mostly about, but we don’t know whether other things are also required. If sentience or consciousness, for example, is required, then there’s no evidence that any progress towards AGI has ever been made at all – even a little tiny bit. And even supposing a being might be intelligent without being anything other than rational, it’s hard to see why such a being would be motivated to do anything, even to add to its own intelligence or processing power.

(2) We don’t know what “general” intelligence is.

Some AIs always win at Jeopardy! or at chess or Go. The calculator on your phone is better at basic math than you, probably. I think my Spotify AI is better at picking music I’ll like than I am. That’s obviously different from general intelligence. But even if you create one AI that wins at Jeopardy! and chess and Go, and does math while picking out better music than you or I can, that’s still not general intelligence. We have no idea whether all the recent machine learning progress, which has been made only with domain-specific AIs, contributes at all, in any way, to the project of creating artificial general intelligence. It depends on what the “general” in general intelligence is.

(3) In fact, we don’t know what “intelligence” is either.

Just as AGI researchers were formulating the goal of creating a general intelligence, many psychologists were beginning to deny that there is such a thing as general intelligence – something like an IQ, or whatever is measured by an IQ test. Some have argued that we should break “intelligence” into a number of more specific intelligences – not that different from the specific ones that AGI researchers are hoping to transcend. This stuff is too controversial to be very helpful at this point. But I think it is fair to say this. Forget knowing what “general” intelligence is; psychologists and neuroscientists still don’t agree on what intelligence is – or how to measure it – well enough to predict anything about what machine intelligence would be like if it could exist. In the meantime, if you are an AI researcher and you think that the best way to understand intelligence is by amalgamating domain-specific abilities, just keep in mind that some psychologists think that the best way to understand general intelligence is by breaking it down into domain-specific abilities.

(4) We also don’t know the character and space of possible intelligences.

But let’s pretend that we do know what intelligence is and we do know what general intelligence is.

Maciej Ceglowski argues that another essential premise of the argument for an AGI apocalypse is that “the space of all possible minds is large…In particular, [there’s nothing like a] physical law that puts a cap on intelligence…” This is important because the AGI apocalypse seems to require, not just human-level general intelligence, but something way beyond that. Is that even possible?

As Ceglowski points out, especially since we know so little about what intelligence is, why shouldn’t we expect that there will be all kinds of limitations on it that we also don’t know about yet? Further, as I hinted at the beginning, isn’t there an odd assumption here that intelligence is like a superpower – or a megalomaniacal kind of sociopathy? If you are intelligent, in the doomsday story, you are also super persuasive (hook me to the internet!), wildly inventive (I just made a new doomsday weapon!), and single-mindedly motivated to seek power at all costs (bow down before me or die!) – among other things. (There’s a great story in which scientists turn on a supercomputer and ask it if there is a God, and it answers, “There is now.”*)

Most AGI doomsayers rely on an even stronger claim than just that there are no limits on general intelligence. As noted, they claim that an AI exhibiting general intelligence will be recursively self-improving. The way AGI goes from merely irritatingly literal to world-ending so quickly we can’t turn it off, on this account, is that once it’s smarter than us, it starts making itself smarter and smarter. Which is a weird idea. If that’s a thing that can be done, why don’t we do it? Are we not smart enough? Or is it that we don’t have access to our own code or central processor or something? But large language models don’t have access to their own inner workings either, do they? Anyway, where have we seen computers that can do this? From what I can tell, machine learning is about more and more data. It is not the case that the algorithm becomes more intelligent, much less makes itself more intelligent.
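To make that last distinction concrete, here is a minimal sketch – a toy illustration of my own, not any particular system – of what an ordinary machine learning training loop does: a fixed update rule adjusts a handful of numbers to fit data. More data or more passes can improve the fit, but nothing in the loop modifies the learning rule, let alone the program itself.

```python
# A toy linear model, y = w*x + b, fit by gradient descent.
# Illustrative only: "learning" here means nudging two numbers,
# not rewriting the algorithm.

def train(data, steps=2000, lr=0.01):
    w, b = 0.0, 0.0
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y        # prediction error on one example
            grad_w += 2 * err * x
            grad_b += 2 * err
        w -= lr * grad_w / len(data)     # the same fixed update rule,
        b -= lr * grad_b / len(data)     # every pass, forever
    return w, b

print(train([(1, 2.1), (2, 3.9), (3, 6.2)]))  # roughly w ≈ 2, b ≈ 0
```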

But even if I’m wrong about all that, since none of these large language models are even purported to exhibit general intelligence, by definition, none of them are recursively improving their general intelligence. So far, we just have no evidence that this is a thing that could happen. It’s pure science fiction. A cool story idea. But is that how we behave as intelligent beings? Are we totally focused on becoming smarter and smarter all the time, above all else, no matter what it takes?

(5) We also don’t know the significance of the fact that intelligence seems to be social.

As I said at the beginning, the AGI story finally explains why the most intelligent people in the world are also the richest and run everything – not to mention are the most socially adept and persuasive (e.g., see Musk’s tweets). That’s one issue about how intelligence is social. Here’s another.

Human beings are social animals. Most other relatively intelligent animals (chimps, dolphins, etc.) are also social. There seems to be some kind of connection between intelligence and sociability. Maybe it’s just that social cooperation requires reasoning or makes reasoning useful. But starting with language and going all the way up to general intelligence, many philosophers now see all intelligence as deeply social. We participate in complicated, and sometimes vast, networks of cooperation in making stuff, but also in making language and concepts and ideas.

A solitary AGI would be wholly dependent on us for its intellectual development. No matter how smart it is, why believe that it can somehow outthink all the cumulative achievements of collective human thought over eons and overturn everything people have created, very quickly and all by itself?

There’s a familiar narrative of transcendent “specialness” that I believe is behind this way of thinking – see Ayn Rand, X-Men, Ender’s Game, Harry Potter, etc. This narrative is especially comforting to young, smart, socially isolated people, but it encourages a distorted view of our social world. Why think that an AGI can manipulate social interactions so successfully that it can take over the world – or even just avoid being turned off? Intelligence was not the best predictor of success in social interactions at my high school.

I wonder if there is some group of people somewhere who wildly overestimate the significance of intelligence, all by itself? And I wonder if such people might have their own favored apocalyptic theory centered around the thought that somewhere in the universe there might be someone smarter than they are. Hearing all this, they might think that I don’t know just how smart an AGI could be. The thing is, neither do they.

*I thought that this was an Asimov story, but it seems to be a Fredric Brown story. I am not entirely sure. Stephen Hawking, another AI doomsday believer, told it to John Oliver without any attribution. Just saying.