Terrible AI Arguments (and, No, AIs Will Not be Recursively Self-Improving on Computer-Like Time Scales)

by Tim Sommers

(The butter robot realizing the sole purpose of its existence is to pass the butter.)

In the halcyon days of “self-driving cars are six months away,” you probably encountered this argument. “If self-driving cars work, they will be safer than cars driven by humans.” Sure. If, by “they work,” you mean that, among other things, they are safer than cars driven by humans, then it follows that they will be safer than cars driven by humans, if they work. That’s called begging the question. Unfortunately, the tech world has more than its share of such sophistries. AI, especially.

Exhibit #1

In a recent issue of The New Yorker, in an article linked to on 3 Quarks Daily, Geoffrey Hinton, the “Godfather of AI,” tells Joshua Rothman:

“‘People say, [of Large Language Models like ChatGPT that] It’s just glorified autocomplete…Now, let’s analyze that…Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So, by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete.’ Hinton thinks that ‘large language models,’ such as GPT, which powers OpenAI’s chatbots, can comprehend the meanings of words and ideas.”

This is a morass of terrible reasoning. But before we even get into it, I have to say that thinking that an algorithm that works by calculating the odds of what the next word in a sentence will be “can comprehend the meanings of words and ideas” is a reductio ad absurdum of the rest. (In fairness, Rothman attributes that view to Hinton, but doesn’t quote him as saying that, so maybe it’s not really Hinton’s position. But it seems to be.)

Hinton says that by “training something to be really good at predicting the next word, you’re actually forcing it to understand.” There’s no support for the claim that the only way to be good at predicting the next word in a sentence is to understand what is being said. If anything, LLMs are evidence against that claim, not for it, and prior experience points the same way. Calculators are not better at math than most people because they “understand” numbers.
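To make the “glorified autocomplete” picture concrete, here is a deliberately crude toy of my own, not anything from Hinton or Rothman, and nothing like a real LLM in scale or sophistication: a bigram model that “predicts” the next word purely from frequency counts. Feed it more text and more machinery and it gets better at guessing; at no point does anything you would want to call understanding obviously enter the picture.

```python
# A deliberately crude next-word "predictor": count which word follows which,
# then guess the most frequent follower. Nothing but frequencies are involved;
# there is no representation of meaning anywhere. (Real LLMs are vastly more
# sophisticated, but the training objective is the same: predict the next token.)
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word is followed by each other word.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most common follower of `word` and its estimated probability."""
    counts = followers[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("sat"))  # ('on', 1.0): perfectly "predictable," zero comprehension
print(predict_next("the"))  # a guess from counts alone, e.g. ('cat', 0.375)
```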

And does Hinton really mean to imply that there could be comprehension of the entirety of a human language without any senses or experiences or a body? Could a program generating sentences “understand” redness or sorrow just by understanding the likelihood that that word is about to come up next (given its calculation of how often, and where, the word typically gets used) in a sentence it is itself creating? In fact, how can a chatbot be expressing something by essentially predicting what it will itself say? What it will itself say about literally nothing, by the way, since, on most accounts of reference, there’s no possibility that the language it deploys refers, in these circumstances, to anything.

It’s fine to say that intelligence can be multiply realized, as functionalists do, or that different underlying quasi-psychological processes, provided they are of the right sort, might lead to a quasi-psychological outcome, but there’s no mystery here. While we may not know what’s going on in an LLM from moment to moment, we know, in general, what is going on. And we have no reason to believe that that process could give rise to understanding, no matter how well the chatbot functions or how much data it is fed.

Exhibit #2

I thought I was meeting a soul mate when I recently ran into an article called “AI Will Not Want to Self-Improve” by Peter Salib. I thought, well, it’s unlikely at this point that AI will ever want anything, but I’m with you, friend, on the idea that AIs will not self-improve into superintelligence as soon as they reach human-level intelligence. Then I read the paper. Here’s the conclusion.

“AI self-improvement is less likely than currently assumed among those who argue that AI represents an existential risk to humans. That is, perhaps ironically, because their arguments are too good. They are good enough to show that highly capable AI poses a serious threat to the humans who might create it. But they are also good enough to show that highly capable AI poses a serious threat to the AIs who might create it…AIs would have strong reasons not to self-improve and that they might be able to collectively resist doing so. These findings should help to guide future allocations of investment to competing strategies for promoting AI safety.”

So, we can go right on developing AI smarter than us because, since it is smarter than us, it will not develop AI smarter than it is, because that would be really stupid given these arguments, even though, by hypothesis, we heard the arguments and we did not leave off developing an AI smarter than us even when we thought that it could go very badly.

(Full disclosure: Salib’s a lawyer, not a coder or whatever. And I’m a philosopher, not a computer scientist. So, factor that in.)

Exhibit #3

The following argument, which I have talked about before, is behind much of the discourse that takes us to the supposedly existential threat of AI.

(1) Whatever kind of smart AI we make, and whatever goals we give it, it will want to improve itself. In order to improve, (2) it will redesign itself to be smarter, (3) have more storage capacity, etc. (4) It will do this over and over again (5) on computer, rather than human, time scales, until it achieves (6) superintelligence, (7) enslaving the human race or wiping it out.

Every single bit of this is terrible. And not terrible in a good way like a zombie movie.

(1) Maciej Ceglowski, who has a lot more, and smarter, things to say about AI than I do, says that this first premise, that any AI will, of course, first and foremost, want to improve itself, is “the most unabashedly American premise.” Why will AI want to improve itself? Doesn’t everybody? Don’t we spend most of our time improving ourselves? This is projection meets self-delusion.

(2) How does the AI know how to improve itself? Being smart, no matter how smart, doesn’t mean you know everything and can do anything, despite what certain people might believe. A smart AI may not even know how computers or LLMs work. Most smart humans don’t know much about how they, or computers, work. It could learn. But not instantaneously, or over a weekend. There’s a sleight of hand in arguments about AI where the AI is just like a computer where that helps the argument (learn on computer time scales!) and just like a person (understand and do novel things with what you’ve learned!) where that helps.

Does this AI have a lab in which to research its self-improvements? Or does it just think about self-improvements and, thereby, make them happen? Mindfulness?

Can I just think of how to make myself smarter? It’s not working.

Have people always made “smarter” computers in the past by dint of sheer reasoning power? Is that how we got microwave ovens too? (I’m obsessed with microwave ovens.)

(3) The issue of increased storage capacity takes us to the question of how an AI with no senses or limbs not only designs, but makes stuff. Does it just talk people into making stuff for it? How does it interact with the physical world? And, by the way, how does it access its own mind?

(4) It will do this over and over again. Apparently, it’s just never satisfied.

(5) Wait. Even if the AI operates on computer rather than human time scales, it still has to obey, if nothing else, natural laws. It can’t just create new physical infrastructure instantly, out of nothing, on computer-like time scales. And how can it indefinitely make itself smarter without upgrading its physical infrastructure?

(6) Why is anyone sure that “superintelligence” is even a thing? Why think there’s no upper bound to intelligence, or that the upper bound is very, very far from where we are? Forget superintelligence; are you positive you know what intelligence is? Maybe Einstein or Turing is the best we are ever going to do, man or machine.

(7) Why does the superintelligence want to kill us? Why not just pop off to explore the galaxy? Or solve all our problems and bask in our adoration?

To be clear, I have no proof that nothing like this can happen. I also don’t have any proof that the universe was not created five minutes ago with a lot of planted evidence (your memories, fossils, etc.) to make it seem like it’s a lot older. It’s just that any reasoning I have seen about the existential threat of self-improving AI makes its likelihood seem on par with the recent creation of the universe. I can’t rule it out. But I wouldn’t worry about it.

In fact, I think this reasoning is so bad that I can’t believe that all of the smart people making these arguments really believe them either. So, why do they make them? Now that, I worry about.