by Tim Sommers
(The butter robot realizing the sole purpose of its existence is to pass the butter.)
In the halcyon days of “self-driving cars are six months away,” you probably encountered this argument. “If self-driving cars work, they will be safer than cars driven by humans.” Sure. If, by “they work,” you mean, among other things, that they are safer than cars driven by humans, then it follows that they will be safer than cars driven by humans, if they work. That’s called begging the question. Unfortunately, the tech world has more than its share of such sophistries. AI, especially.
Exhibit #1
In a recent issue of The New Yorker, in an article linked to on 3 Quarks Daily, Geoffrey Hinton, the “Godfather of AI,” tells Joshua Rothman:
“‘People say, [of Large Language Models like ChatGPT that] It’s just glorified autocomplete…Now, let’s analyze that…Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So, by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete.’ Hinton thinks that ‘large language models,’ such as GPT, which powers OpenAI’s chatbots, can comprehend the meanings of words and ideas.”
This is a morass of terrible reasoning. But before we even get into it, I have to say that thinking that an algorithm that works by calculating the odds of what the next word in a sentence will be “can comprehend the meanings of words and ideas” is a reductio ad absurdum of the rest. (In fairness, Rothman attributes that view to Hinton but doesn’t quote him as saying it, so maybe that’s not really Hinton’s position. But it seems to be.)
Hinton says that “by training something to be really good at predicting the next word, you’re actually forcing it to understand.” But there’s no support for the claim that the only way to be good at predicting the next word in a sentence is to understand what is being said. LLMs prove that; they don’t undermine it. Further, if anything, prior experience suggests the opposite. Calculators are not better at math than most people because they “understand” numbers.
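Since the dispute turns on what “calculating the odds of the next word” actually involves, here is a deliberately crude sketch of that bare mechanism: a toy bigram model, written in Python, that counts which word follows which in a tiny invented corpus and then “autocompletes” by always picking the likeliest continuation. This is my own illustrative simplification, not how GPT works (GPT uses a neural network over tokens trained on vastly more text), but it shows what next-word prediction looks like stripped to its simplest form.

```python
from collections import Counter, defaultdict

# A tiny invented corpus. Real models train on trillions of tokens;
# this is purely illustrative.
corpus = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the dog ran to the cat ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the single most frequent word observed after `word`."""
    candidates = following[word]
    if not candidates:
        return "."  # fall back if the word was never seen
    return candidates.most_common(1)[0][0]

# "Autocomplete": start with a word and repeatedly predict the likeliest next one.
word = "the"
sentence = [word]
for _ in range(6):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # -> "the cat sat on the cat sat"
```

The point of the toy is only that “predicting the next word” is, at bottom, a statistical operation over observed text; whether doing it extremely well requires understanding is exactly what is in dispute.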