by Tim Sommers
(The butter robot realizing the sole purpose of its existence is to pass the butter.)
In the halcyon days of “self-driving cars are six months away,” you probably encountered this argument: “If self-driving cars work, they will be safer than cars driven by humans.” Sure. If, by “they work,” you mean that, among other things, they are safer than cars driven by humans, then it follows that they will be safer than cars driven by humans, if they work. That’s called begging the question. Unfortunately, the tech world has more than its share of such sophistries. AI, especially.
Exhibit #1
In a recent issue of The New Yorker, in an article linked to on 3 Quarks Daily, Geoffrey Hinton, the “Godfather of AI,” tells Joshua Rothman:
“‘People say, [of Large Language Models like ChatGPT that] It’s just glorified autocomplete…Now, let’s analyze that…Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So, by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete.’ Hinton thinks that ‘large language models,’ such as GPT, which powers OpenAI’s chatbots, can comprehend the meanings of words and ideas.”
This is a morass of terrible reasoning. But before we even get into it, I have to say that thinking that an algorithm that works by calculating the odds of what the next word in a sentence will be “can comprehend the meanings of words and ideas” is a reductio ad absurdum of the rest. (In fairness, Rothman attributes that view to Hinton but doesn’t quote him as saying it, so maybe that’s not really Hinton’s position. But it seems to be.)
Hinton says that “by training something to be really good at predicting the next word, you’re actually forcing it to understand.” There’s no support for the claim that the only way to be good at predicting the next word in a sentence is to understand what is being said. If anything, LLMs themselves are evidence against that claim, not for it. Prior experience, too, suggests the opposite: calculators are not better at math than most people because they “understand” numbers.
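To see what “calculating the odds of what the next word will be” amounts to at its simplest, here is a minimal, purely illustrative sketch: a toy bigram autocomplete in Python, with an invented miniature corpus. Real language models are vastly more elaborate, and nothing here is meant as a description of how GPT actually works; the sketch only shows that next-word prediction, as such, can be done by sheer bookkeeping over word counts.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": estimate P(next word | previous word) by counting
# bigram frequencies in a tiny corpus. Everything below is arithmetic over
# counts; nothing resembling comprehension is involved.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # ('cat', 0.5): "the" is followed by "cat" in 2 of 4 cases
```

Whether scaling this kind of statistical prediction up by many orders of magnitude eventually forces understanding into the picture is, of course, exactly the point in dispute.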