Hyperintelligence: Art, AI, and the Limits of Cognition

by Jochen Szangolies

Deep Blue at the Computer History Museum in California. Image Credit: James the photographer, CC BY 2.0, via Wikimedia Commons

On May 11, 1997, the chess computer Deep Blue dealt then-world chess champion Garry Kasparov a decisive defeat, marking the first time a computer system beat the reigning human world champion in a match played under standard tournament conditions. Shortly afterwards, AI chess superiority firmly established, humanity abandoned the game of chess as having become pointless. Nowadays, with chess engines on ordinary home PCs easily outsmarting the best humans ever to play the game, chess has been relegated to a mere historical curiosity and an obscure benchmark of computational supremacy over feeble human minds.

Except, of course, that’s not what happened. Human interest in chess has not appreciably waned, despite having had to cede the top spot to silicon-based number-crunchers (and despite the novel avenues for cheating they have allegedly opened up). This echoes a pattern visible throughout the history of technological development: faster modes of transportation—by car, or even on horseback—have not eliminated competitive human racing; great cranes that effortlessly raise tonnes of weight have not kept us from competitively lifting mere hundreds of kilos; and the invention of photography has not kept humans from drawing realistic likenesses.

Why, then, worry about AI art? What we value, it seems, is not performance as such, but specifically human performance. We are interested in watching humans race or play one another, even in the face of superior non-human agencies. Should we not expect the same pattern to continue: AI creating art that equals or exceeds that of its human progenitors, to nobody’s great interest?



Monday, July 25, 2022

Clever Cogs: Ants, AI, And The Slippery Idea Of Intelligence

by Jochen Szangolies

Figure 1: The Porphyrian Tree. Detail of a fresco at the Kloster Schussenried. Image credit: modified from Franz Georg Hermann, Public domain, via Wikimedia Commons.

The arbor porphyriana is a scholastic system of classification in which each individual or species is categorized by means of a sequence of differentiations, proceeding from the most general to the most specific. Based on the categories of Aristotle, it was introduced by the 3rd-century CE logician Porphyry and exerted a huge influence on the development of medieval scholastic logic. Using its system of differentiae, humans may be classified as ‘substance, corporeal, living, sentient, rational’. Here, the last term is the most specific—the most characteristic of the species. Therefore, rationality—intelligence—is the mark of the human.

However, when we encounter ‘intelligence’ in the news these days, chances are that it is used not to name a quintessentially human quality, but in the context of computation—reporting on the latest spectacle of artificial intelligence, with GPT-3 writing scholarly articles about itself or DALL·E 2 producing close-to-realistic images from verbal descriptions. While this sort of headline has become familiar, a new word has lately risen to prominence at the top of articles in the relevant publications: the otherwise innocuous modifier ‘general’. Gato, a model developed by DeepMind, is, we are told, a ‘generalist’ agent, capable of performing more than 600 distinct tasks. Indeed, according to Nando de Freitas, a team lead at DeepMind, ‘the game is over’, with only the question of scale separating current models from truly general intelligence.

There are several interrelated issues emerging from this trend. A minor one is the devaluation of intelligence as the mark of the human: just as Diogenes’ plucked chicken deflates Plato’s ‘featherless biped’, tomorrow’s AI models might force us to rethink our self-image as ‘rational animals’. But then, arguably, Twitter already accomplishes that.

Slightly more worrying is a cognitive bias in which we take the lower branches of Porphyry’s tree to entail the higher ones.