Forget Turing, it’s the Tolkien test for AI that matters

by John Hartley

With CAPTCHA the latest stronghold to be breached, following the heralded sacking of Turing’s temple, I propose a new standard for AI: The Tolkien test.

In this proposed schema, AI capability would be tested against what Andrew Pinsent terms ‘the puzzle of useless creation’. Pinsent, a leading authority on science and religion, asks, concerning Tolkien: “What is the justification for spending so much time creating an entire family of imaginary languages for imaginary peoples in an imaginary world?”

Tolkien’s view of sub-creation framed human creativity as an act of co-creation with God. Just as the divine imagination shaped the world, so too does human imagination—though on a lesser scale—shape its own worlds. This, for Tolkien, was not mere artistic play but a serious, borderline sacred act. Tolkien’s works, Middle-earth in particular, were not an escape from reality, but a way of penetrating reality in the most acute sense.

For Tolkien, fantasia illuminated reality insofar as it tapped into the metaphysical core of things. Artistic creation predicated on the creative imagination opened the individual to an alternate mode of knowledge, deeply intuitive and discursive in nature. Tolkien saw this creative act as deeply rational, not a fanciful indulgence. Echoing the Thomist tradition, he viewed fantasy as a way of refashioning the world that the divine had made, for only through the imagination is the human mind capable of reaching beyond itself.

The role of the creative imagination, then, is not to offer a mere replication of life but to transcend it. Here is the major test for AI, for in doing so, it accesses what Tolkien called the “real world”—the world beneath the surface of things. As faith seeks enchantment, so too does art seek a kind of conversion of the imagination, guiding it towards the consolation of eternal memory, what Plato termed ‘anamnesis’.



Monday, April 1, 2024

Russell’s Bane: Why LLMs Don’t Know What They’re Saying

by Jochen Szangolies

Does the AI barber that shaves all those that do not shave themselves, shave itself? (Image AI generated.)

Recently, the exponential growth of AI capabilities has been outpaced only by the exponential growth of breathless claims about their coming capabilities, with some arguing that performance on par with humans in every domain (artificial general intelligence or AGI) may only be seven months away, arriving by November of this year. My purpose in this article is to examine the plausibility of this claim, and, provided ‘AGI’ includes the ability to know what you’re talking about, find it wanting. I will do so by examining the work of British philosopher and logician Bertrand Russell—or more accurately, some objections raised against it.

Russell was a master of structure in more ways than one. His writing, the significance of which was recognized with the 1950 Nobel Prize in literature, is often a marvel of clarity and coherence; his magnum opus Principia Mathematica, co-written with his former teacher Alfred North Whitehead, sought to establish a firm foundation for mathematics in logic. But for our purposes, the most significant aspect of his work is his attempt to ground scientific knowledge in knowledge of structure—knowledge of relations between entities, as opposed to direct acquaintance with the entities themselves—and its failure as originally envisioned.

Structure, in everyday parlance, is a bit of an inexact term. A structure can be a building, a mechanism, a construct; it can refer to a particular aspect of something, like the structure of a painting or a piece of music; or it can refer to a set of rules governing a particular behavioral domain, like the structure of monastic life. We are interested in the logical notion of structure, where it refers to a particular collection of relations defined on a set of otherwise unspecified entities (its domain).

It is perhaps easiest to approach this notion by means of a couple of examples.
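
To make the idea concrete before those examples, here is a minimal sketch of my own (the names and data are illustrative, not drawn from the article): it treats a structure as a domain together with a relation on it, and checks that two quite different domains can exhibit the very same pattern of relations.

```python
# A sketch of the logical notion of structure (illustrative, my own example):
# a structure is taken to be a domain plus a relation defined on that domain.

def is_isomorphism(mapping, rel_a, rel_b):
    """True if `mapping` carries the pairs of rel_a exactly onto those of rel_b."""
    return {(mapping[x], mapping[y]) for (x, y) in rel_a} == rel_b

# Structure 1: the numbers 1, 2, 3 under "less than".
less_than = {(1, 2), (1, 3), (2, 3)}

# Structure 2: three workshop ranks under "is junior to".
junior_to = {("apprentice", "journeyman"), ("apprentice", "master"),
             ("journeyman", "master")}

# The entities are entirely different, yet the web of relations is identical:
mapping = {1: "apprentice", 2: "journeyman", 3: "master"}
print(is_isomorphism(mapping, less_than, junior_to))  # prints True
```

Knowing only that some domain realizes this pattern tells you nothing about what its elements are, only how they stand in relation to one another, which is the sense of ‘structure’ at issue in Russell’s proposal.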