Why even a “superhuman AI” won’t destroy humanity

by Ashutosh Jogalekar


AGI is in the air. Some think it's right around the corner. Others think it will take a few more decades. Almost all who talk about it agree that it refers to a superhuman AI embodying all the unique qualities of human beings, only multiplied a thousandfold. Who are the most interesting people writing and thinking about AGI whose words we should heed? Although leaders of companies like OpenAI and Anthropic suck up airtime, I would put much more stock in the words of Kevin Kelly, a superb philosopher of technology who has been writing about AI and related topics for decades; among his other accomplishments, he is the founding editor of Wired magazine, and his book “Out of Control” was one of the key inspirations for the Matrix movies. A few years ago he wrote a very insightful piece in Wired about four reasons why he believes fears of an AI that will “take over humanity” are overblown. He casts these reasons as misconceptions about AI, which he then proceeds to question and dismantle. The whole piece deserves to be dusted off; it is eminently worth reading.

The first and second misconceptions: Intelligence is a single dimension and is “general purpose”.

This is a central point that often gets completely lost when people talk about AI. Most applications of machine intelligence that we have so far are very specific, but when AGI proponents hold forth, they are talking about some kind of overarching single intelligence that's good at everything. The media almost always mixes up multiple applications of AI in the same sentence, as in “AI did X, so imagine what it would be like when it could do Y”; lost is the realization that X and Y could refer to very different dimensions of intelligence, or at least significantly different ones. As Kelly succinctly puts it, “Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions.” Even humans are not good at optimizing along every single one of these dimensions, so it's unrealistic to imagine that AI will be. In other words, intelligence is horizontal, not vertical. The more realistic vision of AI is thus what it already has been: a form of augmented, not artificial, intelligence that helps humans with specific tasks, not some kind of general omniscient God-like entity that's good at everything. Some tasks that humans do will indeed be replaced by machines, but in the general scheme of things humans and machines will have to work together to solve the tough problems.

Monday, April 1, 2024

Russell’s Bane: Why LLMs Don’t Know What They’re Saying

by Jochen Szangolies

Does the AI barber that shaves all those that do not shave themselves, shave itself? (Image AI generated.)

Recently, the exponential growth of AI capabilities has been outpaced only by the exponential growth of breathless claims about what those capabilities will soon become, with some arguing that performance on par with humans in every domain (artificial general intelligence, or AGI) may only be seven months away, arriving by November of this year. My purpose in this article is to examine the plausibility of this claim, and, provided ‘AGI’ includes the ability to know what you're talking about, to find it wanting. I will do so by examining the work of British philosopher and logician Bertrand Russell—or more accurately, some objections raised against it.

Russell was a master of structure in more ways than one. His writing, the significance of which was recognized with the 1950 Nobel Prize in literature, is often a marvel of clarity and coherence; his magnum opus Principia Mathematica, co-written with his former teacher Alfred North Whitehead, sought to establish a firm foundation for mathematics in logic. But for our purposes, the most significant aspect of his work is his attempt to ground scientific knowledge in knowledge of structure—knowledge of relations between entities, as opposed to direct acquaintance with the entities themselves—and its failure as originally envisioned.

Structure, in everyday parlance, is a bit of an inexact term. A structure can be a building, a mechanism, a construct; it can refer to a particular aspect of something, like the structure of a painting or a piece of music; or it can refer to a set of rules governing a particular behavioral domain, like the structure of monastic life. We are interested in the logical notion of structure, where it refers to a particular collection of relations defined on a set of not further specified entities (its domain).

It is perhaps easiest to approach this notion by means of a couple of examples.
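One way to make the logical notion concrete is a short code sketch. This example is mine, not the article's: it models a structure as a domain of unspecified entities plus a relation on them, and shows that two structures with entirely different domains can share the same relational pattern—which is exactly the kind of knowledge Russell's structuralism says science gives us.

```python
# A structure, in the logician's sense: a domain of not-further-specified
# entities together with relations defined on it.

# Structure 1: the numbers 1..3 under the "less than" relation.
domain_a = {1, 2, 3}
less_than = {(x, y) for x in domain_a for y in domain_a if x < y}

# Structure 2: three unnamed entities with an ordering relation that
# exhibits the same pattern.
domain_b = {"p", "q", "r"}
order_b = {("p", "q"), ("p", "r"), ("q", "r")}

# A relation-preserving mapping between the domains shows the two
# structures are isomorphic: structurally identical, even though we
# know nothing about what "p", "q", and "r" actually are.
iso = {1: "p", 2: "q", 3: "r"}
assert {(iso[x], iso[y]) for (x, y) in less_than} == order_b
```

Purely structural knowledge, on this picture, is knowledge of the relational pattern alone—anything that fixes only the pattern leaves the identity of the entities themselves undetermined.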