by Ashutosh Jogalekar

AGI is in the air. Some think it’s right around the corner. Others think it will take a few more decades. Almost all who talk about it agree that it refers to a superhuman AI that embodies all the unique qualities of human beings, only multiplied a thousandfold. Who are the most interesting people writing and thinking about AGI whose words we should heed? Although leaders of companies like OpenAI and Anthropic suck up airtime, I would put much more stock in the words of Kevin Kelly, a superb philosopher of technology who has been writing about AI and related topics for decades; among his other accomplishments, he is the founding editor of Wired magazine, and his book “Out of Control” was one of the key inspirations for the Matrix movies. A few years ago he wrote a very insightful piece in Wired about four reasons why he believes fears of an AI that will “take over humanity” are overblown. He casts these reasons as misconceptions about AI, which he then proceeds to question and dismantle. The whole piece deserves to be dusted off; it is eminently worth reading.
The first and second misconceptions: Intelligence is a single dimension and is “general purpose”.
This is a central point that often gets completely lost when people talk about AI. Most applications of machine intelligence that we have so far are very specific, but when AGI proponents hold forth, they are talking about some kind of overarching single intelligence that’s good at everything. The media almost always mixes up multiple applications of AI in the same sentence, as in “AI did X, so imagine what it would be like when it could do Y”; lost is the realization that X and Y could refer to very different dimensions of intelligence, or at least significantly different ones. As Kelly succinctly puts it, “Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions.” Even humans are not good at optimizing along every single one of these dimensions, so it’s unrealistic to imagine that AI will be. In other words, intelligence is horizontal, not vertical.

The more realistic vision of AI is thus what it already has been: a form of augmented, not artificial, intelligence that helps humans with specific tasks, not some kind of general omniscient God-like entity that’s good at everything. Some tasks that humans do will indeed be replaced by machines, but in the general scheme of things humans and machines will have to work together to solve the tough problems.
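To make Kelly’s “combinatorial continuum” concrete, here is a minimal sketch (my own illustration, not something from his essay) that treats intelligence as a profile of scores across several dimensions rather than a single number. The dimension names and scores are invented for the example. The point it demonstrates: under component-wise comparison, two very different minds can each be better at some things and worse at others, so neither dominates the other and there is no single “smartest” entity.

```python
# Illustrative sketch only: intelligence as a multidimensional profile.
# Dimension names and scores are made up for the example.

from dataclasses import dataclass

DIMENSIONS = ["symbolic_math", "spatial_reasoning", "language", "social_inference"]

@dataclass
class Agent:
    name: str
    scores: dict  # dimension -> score in [0, 1]

def dominates(a: Agent, b: Agent) -> bool:
    """True if `a` is at least as good as `b` on every dimension
    and strictly better on at least one (Pareto dominance)."""
    at_least = all(a.scores[d] >= b.scores[d] for d in DIMENSIONS)
    strictly = any(a.scores[d] > b.scores[d] for d in DIMENSIONS)
    return at_least and strictly

chess_engine = Agent("chess_engine", {
    "symbolic_math": 0.95, "spatial_reasoning": 0.70,
    "language": 0.10, "social_inference": 0.05,
})
human = Agent("human", {
    "symbolic_math": 0.40, "spatial_reasoning": 0.60,
    "language": 0.90, "social_inference": 0.90,
})

# Neither profile dominates the other: along these axes the two are
# incomparable, which is the sense in which intelligence is horizontal.
print(dominates(chess_engine, human))  # False
print(dominates(human, chess_engine))  # False
```

The engine beats the human on some axes and loses badly on others; collapsing that profile to one scalar is exactly the misconception Kelly is dismantling.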