by Malcolm Murray

When it comes to AI, or even worse, “AGI”, we are facing a crisis of language. Different people use the terms to mean drastically different things, which is deeply unhelpful for productive debate. This point was hammered home to me several times this week. On LinkedIn, I debated appropriate risk management techniques for AI with a professor, and it turned out we were talking about very different kinds of AI. In New York, the proposed RAISE Act made the A16Z lobbying army, fresh from its bloody victories in the California legislature, reload its weapons, despite the two sides talking about very different kinds of AI.
AI, as the current buzzword, is an extremely big tent and in effect a screen onto which people project their distinct hopes and fears. To make progress in the AI debate, we should separate AI into its different archetypes. I believe there are at least five: Tool AI, Robot AI, Oracle AI, Golem AI and Agent AI. They are all distinct, with different lineages and different purposes. Let’s examine each in turn.
First, there is Tool AI. Its lineage can be traced to big data, the buzzword in the business world in the early 2010s. This is the AI we have had for more than a decade, the AI that gets advertised in B2B SaaS solutions. It is AI as a prediction engine, deployed in the Amazon storefront to recommend your next purchase and in the TikTok feed to optimize content for your engagement. It is statistics, but statistics on steroids. This is the type of AI for which VC firms like Andreessen Horowitz (A16Z) are techno-optimists and that can lead to large productivity increases for companies. It is a complement to humans, not a substitute.
Second, there is Robot AI. Its lineage includes the first use of the word “robot”, in the 1920 Czech play R.U.R. (Rossum’s Universal Robots), and Robotic Process Automation (RPA), a buzzword in the business world in the late 2010s. This signifies a machine that can automate, and therefore replace, a hitherto human-conducted process, whether analog or digital. Since the Industrial Revolution, repetitive factory processes have been progressively automated. Instead of the human, blue-collar worker picking up the product to be manufactured and painted, say, the machine does it. More recently, digital processes on computer systems have also become possible to automate. Instead of the human, white-collar worker copying a piece of data from one database and pasting it into another, the machine does it. This is also an AI that the A16Zs of the world would approve of, one that can lead to large-scale productivity enhancements. At the same time, it is a type of AI that politicians worry about, since it will inevitably lead to job losses, as it is a substitute for humans rather than a complement.
Third, there is Oracle AI. This is new in the sense that it can now actually be instantiated at scale, but it has roots far back in human history and tales. The ancient Greeks had the Oracle at Delphi to consult on when to launch a war, and Asimov had his Prime Radiant that the psychohistorians could consult regarding how the future would unfold. Having an AI at your fingertips that has ingested all the information on the Internet, ready to answer any question, is a godsend for curious people and infovores. As one of the most curious people on the planet, Tyler Cowen, economics professor and blogger extraordinaire, pronounced OpenAI’s o3 to be AGI, asking, “just how much smarter was I expecting AGI to be?” It can answer any economics question he throws at it, as well as math questions dreamed up by top mathematicians to be as devilishly difficult as possible. Yes, it will sometimes hallucinate and make things up, but then again, the semi-clad women high on euphoric gases at the Oracle at Delphi did nothing but hallucinate, so this might be a feature of oracles, not a bug.
Fourth, there is Golem AI. This AI traces its roots back to the golems of Jewish folklore, Mary Shelley’s Frankenstein, and the long-standing fear of the “Other”, the “human-like but not human”. This has likely been a prominent human fear since humans became conscious. Prominent examiners of Golem AI include I. J. Good and Vernor Vinge in the 20th century and Eliezer Yudkowsky and Nick Bostrom in the 21st. Their fears of superintelligent AI are valid, but that does not mean they automatically apply to today’s AI. The AI that Yudkowsky warns about in his forthcoming book may or may not be the same AI that the AI companies are currently building, given the jagged nature of today’s systems. Their current superhuman performance in some areas may or may not be a harbinger of performance across all reasoning domains. It is too early to tell. Bostrom’s warnings in Superintelligence happened to come at roughly the same time as Google’s Transformer architecture revolutionized deep learning, but that does not mean they speak of the same AI. Similarly, it is likely unhelpful to use anthropomorphic concepts in trying to understand the inner workings of AI. It is crucial that we understand how it works, but it might be better to apply concepts that are less human-centric than power-seeking, lying and deception.
Finally, we have Agent AI. This is an AI that can act as an agent, autonomously taking actions. This is the AI of Skynet in The Terminator or, more benevolently, the robot maid in Richie Rich. It should be stripped of its Golem, human-like aspects to keep the debate precise, but regardless, it is the most concerning type of AI. An AI that takes actions without a human in the loop could mean the sudden or gradual loss of human agency and control over our future. This is the AI for which we need very strict guardrails on development and deployment. Ideally, agentic AI beyond certain capability thresholds should not be developed at all until we know enough to put guardrails in place that allow us to maintain control over outcomes.
These five AI archetypes are all different. We can build one without building the others, as leading AI thinkers have recently advocated. Yoshua Bengio launched his organization LawZero to create a non-agentic Scientist AI. Max Tegmark has distinguished between the A, the G and the I in AGI, to stress that we can build AI with high intelligence without building Agentic AI.
Further, they all come with distinct opportunities as well as distinct risks. To address the risks and reap the opportunities, it is key that risk mitigation is targeted at the right kind of AI. Being more precise in our AI nomenclature might allow us to avoid the kneejerk reactions against any regulation that we have seen with the RAISE Act. The idea that startups would flee New York is ridiculous, since the act would only apply to $100 million training runs. Those are, as far as I know, not commonly conducted by your local mom-and-pop AI startup, unless your “startup” happens to be called OpenAI. Let us stop conflating tools, robots, oracles, golems and agents, and maybe some progress can be made.
