Julien Crockett in the LA Review of Books:
A growing fear of, and excitement about, today's AI systems stems from the assumption that as they improve, something (someone?) will emerge: feed large language models (LLMs) enough text and, rather than merely extracting statistical patterns from data, they will become intelligent agents with the ability to understand the world.
Alison Gopnik and Melanie Mitchell are skeptical of this assumption. Gopnik, a professor of psychology and philosophy studying children’s learning and development, and Mitchell, a professor of computer science and complexity focusing on conceptual abstraction and analogy-making in AI systems, argue that intelligence is much more complicated than we think. Yes, what today’s LLMs can achieve by consuming huge swaths of text is impressive—and has challenged some of our intuitions about intelligence—but before we can attribute to them something like human intelligence, AI systems will need the ability to actively interact with and engage in the world, creating their own “mental” models of how it works.
How might AI systems reach this next level? And what is needed to ensure their safe deployment? In our conversation, Gopnik and Mitchell consider various approaches, including a framework to describe our role in this next phase of AI development: caregiving.
More here.