‘World Models,’ an Old Idea in AI, Mount a Comeback

John Pavlus in Quanta:

The latest ambition of artificial intelligence research — particularly within the labs seeking “artificial general intelligence,” or AGI — is something called a world model: a representation of the environment that an AI carries around inside itself like a computational snow globe. The AI system can use this simplified representation to evaluate predictions and decisions before applying them to its real-world tasks. The deep learning luminaries Yann LeCun (of Meta), Demis Hassabis (of Google DeepMind) and Yoshua Bengio (of Mila, the Quebec Artificial Intelligence Institute) all believe world models are essential for building AI systems that are truly smart, scientific and safe.

The fields of psychology, robotics and machine learning have each been using some version of the concept for decades. You likely have a world model running inside your skull right now — it’s how you know not to step in front of a moving train without needing to run the experiment first.

So does this mean that AI researchers have finally found a core concept whose meaning everyone can agree upon? As a famous physicist once wrote: Surely you’re joking. A world model may sound straightforward — but as usual, no one can agree on the details. What gets represented in the model, and to what level of fidelity? Is it innate or learned, or some combination of both? And how do you detect that it’s even there at all?

More here.
