A Google AI Watched 30,000 Hours of Video Games—Now It Makes Its Own

Jason Dorrier in Singularity Hub:

AI continues to generate plenty of light and heat. The best models in text and images—now commanding subscriptions and being woven into consumer products—are competing for inches. OpenAI, Google, and Anthropic are all, more or less, neck and neck. It’s no surprise then that AI researchers are looking to push generative models into new territory. As AI requires prodigious amounts of data, one way to forecast where things are going next is to look at what data is widely available online, but still largely untapped. Video, of which there is plenty, is an obvious next step. Indeed, last month, OpenAI previewed a new text-to-video AI called Sora that stunned onlookers. But what about video…games?

It turns out there are quite a few gamer videos online. Google DeepMind says it trained a new AI, Genie, on 30,000 hours of curated video footage of gamers playing simple platformers—think early Nintendo games—and now the model can create examples of its own.

Genie turns a simple image, photo, or sketch into an interactive video game. Given a prompt, say a drawing of a character and its surroundings, the AI can then take input from a player to move the character through its world. In a blog post, DeepMind showed Genie’s creations navigating 2D landscapes, walking around or jumping between platforms. Like a snake eating its tail, some of these worlds were even sourced from AI-generated images.

In contrast to traditional video games, Genie generates these interactive worlds frame by frame. Given a prompt and a command to move, it predicts the most likely next frames and creates them on the fly. It even learned to include a sense of parallax, a common feature in platformers where the foreground moves faster than the background.
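
To make that frame-by-frame loop concrete, here is a minimal Python sketch of the control flow being described: a model that, given the frames so far and the player's latest action, predicts the next frame. Everything in it is illustrative—`WorldModel`, `predict_next_frame`, and `play` are hypothetical names, and the pixel-shifting "dynamics" is a toy stand-in for Genie's learned network, which DeepMind has not released as an API.

```python
# Illustrative sketch of autoregressive, action-conditioned frame generation.
# WorldModel is a toy stand-in, NOT DeepMind's Genie or any published API.
import numpy as np

class WorldModel:
    """Hypothetical stand-in for a learned dynamics model that maps
    (frames so far, player action) -> next frame."""

    def predict_next_frame(self, frames: list[np.ndarray], action: int) -> np.ndarray:
        # Toy dynamics: mimic movement by shifting the last frame's pixels.
        # A real model would *predict* the next frame from learned dynamics.
        last = frames[-1]
        shifts = {0: (0, 0), 1: (0, -1), 2: (0, 1), 3: (-1, 0)}  # noop, left, right, jump
        dy, dx = shifts.get(action, (0, 0))
        return np.roll(last, shift=(dy, dx), axis=(0, 1))

def play(model: WorldModel, prompt_frame: np.ndarray, actions: list[int]) -> list[np.ndarray]:
    """Autoregressive rollout: each player action conditions the next predicted frame."""
    frames = [prompt_frame]  # a single prompt image seeds the "world"
    for action in actions:
        frames.append(model.predict_next_frame(frames, action))
    return frames

# Seed the world from one 64x64 image and step it with a few player inputs.
prompt = np.zeros((64, 64), dtype=np.uint8)
prompt[40, 10] = 255  # a lone "character" pixel
rollout = play(WorldModel(), prompt, actions=[2, 2, 3, 2])
print(len(rollout), "frames generated")
```

The point is the control flow, not the toy physics: the game world exists only as the model's rolling prediction, regenerated one frame at a time in response to the player's inputs.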

More here.