The Brain Implants That Could Change Humanity

Moises Velasquez-Manoff in the New York Times:

Jack Gallant never set out to create a mind-reading machine. His focus was more prosaic. A computational neuroscientist at the University of California, Berkeley, Dr. Gallant worked for years to improve our understanding of how brains encode information — what regions become active, for example, when a person sees a plane or an apple or a dog — and how that activity represents the object being viewed.

By the late 2000s, scientists could determine what kind of thing a person might be looking at from the way the brain lit up — a human face, say, or a cat. But Dr. Gallant and his colleagues went further. They figured out how to use machine learning to decipher not just the class of thing, but which exact image a subject was viewing. (Which photo of a cat, out of three options, for instance.)
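The identification step can be pictured with a toy sketch. This is not the Gallant lab's actual code or data, just an illustration of the general approach: fit a linear "encoding model" that predicts each voxel's response from an image's features, then, given a new brain scan, pick whichever candidate image's predicted response best matches the observed activity. All names and numbers here are made up for the example.

```python
# Toy identification-style decoding (illustrative only, synthetic data):
# 1) fit a linear encoding model from image features to voxel responses,
# 2) identify which of several candidate images evoked a new brain scan.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_features, n_voxels = 200, 50, 100

# Synthetic training set: image features and the voxel responses they evoke.
X_train = rng.standard_normal((n_train, n_features))
W_true = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_voxels))

# Fit the encoding model by ridge regression: W maps features -> voxels.
lam = 1.0
W = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_features),
    X_train.T @ Y_train,
)

# Three candidate photos; the subject actually viewed candidate 1.
candidates = rng.standard_normal((3, n_features))
observed = candidates[1] @ W_true + 0.5 * rng.standard_normal(n_voxels)

# Identification: predict the brain response to each candidate and pick
# the one whose prediction best correlates with the observed scan.
predicted = candidates @ W
scores = [np.corrcoef(p, observed)[0, 1] for p in predicted]
print("best match: candidate", int(np.argmax(scores)))  # -> candidate 1
```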

One day, Dr. Gallant and his postdocs got to talking. In the same way that you can turn a speaker into a microphone by hooking it up backward, they wondered if they could reverse engineer the algorithm they’d developed so they could visualize, solely from brain activity, what a person was seeing.
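As a rough sketch of what "hooking it up backward" could mean, under the assumption of a linear encoding model like the one above (again, an illustration, not the team's actual method): if activity is predicted as y = x @ W, reversal amounts to estimating the stimulus features x from the measured activity y, for instance with a pseudoinverse of W.

```python
# Toy sketch of running an encoding model in reverse (illustrative only):
# given activity y = x @ W, recover an estimate of the stimulus features
# x from brain activity alone via a least-squares inversion of W.
import numpy as np

rng = np.random.default_rng(1)
n_features, n_voxels = 50, 100

# Pretend this forward model was already fit, as in the sketch above.
W = rng.standard_normal((n_features, n_voxels))

# Brain activity evoked by a stimulus the decoder has never seen.
x_true = rng.standard_normal(n_features)
y = x_true @ W + 0.5 * rng.standard_normal(n_voxels)

# "Hook it up backward": invert the forward model with a pseudoinverse.
x_hat = y @ np.linalg.pinv(W)

# How well the reversal recovers the original stimulus features.
print("feature recovery r =", round(float(np.corrcoef(x_hat, x_true)[0, 1]), 3))
```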

More here.