Grace van Deelen in The Scientist:
For the first time, scientists report they have devised a method that uses functional magnetic resonance imaging (fMRI) brain recordings to reconstruct continuous language. The findings are the next step in the quest for better brain-computer interfaces, which are being developed as an assistive technology for those who can’t speak or type. In a preprint posted September 29 on bioRxiv, a team at the University of Texas at Austin details a “decoder,” or algorithm, that can “read” the words a person is hearing or thinking during an fMRI brain scan. While other teams had previously reported some success in reconstructing language or images based on signals from implants in the brain, the new decoder is the first to accomplish this noninvasively.
“If you had asked any cognitive neuroscientist in the world twenty years ago if this was doable, they would have laughed you out of the room,” says Alexander Huth, a neuroscientist at the University of Texas at Austin and a coauthor on the study. Yukiyasu Kamitani, a computational neuroscientist at Kyoto University who was not involved in the research, writes in an email to The Scientist that it’s “exciting” to see intelligible language sequences generated from a noninvasive decoder. “This study . . . sets a solid ground for [brain-computer interface] applications,” he says.
More here.
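
The excerpt describes the decoder only at a high level, so for intuition here is a minimal toy sketch of one general strategy such decoders can use: score candidate word sequences by how well an encoding model’s predicted brain response matches the recorded fMRI scan, and keep the best-matching candidate. This is an illustration under stated assumptions, not the preprint’s actual method; every name and number below (`featurize`, `W`, the voxel and feature counts, the candidate set) is hypothetical.

```python
import numpy as np

# Toy illustration of encoding-model decoding. NOT the preprint's method:
# the real system would learn its encoding model from training scans and
# search over candidate sequences far more cleverly.

rng = np.random.default_rng(0)

N_VOXELS = 50    # hypothetical number of fMRI voxels
N_FEATURES = 16  # hypothetical dimensionality of word-sequence features

# Hypothetical learned encoding weights mapping stimulus features to voxel
# responses. In practice these would be fit on training data (e.g., by
# ridge regression), not drawn at random.
W = rng.normal(size=(N_FEATURES, N_VOXELS))

def featurize(words):
    """Toy stand-in for a language-model embedding of a word sequence."""
    vec = np.zeros(N_FEATURES)
    for w in words:
        vec[hash(w) % N_FEATURES] += 1.0
    return vec / max(len(words), 1)

def predicted_response(words):
    """Encoding model: predicted voxel activity for a candidate sequence."""
    return featurize(words) @ W

def score(candidate, recorded):
    """Higher is better: negative squared error between prediction and scan."""
    return -np.sum((predicted_response(candidate) - recorded) ** 2)

# Simulate a "recorded" scan evoked by a true stimulus sequence.
true_words = ["the", "dog", "ran", "home"]
recorded = predicted_response(true_words) + rng.normal(scale=0.1, size=N_VOXELS)

# Decode by picking the best-scoring candidate from a small hypothesis set.
candidates = [
    ["the", "dog", "ran", "home"],
    ["a", "cat", "sat", "down"],
    ["she", "read", "the", "book"],
]
best = max(candidates, key=lambda c: score(c, recorded))
print("decoded:", " ".join(best))
```

The design choice the sketch highlights is that decoding runs the encoding model “forward” over hypotheses and compares against the scan, rather than inverting the brain data directly, which is one reason continuous-language reconstruction from noisy, slow fMRI signals is plausible at all.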