From Nature:
The brain’s electrical activity can be decoded to reconstruct which words a person is hearing, researchers report today in PLoS Biology.
Brian Pasley, a neuroscientist at the University of California, Berkeley, and his colleagues recorded the brain activity of 15 people who were undergoing evaluation before unrelated neurosurgical procedures. The researchers placed electrodes on the surface of the superior temporal gyrus (STG), part of the brain’s auditory system, to record the subjects’ neuronal activity in response to pre-recorded words and sentences.

The STG is thought to participate in the intermediate stages of speech processing, such as the transformation of sounds into phonemes, or speech sounds, yet little is known about which specific features, such as syllable rate or volume fluctuations, it represents. “A major goal is to figure out how the human brain allows us to understand speech despite all the variability, such as a male or female voice, or fast or slow talkers,” says Pasley. “We build computational models that test hypotheses about how the brain accomplishes this feat, and then see if these models match the brain recordings.”

To analyse the data from the electrode recordings, the researchers used an algorithm designed to extract key features of spoken words, such as the timing and volume changes between syllables. They then entered these data into a computational model to reconstruct ‘voicegrams’ showing how these features change over time for each word. They found that these voicegrams could reproduce the sounds the patients heard accurately enough for individual words to be recognized.
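To give a rough sense of what this kind of reconstruction involves, here is a minimal sketch: fit a model that maps a short window of time-lagged neural activity onto the spectrogram (the ‘voicegram’) of the heard speech, then apply it to held-out recordings. The sketch below uses ridge regression on synthetic data as a stand-in for the paper’s actual model; the array sizes, lag window, and regularization strength are illustrative assumptions, not values from the study.

```python
# Sketch of spectrogram ("voicegram") reconstruction from multi-electrode
# recordings, using time-lagged ridge regression as a stand-in for the
# paper's reconstruction model. All data here are synthetic; the shapes,
# lag window, and regularization are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_samples    = 2000  # time bins
n_electrodes = 64    # recording channels (assumed)
n_freqs      = 32    # spectrogram frequency bands (assumed)
n_lags       = 10    # neural time lags used to predict each spectrogram bin

# Synthetic stand-ins: the spectrogram of the heard speech, and neural
# responses that carry a noisy linear trace of it.
spectrogram = rng.standard_normal((n_samples, n_freqs))
mixing = rng.standard_normal((n_freqs, n_electrodes)) * 0.5
neural = spectrogram @ mixing + rng.standard_normal((n_samples, n_electrodes))

def lagged_design(X, n_lags):
    """Stack time-lagged copies of the neural data so each spectrogram
    bin is predicted from a short window of preceding activity."""
    return np.asarray([X[t - n_lags:t].ravel() for t in range(n_lags, len(X))])

X = lagged_design(neural, n_lags)   # (n_samples - n_lags, n_lags * n_electrodes)
Y = spectrogram[n_lags:]            # targets aligned with the lagged window

# Train on the first half of the recording, reconstruct the second half.
split = X.shape[0] // 2
model = Ridge(alpha=10.0).fit(X[:split], Y[:split])
reconstruction = model.predict(X[split:])

# Score each frequency band by correlating true and reconstructed values.
corr = [np.corrcoef(Y[split:, f], reconstruction[:, f])[0, 1] for f in range(n_freqs)]
print(f"mean reconstruction correlation across bands: {np.mean(corr):.2f}")
```

In the study, reconstructions like these were judged good enough for individual words to be identified; the sketch only illustrates the general decode-and-compare logic, not the specific features or model the authors used.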
More here.