Duncan Geere in Wired (UK):
Neuroscientists at the University of California, Berkeley, have figured out a way of decoding visual activity taking place in the brain and reconstructing what a person saw using YouTube clips.
The team used functional Magnetic Resonance Imaging (fMRI) and computational models to decode and reconstruct visual experiences in the minds of test subjects. So far it has only been used to reconstruct movie trailers, but the researchers hope it could eventually yield technology capable of reconstructing dreams on a computer screen.
The participants, who were members of the research team (as they had to stay still inside the scanner for hours at a time), watched two sets of movie trailers while the fMRI machine measured blood flow in their visual cortex.
Those measurements were used to come up with a computer model of how the visual cortex in each subject reacted to different types of image. “We built a model…that describes how shape and motion information in the movie is mapped into brain activity,” said Shinji Nishimoto, lead author of the study.
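To make that idea concrete, here is a minimal Python sketch of such an encoding model, assuming the simplest plausible form: a ridge regression mapping stimulus features to per-voxel fMRI responses. The feature extraction (the study used banks of motion-energy filters), the array shapes, and all the data below are illustrative placeholders, not the team's actual pipeline.

```python
# Minimal sketch of an encoding model: ridge regression from stimulus
# features to per-voxel fMRI responses. Shapes, names, and the random data
# standing in for motion-energy features are all illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints, n_features, n_voxels = 1000, 500, 2000
stimulus_features = rng.standard_normal((n_timepoints, n_features))  # e.g. motion-energy outputs
voxel_responses = rng.standard_normal((n_timepoints, n_voxels))      # measured fMRI activity

# Fit one linear model per voxel (Ridge handles all voxels jointly): each
# column of coefficients describes how that voxel weights the features.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(stimulus_features, voxel_responses)

# Given the features of a new clip, predict the brain activity it should evoke.
predicted_activity = encoding_model.predict(stimulus_features[:10])
print(predicted_activity.shape)  # (10, 2000): timepoints x voxels
```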
After associating the brain activity with what was happening on screen in the first set of trailers, the second set of clips was used to test the model, which was asked to predict the brain activity that the visual patterns on-screen would generate. To give it some ammunition for that task, it was fed 18 million seconds of random YouTube videos.
Then the 100 YouTube clips whose predicted brain activity was most similar to the activity actually evoked by the test clip were merged together, forming a blurry but reasonably accurate representation of what was happening on-screen.
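For a rough sense of how that last step might work in code: score every library clip by the correlation between the activity the encoding model predicts for it and the activity actually measured, then average the frames of the 100 best-scoring clips. The library size, the `load_clip_frames` helper, and the frame dimensions below are hypothetical stand-ins, not the study's implementation.

```python
# Hypothetical sketch of the reconstruction step: rank library clips by how
# well their *predicted* brain activity matches the *measured* activity,
# then average the frames of the top 100. All sizes are placeholders.
import numpy as np

rng = np.random.default_rng(1)

n_clips, n_voxels = 5000, 2000  # 5000 stands in for the 18-million-second library
predicted = rng.standard_normal((n_clips, n_voxels))  # encoding-model predictions
measured = rng.standard_normal(n_voxels)              # activity evoked by the test clip

# Pearson correlation between each clip's predicted activity and the measurement.
pred_z = (predicted - predicted.mean(axis=1, keepdims=True)) / predicted.std(axis=1, keepdims=True)
meas_z = (measured - measured.mean()) / measured.std()
scores = pred_z @ meas_z / n_voxels

top100 = np.argsort(scores)[-100:]  # indices of the 100 best-matching clips

def load_clip_frames(clip_id: int) -> np.ndarray:
    # In a real system this would fetch the clip's frames; here it fabricates them.
    return rng.random((30, 64, 64))  # frames x height x width (placeholder)

# Pixel-wise average of the winners: the blurry composite described above.
reconstruction = np.mean([load_clip_frames(i) for i in top100], axis=0)
print(reconstruction.shape)  # (30, 64, 64)
```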