Scientists have developed a way of ‘decoding’ someone’s brain activity to determine what they are looking at. There have been previous efforts at brain-reading using functional magnetic resonance imaging (fMRI), but these have been quite limited. In most such attempts, volunteers’ brain responses were first recorded while they looked at a small, fixed set of pictures; these scans could then be used to determine which picture from that set a person was looking at.
In the experiment, the brain activity of two subjects (two of Gallant’s team members, Kendrick Kay and Thomas Naselaris) was monitored while they were shown 1,750 different pictures. The team then selected 120 novel images that the subjects had never seen, and used the earlier results to predict the brain responses those images would evoke. When a subject was shown one of the images, the team matched the actual brain response against these predictions to pick out which of the 120 pictures had been shown. With one participant they were correct 72% of the time, and with the other 92% of the time; by chance alone they would have been right only 0.8% of the time.
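The identification step described above can be sketched in code. This is a hypothetical illustration, not the study’s actual model: it assumes each predicted brain response is a simple feature vector, and identifies an image by finding the candidate whose predicted response lies closest to the observed one. All dimensions and noise levels are made up for the example.

```python
import random

random.seed(0)
N_IMAGES, N_VOXELS = 120, 50  # 120 candidate images, as in the study; voxel count is illustrative

# Hypothetical predicted brain responses, one vector per candidate image.
predicted = [[random.gauss(0, 1) for _ in range(N_VOXELS)]
             for _ in range(N_IMAGES)]

def identify(observed):
    """Return the index of the candidate image whose predicted response
    is closest (smallest squared distance) to the observed response."""
    def sqdist(pred):
        return sum((a - b) ** 2 for a, b in zip(pred, observed))
    return min(range(N_IMAGES), key=lambda i: sqdist(predicted[i]))

# Simulate showing image 42: the measured response is the model's
# prediction for that image plus measurement noise.
true_idx = 42
observed = [x + random.gauss(0, 0.5) for x in predicted[true_idx]]

print(identify(observed))        # recovers the shown image's index
print(round(1 / N_IMAGES, 4))    # chance level: 1/120 ≈ 0.8%
```

Note that guessing at random among 120 candidates succeeds with probability 1/120 ≈ 0.8%, which is the chance baseline quoted in the article; the 72% and 92% accuracies are far above it.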