Emily Singer at the Simons Foundation:
As scientists record from an increasingly large number of neurons simultaneously, they are trying to understand what structure the activity pattern takes and how that structure relates to the population’s computing power. A number of studies have suggested that neuronal populations are highly correlated, producing low-dimensional patterns of activity. That observation is somewhat surprising, because minimally correlated groups of neurons would be able to transmit more information. (For more on this, see Predicting Neural Dynamics From Connectivity and The Dimension Question: How High Does It Go?)
In the new study, Carsen Stringer and Marius Pachitariu, now researchers at the Janelia Research Campus in Ashburn, Virginia, and previously a graduate student and postdoctoral researcher, respectively, in Kenneth Harris's lab at University College London, used two-photon microscopy to simultaneously record signals from 10,000 neurons in the visual cortex of mice viewing nearly 3,000 natural images. Analyzing the resulting activity with a variant of principal component analysis, the researchers found it was neither low-dimensional nor completely uncorrelated. Rather, the activity followed a power law: the variance explained by each successive component fell off as a power of its rank, rather than collapsing after a few dominant dimensions. The power law can't simply be inherited from the natural images themselves, whose pixel statistics also follow a power law (neighboring pixels tend to be similar), because the same pattern held for white-noise stimuli, which lack that structure.
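To make the power-law measurement concrete, here is a minimal sketch, assuming only standard NumPy: it estimates a variance spectrum by PCA (via SVD) of a stimuli-by-neurons response matrix and fits the exponent on a log-log scale. The matrix below is a random placeholder, not recorded data, and this is not the authors' analysis code, whose PCA variant the article does not detail.

```python
import numpy as np

# Illustrative sketch only: `responses` is a random placeholder matrix
# (stimuli x neurons), smaller than the real recordings of ~10,000 neurons
# responding to ~3,000 images.
rng = np.random.default_rng(0)
n_stimuli, n_neurons = 1000, 4000
responses = rng.standard_normal((n_stimuli, n_neurons))

# Variance explained by each principal component, via SVD of the centered data.
X = responses - responses.mean(axis=0)
singular_values = np.linalg.svd(X, compute_uv=False)
variances = singular_values**2 / (n_stimuli - 1)

# Fit the power law variance_n ~ n^(-alpha) as a straight line in log-log
# coordinates, over an intermediate range of ranks to avoid edge effects.
ranks = np.arange(1, len(variances) + 1)
lo, hi = 10, 500
slope, _ = np.polyfit(np.log(ranks[lo:hi]), np.log(variances[lo:hi]), 1)
print(f"estimated power-law exponent alpha ~ {-slope:.2f}")
```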
The researchers then used a branch of mathematics called functional analysis to show that if the variance along those dimensions decayed any more slowly, the code would devote ever more of its sensitivity to ever smaller details of the stimulus, losing the big picture. In other words, the brain would no longer see the forest for the trees. Using this framework, they predicted that the variance spectrum should fall off more steeply for simpler, lower-dimensional stimuli than for natural images. Experiments confirmed this: the slope of the power law depends on the dimensionality of the input stimuli. Harris says the approach can be applied to other types of large-scale recordings, such as place coding in the hippocampus and grid cells in the entorhinal cortex.
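The argument behind that prediction can be sketched as a constraint on how fast the variance spectrum may fall off. The block below paraphrases it; the symbols λ_n, α, and d, and the precise form of the bound, come from my reading of the underlying paper rather than from the article itself.

```latex
% Paraphrased sketch of the scaling argument; the exact statement and proof
% are in the underlying paper, not in the article excerpt above.
% \lambda_n : variance explained by the n-th principal component
% d         : dimensionality of the stimulus set
\[
  \lambda_n \;\propto\; n^{-\alpha},
  \qquad
  \text{a smooth neural code requires}\quad \alpha \;>\; 1 + \frac{2}{d}.
\]
% For natural images d is large, so the bound approaches \alpha > 1;
% simpler, lower-dimensional stimuli force a steeper decay, as observed.
```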
More here.