Andy Clark in the New York Times:
Might the miserly use of neural resources be one of the essential keys to understanding how brains make sense of the world? Some recent work in computational and cognitive neuroscience suggests that it is indeed the frugal use of our native neural capacity (the inventive use of restricted “neural bandwidth,” if you will) that explains how brains like ours so elegantly make sense of noisy and ambiguous sensory input. That same story suggests, intriguingly, that perception, understanding and imagination, which we might intuitively consider to be three distinct chunks of our mental machinery, are inextricably tied together as simultaneous results of a single underlying strategy known as “predictive coding.” This strategy saves on bandwidth using (who would have guessed it?) one of the many technical wheezes that enable us to economically store and transmit pictures, sounds and videos using formats such as JPEG and MP3.
In the case of a picture (a black and white photo of Laurence Olivier playing Hamlet, to activate a concrete image in your mind), predictive coding works by assuming that the value of each pixel is well predicted by the value of its various neighbors. When that’s true — which is rather often, as gray-scale gradients are pretty smooth for large parts of most images — there is simply no need to transmit the value of that pixel. All that the photo-frugal need transmit are the deviations from what was thus predicted. The simplest prediction would be that neighboring pixels all share the same value (the same gray-scale value, for example), but much more complex predictions are also possible. As long as there is detectable regularity, prediction (and hence this particular form of data compression) is possible.
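To make the quoted idea concrete, here is a minimal Python sketch of prediction-plus-deviation coding using the simplest prediction Clark mentions: that each pixel shares the value of its neighbor. This is an illustration only, not the actual JPEG or MP3 machinery, and the pixel values in it are invented for the example.

```python
# Minimal sketch of predictive coding: predict each pixel from its left
# neighbor and store only the deviations (residuals) from that prediction.

def encode(row):
    """Replace each pixel with its deviation from the previous one."""
    residuals = [row[0]]                      # first pixel is sent as-is
    for prev, cur in zip(row, row[1:]):
        residuals.append(cur - prev)          # small or zero on smooth gradients
    return residuals

def decode(residuals):
    """Rebuild the row by adding each deviation back onto the prediction."""
    row = [residuals[0]]
    for r in residuals[1:]:
        row.append(row[-1] + r)               # prediction (previous pixel) + correction
    return row

# A smooth gray-scale gradient: the deviations are mostly zeros and ones.
row = [118, 118, 119, 119, 119, 120, 121, 121]
res = encode(row)
print(res)                 # [118, 0, 1, 0, 0, 1, 1, 0]
assert decode(res) == row  # lossless round trip
```

On a smooth gradient the deviations are mostly zeros and small numbers, which is exactly what makes them cheap to store or transmit; a real codec would go on to entropy-code those residuals, but the bandwidth saving comes from the prediction step shown here.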
More here. And a short sequel here.