A new technology called compressive sensing slims down data at the source

Brian Hayes in American Scientist:

When you take a photograph with a digital camera, the sensor behind the lens has just a few milliseconds to gather in a huge array of data. A 10-megapixel camera captures some 30 megabytes—one byte each for the red, green and blue channels in each of the 10 million pixels. Yet the image you download from the camera is often only about 3 megabytes. A compression algorithm based on the JPEG standard squeezes the file down to a tenth of its original size. This saving of storage space is welcome, but it provokes a question: Why go to the trouble of capturing 30 megabytes of data if you’re going to throw away 90 percent of it before anyone even sees the picture? Why not design the sensor to select and retain just the 3 megabytes that are worth keeping?
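
That back-of-the-envelope arithmetic is easy to verify. A quick Python check (the 10:1 ratio is the article's figure for a typical JPEG, and a megabyte is taken as 10^6 bytes for simplicity):

```python
pixels = 10_000_000          # a 10-megapixel sensor
raw_bytes = pixels * 3       # one byte each for red, green and blue
print(raw_bytes / 1e6)       # -> 30.0 MB of raw sensor data
print(raw_bytes / 10 / 1e6)  # -> 3.0 MB after the ~10:1 JPEG squeeze
```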

It’s the same story with audio recording. Music is usually digitized at a rate that works out to roughly 32 megabytes for a three-minute song. But the MP3 file on your iPod is probably only 3 megabytes. Again, 90 percent of the data has been discarded in a compression step. Wouldn’t it make more sense to record only the parts of the signal that will eventually reach the ear?
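
The article leaves the audio arithmetic implicit; the 32-megabyte figure follows if one assumes CD-quality PCM: 44,100 samples per second, 16 bits (2 bytes) per sample, two channels.

```python
rate, bytes_per_sample, channels = 44_100, 2, 2   # CD-quality PCM (assumed)
seconds = 3 * 60                                  # a three-minute song
raw_bytes = rate * bytes_per_sample * channels * seconds
print(raw_bytes / 1e6)                            # -> 31.75 MB, i.e. roughly 32 MB
```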

Until a few years ago, these questions had a simple answer, backed up both by common sense and by theoretical precept. Sifting out the best bits without first recording the whole signal was deemed impossible because you couldn’t know which bits to keep until you’d seen them all. That conclusion now seems unduly pessimistic. A suite of new signal-processing methods known as compressed or compressive sensing can extract the most essential elements “on the fly,” without even bothering to store the rest. It’s like a magical diet: You get to eat the whole meal, but you only digest the nutritious parts.
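
To make that idea concrete, here is a minimal NumPy sketch of compressive-sensing recovery. It takes m random measurements of a k-sparse length-n signal and reconstructs the signal by l1-regularized least squares, solved with iterative soft-thresholding (ISTA). The sizes, the Gaussian measurement matrix, and the choice of solver are illustrative assumptions on my part, not details from Hayes's article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a length-400 signal with only 10 nonzero entries,
# observed through just 100 random projections (a quarter of its length).
n, m, k = 400, 100, 10
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                            # the m compressed measurements

# ISTA: a gradient step on 0.5*||Ax - y||^2, then soft-thresholding,
# which the l1 penalty turns into "push small coefficients to zero".
lam = 0.01                                # l1 regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

ISTA is used here only because it fits in a dozen lines; basis pursuit via linear programming and orthogonal matching pursuit are the more common textbook solvers for the same recovery problem.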

More here.