An article in the NYT Magazine, adapted from Nate Silver's The Signal and the Noise: Why So Many Predictions Fail — but Some Don't [h/t: Jennifer Ouellette]:
From the inside, the National Centers for Environmental Prediction looked like a cross between a submarine command center and a Goldman Sachs trading floor. Twenty minutes outside Washington, it consisted mainly of sleek workstations manned by meteorologists working an armada of flat-screen monitors with maps of every conceivable type of weather data for every corner of the country. The center is part of the National Weather Service, which Ulysses S. Grant created under the War Department. Even now, it remains true to those roots. Many of its meteorologists have a background in the armed services, and virtually all speak with the precision of former officers.
They also seem to possess a high-frequency trader's skill for managing risk. Expert meteorologists are forced to arbitrage a torrent of information to make their predictions as accurate as possible. After receiving weather forecasts generated by supercomputers, they interpret and parse them by, among other things, comparing them with various conflicting models, with what their colleagues are seeing in the field, or with what they already know about certain weather patterns — or, often, all of the above. From station to station, I watched as meteorologists sifted through numbers and called other forecasters to compare notes, while trading instant messages about matters like whether the chance of rain in Tucson should be 10 or 20 percent. As the information continued to flow in, I watched them draw on their maps with light pens, painstakingly adjusting the contours of temperature gradients produced by the computers — 15 miles westward over the Mississippi Delta or 30 miles northward into Lake Erie — to bring them one step closer to accuracy.