Jonah Lehrer in Wired:
The good news is that, in the centuries since Hume, scientists have mostly managed to work around this mismatch as they’ve continued to discover new cause-and-effect relationships at a blistering pace. This success is largely a tribute to the power of statistical correlation, which has allowed researchers to pirouette around the problem of causation. Though scientists constantly remind themselves that mere correlation is not causation, if a correlation is clear and consistent, then they typically assume a cause has been found—that there really is some invisible association between the measurements.
Researchers have developed an impressive system for testing these correlations. For the most part, they rely on an abstract measure known as statistical significance, popularized by the English statistician Ronald Fisher in the 1920s. This test defines a “significant” result as one that, if chance alone were at work, would appear less than 5 percent of the time. While a significant result is no guarantee of truth, it’s widely seen as an important indicator of good data, a clue that the correlation is not a coincidence.
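A minimal sketch of what that threshold means in practice, assuming a simple two-sample comparison and a permutation test (one concrete instance of Fisher's framework; the groups, effect size, and sample sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up measurements: a "treatment" group with a real effect
# and a "control" group without one.
treatment = rng.normal(loc=0.5, scale=1.0, size=50)
control = rng.normal(loc=0.0, scale=1.0, size=50)

observed = treatment.mean() - control.mean()

# Permutation test: how often does chance alone (randomly relabeling
# the same measurements) produce a difference at least this large?
pooled = np.concatenate([treatment, control])
n_perm = 10_000
hits = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    if abs(pooled[:50].mean() - pooled[50:].mean()) >= abs(observed):
        hits += 1

p_value = hits / n_perm
print(f"observed difference: {observed:.3f}, p = {p_value:.4f}")
print("significant at 5%" if p_value < 0.05 else "not significant at 5%")
```

The 5 percent line is the whole test: a result counts as "significant" only if random relabelings beat it less than one time in twenty.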
But here’s the bad news: The reliance on correlations has entered an age of diminishing returns. At least two major factors contribute to this trend. First, all of the easy causes have been found, which means that scientists are now forced to search for ever-subtler correlations, mining that mountain of facts for the tiniest of associations. Is that a new cause? Or just a statistical mistake? The line is getting finer; science is getting harder. Second, and this is the biggie, searching for correlations is a terrible way of dealing with the primary subject of much modern research: those complex networks at the center of life.
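Lehrer's first point, that mining a mountain of facts for ever-subtler associations invites statistical mistakes, is easy to demonstrate. A minimal sketch, assuming pure-noise data (the variable and sample counts are arbitrary): correlate an outcome with many variables that have no real relationship to it, and the conventional 5 percent threshold still flags roughly 5 percent of them.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A mountain of facts: 1,000 candidate variables, none of which
# actually has anything to do with the outcome.
n_samples, n_variables = 100, 1000
outcome = rng.normal(size=n_samples)
variables = rng.normal(size=(n_samples, n_variables))

false_positives = 0
for j in range(n_variables):
    r, p = stats.pearsonr(variables[:, j], outcome)
    if p < 0.05:
        false_positives += 1

# Every hit is, by construction, a statistical mistake; expect ~50.
print(f"{false_positives} of {n_variables} pure-noise variables "
      f"passed the 5% significance test")
```

The subtler the real effects being hunted, the harder it becomes to tell genuine correlations from this baseline of false alarms.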
More here.