Howard Wainer in American Scientist:
The concept of replicability in scientific research was laid out by Francis Bacon in the early 17th century, and it remains the principal epistemological tenet of modern science. Replicability begins with the idea that science is not private; researchers who make claims must allow others to test those claims. Over time, the scientific community has recognized that, because initial investigations are almost always done on a small scale, they exhibit the variability inherent in small studies. Inevitably, as a consequence, some results will be reported that are epiphenomenal—false positives, for example. When novel findings appear in the scientific literature, other investigators rush to replicate. If attempts to reproduce them don’t pan out, the initial results are brushed aside as the statistical anomalies they were, and science moves on.
Scientific tradition sets an initial acceptance criterion for much research that tolerates a fair number of false positives (typically 1 out of 20). There are two reasons for this initial leniency: First, it is not practical to do preliminary research on any topic on a large enough scale to diminish the likelihood of statistical artifacts to truly tiny levels. And second, it is more difficult to rediscover a true result that was previously dismissed because it failed to reach some stringent level of acceptability than it is to reject a false positive after subsequent work fails to replicate it. This approach has meant that the scientific literature is littered with an embarrassing number of remarkable results that were later shown to be anomalous.
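The "1 out of 20" tolerance Wainer mentions is the conventional significance level α = 0.05. A quick simulation illustrates his point (a minimal sketch, not from the article; the sample sizes, the known-variance z-test, and the trial count are illustrative assumptions): even when the true effect is exactly zero, roughly one study in twenty will clear the bar by chance, whatever the sample size.

```python
import random
import math

random.seed(42)

def false_positive_rate(n, trials=20000, z_crit=1.96):
    """Simulate repeated studies of a true null effect (mean 0, sd 1).

    Each simulated study draws n observations and runs a two-sided
    z-test at alpha = 0.05 (critical value 1.96, sd known to be 1).
    Returns the fraction of studies that reject the null by chance.
    """
    rejections = 0
    for _ in range(trials):
        sample_mean = sum(random.gauss(0, 1) for _ in range(n)) / n
        z = sample_mean * math.sqrt(n)  # standard error = 1/sqrt(n)
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

for n in (10, 100):
    rate = false_positive_rate(n)
    print(f"n={n:4d}  false-positive rate ≈ {rate:.3f}")
```

Both sample sizes land near 0.05: the false-positive rate is set by the acceptance criterion, not the study size. What the small study loses is power, which is why, as Wainer notes, initial small-scale results so often fail to survive replication.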