From Physics Central:
Although statistical significance can be a good guideline for many physics experiments, scientists can't base their results solely on these benchmarks. In fact, other errors can creep into the data and contaminate entire datasets, even very promising ones.
Remember when neutrinos were supposedly traveling faster than light late last year? That result reached a six-sigma level of confidence, even higher than the five-sigma level conventionally required for new particle discoveries. But we learned earlier this year that neutrinos do obey the universal speed limit, so what went wrong?
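As a rough sketch of what those sigma levels mean: assuming the background fluctuations are Gaussian, an n-sigma excess corresponds to a one-sided tail probability of 0.5·erfc(n/√2). A quick calculation using only Python's standard library (the function name is ours, not from any experiment's code):

```python
import math

def sigma_to_p(n_sigma: float) -> float:
    """One-sided tail probability of a standard normal beyond n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

# Five sigma (the discovery convention) means the background alone would
# fluctuate this high only a few times in ten million trials; six sigma
# (the faster-than-light neutrino claim) is rarer still.
p5 = sigma_to_p(5.0)
p6 = sigma_to_p(6.0)
print(f"5 sigma: p ~ {p5:.2e}")
print(f"6 sigma: p ~ {p6:.2e}")
```

The catch, as the next paragraph explains, is that these probabilities only quantify random fluctuations; they say nothing about a systematic error such as a faulty cable.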
Most crucially, the faster-than-light neutrino experiment suffered from a systematic error that affected all of the data: faulty cables consistently gave the researchers bad readings. No matter how many times the physicists repeated the experiment, they got the same inaccurate results.
This situation is akin to measuring someone's height with a meter stick that is several inches longer than it should be. Even if you take hundreds of measurements and average away all of the tiny human errors and approximations, you can't escape the fact that your meter stick gives consistently bad results.
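The meter-stick argument can be made concrete with a small simulation (a sketch with made-up numbers: a true height of 1.75 m, 1 cm of random noise per measurement, and a constant 5 cm offset standing in for the bad stick):

```python
import random
import statistics

random.seed(0)

TRUE_HEIGHT = 1.75   # metres (hypothetical true value)
NOISE_SD = 0.01      # random human error per measurement
BIAS = 0.05          # systematic offset from the faulty meter stick

# Each measurement = truth + fresh random noise + the SAME fixed bias.
measurements = [TRUE_HEIGHT + random.gauss(0, NOISE_SD) + BIAS
                for _ in range(10_000)]

mean = statistics.fmean(measurements)
# Averaging shrinks the random noise toward zero...
print(f"average measurement: {mean:.4f} m")
# ...but the systematic bias survives the averaging untouched.
print(f"remaining error:     {mean - TRUE_HEIGHT:+.4f} m")
```

No matter how many measurements you add, the average converges to the biased value, not the true one; that is exactly why more repetitions couldn't have rescued the neutrino result.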
So how do scientists avoid this problem when statistical analyses can't account for it? Part of the answer is running independent experiments, like CMS and ATLAS, because a systematic error in one design is unlikely to appear in another.
This is part of the reason scientists are so excited about the recent results: not only are the bumps in the data highly significant, but two independent experiments see similar bumps in the same place.
More here.