Michael White makes the case over at Adaptive Complexity (via Sci Tech Daily):
Many of these researchers don’t understand what it means to test a theory. They build these complex models, which involves making important assumptions that could easily be wrong, and then if their models fit existing data, they think the model is right.
Hence you get this McColloh guy claiming that his network analysis model was responsible for a big drop in sniper attacks, ignoring the much more obvious and plausible causes for the drop in violence: the addition of 30,000 troops and the US Military’s major new approach to counterinsurgency implemented by Petraeus. The network researchers can’t justify ruling out the more obvious explanation; their only retort is to say that their critics don’t understand their fancy methods. (Which in many cases is not true: there are plenty of physicists, biologists, and economists who understand the mathematical/statistical/computational techniques, and who are bothered by the scientific culture of complex systems research.)
This is a dangerous mindset to have in science. What these researchers are practicing is a sham form of science that Feynman called Cargo Cult Science:
There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
And no, that does not mean simply training your model on half of your data set and showing that it can effectively explain the other half.
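To see why that kind of split proves so little, here is a minimal sketch (in Python, with entirely made-up data; the hidden-confounder setup is purely hypothetical, not a claim about the sniper-attack case) of a model that passes a held-out-half test and is still wrong:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a hidden factor z drives y, and during the
# observation period x happens to track z. A model of y as a
# function of x alone will therefore look predictive.
n = 1000
z = rng.normal(size=n)                  # hidden cause
x = z + 0.1 * rng.normal(size=n)        # x tracks z in this period
y = 3.0 * z + 0.5 * rng.normal(size=n)  # y is driven by z, not x

# "Validate" by fitting on half the data and testing on the other half.
train, test = slice(0, n // 2), slice(n // 2, n)
slope, intercept = np.polyfit(x[train], y[train], 1)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# High held-out score (~0.96 here): the model "explains" the other half.
print("held-out R^2:  ", r2(y[test], slope * x[test] + intercept))

# New regime: something external (say, a policy change) breaks the
# x-z correlation. The held-out success said nothing about this.
z_new = rng.normal(size=n)
x_new = rng.normal(size=n)              # x no longer tracks z
y_new = 3.0 * z_new + 0.5 * rng.normal(size=n)

# Near zero or negative: the model fails exactly where it matters.
print("new-regime R^2:", r2(y_new, slope * x_new + intercept))
```

The held-out half inherits whatever confounding produced the training half, so the split only checks that the model interpolates within the data that inspired it. It never tests Feynman’s criterion that the finished theory make “something else come out right, in addition.”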