Over at Three-Toed Sloth:
[L]et me back up a minute to the bit about relying on “peer review and rebuttals to expose any relevant issue”. There are two problems here.
One has to do with the fact that, as I said, it is really very easy to find the rebuttals showing that Rushton's papers, in particular, are a tragic waste of precious trees and disk space. For example, in the very same issue of the very same journal as the paper by Rushton and Jensen which was one of Saletan's main sources, Richard Nisbett, one of the more important psychologists of our time, takes his turn banging his head against this particular wall. Or, again, if Saletan had been at all curious about the issue of head sizes, which seems to have impressed him so much, it would have taken about five minutes with Google Scholar to find a demonstration that this is crap. So I really have no idea what Saletan meant when he claimed he relied on published rebuttals — did he think they would just crawl into his lap and sit there, meowing to be read?

If I had to guess, I'd say that the most likely explanation of Saletan's writings is that he spent a few minutes with a search engine looking for hits on racial differences in intelligence, took the first few blogs and papers he found that way as The Emerging Scientific Consensus, and then stopped. But detailed inquiry into just how he managed to screw up so badly seems unprofitable.
The other problem with his supposed reliance on peer review is that he seems confused about how that institution works. I won’t rehash what I’ve already said about it, but only remark that passing peer review is better understood as saying a paper is not obviously wrong, not obviously redundant and not obviously boring, rather than as saying it’s correct, innovative and important. Even this misses a deeper problem, a possible failure mode of the scientific community. A journal’s peer review is only as good as the peers it uses as reviewers. If everyone, or almost everyone, who referees for some journal is in the grip of the same mistake, then they will not catch it in papers they review, and the journal will propagate it. In fact, since journals usually recruit new referees from their published authors or people recommended by old referees, mistakes and delusions can become endemic and self-confirming in epistemic communities associated with particular journals. To give a concrete example, the community using Physica A is pretty uniformly (and demonstrably) mistaken about how to tell when something is a power-law distribution, so what that journal publishes about power laws is unreliable, and those who derive their training and information from that journal go on to propagate the errors. It would be easy to find even more extreme examples from the physical and mathematical sciences (especially, I must say, among journals published by Elsevier), but it would take too long to explain why they are wrong.
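The power-law mistake alluded to above is worth making concrete. A common error in that literature is to estimate the exponent by least-squares regression on a log-log histogram, which is noisy and biased, instead of using the maximum-likelihood estimator. Here is a minimal synthetic sketch of the contrast (my illustration, not from the post; the variable names and the choice of 50 linear bins are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw a sample from a pure power law (continuous Pareto) with
# density p(x) proportional to x**(-alpha) for x >= x_min,
# via inverse-transform sampling.
alpha_true = 2.5
x_min = 1.0
n = 10_000
x = x_min * (1 - rng.random(n)) ** (-1 / (alpha_true - 1))

# The flawed approach: least-squares fit of a straight line to the
# log-log histogram. The bin counts are heteroskedastic and the
# empty tail bins must be discarded, so the slope is unreliable
# and its nominal error bars are meaningless.
counts, edges = np.histogram(x, bins=50)
centers = (edges[:-1] + edges[1:]) / 2
mask = counts > 0
slope, _ = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
alpha_regression = -slope

# The maximum-likelihood estimate (the continuous Hill estimator),
# which is consistent, with standard error about (alpha - 1)/sqrt(n).
alpha_mle = 1 + n / np.sum(np.log(x / x_min))

print(f"true exponent:       {alpha_true}")
print(f"log-log regression:  {alpha_regression:.2f}")
print(f"maximum likelihood:  {alpha_mle:.2f}")
```

With ten thousand samples the MLE lands within a few hundredths of the true exponent, while the regression estimate wanders depending on the binning — which is precisely why a community that standardizes on the regression recipe can keep reproducing the same error.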