Demanding that a theory be falsifiable or observable, without any subtlety, will hold science back

Adam Becker in Aeon:

The Viennese physicist Wolfgang Pauli suffered from a guilty conscience. He’d solved one of the knottiest puzzles in nuclear physics, but at a cost. ‘I have done a terrible thing,’ he admitted to a friend in the winter of 1930. ‘I have postulated a particle that cannot be detected.’

Despite his pantomime of despair, Pauli’s letters reveal that he didn’t really think his new sub-atomic particle would stay unseen. He trusted that experimental equipment would eventually be up to the task of proving him right or wrong, one way or another. Still, he worried he’d strayed too close to transgression. Things that were genuinely unobservable, Pauli believed, were anathema to physics and to science as a whole.

Pauli’s views persist among many scientists today. It’s a basic principle of scientific practice that a new theory shouldn’t invoke the undetectable. Rather, a good explanation should be falsifiable – which means it ought to rely on some hypothetical data that could, in principle, prove the theory wrong. These interlocking standards of falsifiability and observability have proud pedigrees: falsifiability goes back to the mid-20th-century philosopher of science Karl Popper, and observability goes further back than that. Today they’re patrolled by self-appointed guardians, who relish dismissing some of the more fanciful notions in physics, cosmology and quantum mechanics as just so many castles in the sky. The cost of allowing such ideas into science, say the gatekeepers, would be to clear the path for all manner of manifestly unscientific nonsense.

But for a theoretical physicist, designing sky-castles is just part of the job.

More here.