by David Kordahl
Imagine, if you will, that I own a reliably programmable qubit, a device that, when prepared in some standard and uncontroversial way, has a 50/50 probability of having one of two outcomes, A or B. Now imagine also that I have become convinced of my own telekinetic powers.
Suppose that the qubit has been calibrated within an inch of its life, and I have good reason to believe that the odds for the two possible outcomes, A or B, are in fact equally matched. My telekinetic powers, on the other hand, are weak—not strong enough to make heads explode like that guy in Scanners, nor strong enough to levitate chalk like in Matilda. Yet neither am I powerless. If I rein myself in—no more than a few attempts per night (I take care not to tire myself), and no counting tries when my juju’s off (remember, my gifts are unremarkable)—then I have been able, through intense concentration and force of will, to favor outcome A just slightly, just barely bumping its odds up, let’s say, from 50.0% to 50.1%.
Squinting, I claim statistical significance. But when I share these findings with you, my scientifically trained colleague, you are unimpressed.
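(An aside on the squinting: a shift from 50.0% to 50.1% is a tiny effect, and a back-of-the-envelope power calculation—not from the essay, just a standard normal-approximation sample-size formula, assuming the conventional 5% two-sided significance level and 80% power—shows roughly how many qubit runs would be needed before such a shift could honestly be called significant.)

```python
import math

def trials_needed(p0, p1, z_alpha=1.96, z_beta=0.84):
    """Approximate number of trials to detect a shift from p0 to p1.

    Uses the normal approximation to the binomial; z_alpha and z_beta
    are the standard normal quantiles for a two-sided 5% significance
    level and 80% power (conventional choices, assumed here).
    """
    effect = abs(p1 - p0)
    variance = p0 * (1 - p0)  # variance of a single trial under the null
    n = ((z_alpha + z_beta) ** 2) * variance / effect ** 2
    return math.ceil(n)

# Detecting 50.1% vs. 50.0% takes on the order of two million trials:
print(trials_needed(0.500, 0.501))
```

At a few attempts per night, that is several thousand years of concentration—which is some measure of how hard the squint has to be.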
Okay, but why not? I might insist that experimental controls have been properly implemented. I might even allow you to propose a list of criteria for a follow-up experiment. I might grow impatient, and thrust papers at you on meta-analyses of the parapsychological literature, and pass you a copy of Synchronicity—or at least my review of it—showing how the founders of quantum mechanics were themselves interested in psi effects. Look, I might huff, have you not read Freeman Dyson on the possibility of ESP? Have you not noticed that even critics suggest further study?
I hope this does not describe the way I actually behave. But why not? What about this response, rudeness aside, would be so bad?
How scientists should spend their time is a pressing question for individual scientists, if not for society at large. But what if their incentives get corrupted? Huw Price, writing for this very website, argued that an unpopular subject could become a “reputation trap,” a subject whose “image is so bad that many scientists feel that they risk their own reputations if they are seen to be open-minded about it, let alone to support it.”
I haven’t followed up on the scientific questions that Price asked in that essay—he wrote in particular about low-energy nuclear reactions (LENR), aka “cold fusion”—but by now I have accepted that such traps as he describes in fact exist. (My name is attached to critical comments below that article, but I’ll admit it: I should have given Price more credit.) Still, there might be other reasonable ways to discount my telekinetic experiments.
The surest bet that a scientist can make is to work on a project whose results will be significant regardless of the outcome. To push the issue so far back as to be uncontroversial, we might consider Léon Foucault, whose second-most-famous experiment (after his pendulum) measured the speed of light in water. Foucault found that light going from air to water slows down, casting doubt on Newton’s optics, which predicted that light should speed up in dense materials. But the demotion of Newton seems somehow contingent, here, since the measurement would have been significant regardless of how it turned out, pro- or anti-Newtonian. Foucault’s place in the history of science was secure either way.
Not every scientist manages to pick projects that are quite so ripe. Yet in every field a range of possible projects exist. Typically, ambitious scientists work on subjects that are perhaps less glamorous than cold fusion or ESP, but which are perceived to have a better chance of success.
And within this perceived chance of success, even cold fusion and ESP are not equivalent projects. No one can simply dismiss low-energy nuclear reactions out of hand, since their basic promise—that you can get more energy out than you put in as stored nuclear energy becomes thermal energy—relies on the same mass-energy to kinetic-energy conversion at play in other nuclear reactors, just at a much lower energy scale. Questions in LENR, as with most unknowns where scientists place their bets, involve empirical specifics, not fundamental principles.
Psychic powers are a different case entirely. It’s not that scientists necessarily risk sanction (according to the New York Times, paranormal belief is on the rise among Americans), nor that such data is so hard to find (anecdotal evidence—cowards would use scare quotes—is all over the place). The problem, instead, is twofold, and is common to many “paranormal” topics. To the best of my knowledge, psychic phenomena—if real—lack both easy reproducibility and any straightforward physical mechanism.
Inverting this problem gives us a decent rule of thumb for scientific wagering. If an observable effect is easily reproduced, it doesn’t matter whether you have an explanation in hand. Go ahead, make the bet. And if you have a plausible mechanism to explore within the framework of standard science—well, who knows? Your wager might pan out. But effects that are both hard to control and hard to explain…let’s just allow, conservatively, that those are the best bets to avoid.
Of course, this is just a pragmatic rule of thumb, and has disturbingly little to do with anything like reason or truth. It’s one thing to admit that a question is intriguing, but you also don’t want to waste your time. If some problems at hand might yield to the tools that are already in your box, why not hammer on those first? Sure, the world needs its mavericks and martyrs, those heroes who push against the conventional wisdom with little promise of reward, but most of us would rather achieve modest gains than risk making none at all. And so, as my telekinetic qubit experiments are cruelly ignored, progress is likely to continue elsewhere nonetheless.