by Joseph Shieber
Suppose that you were in charge of a large government agency responsible for dealing with public health emergencies. Suppose furthermore that the country is in the midst of just such an emergency, and that there is a particular action that, if a large enough percentage of the public performs it, will alleviate a significant amount of the ill effects of the current public health emergency. Let’s say that the large enough percentage of participation is 70%. The action is one that requires ongoing participation: even if a member of the public is currently performing the action, they have to KEEP performing it in order for the benefits to accrue.
Suppose finally that your agency is the only agency with access to accurate information about the participation rate, and your tracking data suggests that the participation rate is at 50%. At a press conference, a reporter asks you a question about the current participation rate.
So far, this imagined example has none of the farfetched qualities that people associate with philosophical thought-experiments. Indeed, most of the example will involve the sorts of situations that could very plausibly emerge. However, there is one aspect of the example that WILL be fantastical, but not in a way that should affect our judgments about the questions that I want to raise – in fact, this aspect should just serve to sharpen those questions.
Before I get to the fantastical part, it will be useful to remind you of a fact about social influence: social proof is a very powerful tool for activating social compliance. To borrow an example from Robert Cialdini, if I see a sign in my hotel room saying that 75% of guests in this room choose to reuse their bath towels in order to save water and help the environment, I am much more likely to also choose to reuse my bath towel.
Now the fantastical part: suppose that you KNOW (with as much certainty as you like) that if you tell the reporter that 50% of people comply with the public health action, no new people will be convinced to comply with the action and the compliance rate will stay at 50%. However, if you fudge the numbers and say that 70% of people comply, then as soon as that information hits the airwaves there will be a spike in compliance such that it will actually be the case that 70% of people comply with the action.
Note that what’s fantastical here is that you could KNOW these facts about the effects of your statements about participation rates; that such statements could in fact have such effects is not fantastical at all – indeed, it’s exactly what the bandwagon effect predicts.
Now here’s the question: is it obvious in this case that you should tell the truth as opposed to fudging the numbers in a way that, because of the self-fulfilling power of your assertion, will actually result in that assertion’s becoming true?
(To be fair, I’m fudging philosophical details here. The philosophically persnickety reader will of course note that the assertion, in order to express a complete proposition, would have to refer to a particular time at which the compliance rate is 50%. For that reason, if the compliance rate rises at a later time, that would make a DIFFERENT proposition true, but it wouldn’t change the truth value of the proposition you express at the conference.)
The answer to this question is related to the answer to a question debated in Kant’s day — namely the question of whether it is EVER alright to lie. (Kant says it’s not, even if your lie could save a life. For a different interpretation of Kant on lying, see here.)
The complication here is that your lie in this case would not only save lives, but it would also not be a lie for long — once it’s publicized, it will become true. (Again, I’m fudging the details here because of the nature of propositions, but not in a way that matters for the question that I’m after.)
The means-ends (the technical term is consequentialist) answer to the traditional debate about whether it’s ever right to lie is that it would be okay to tell a lie if the harm caused by lying is outweighed by the benefit resulting from the lie. The response to that way of thinking, for someone who wants to raise the stakes against the propriety of lying, is to emphasize that, in weighing the harm caused by lying, one must also consider the knock-on effects that would follow if it became known that people were lying so freely.
Considering the knock-on effects in this case, however, seems to muddy the picture further, rather than clarifying it. To take just one important consideration, suppose that people ALREADY account for the fact that public pronouncements are often strategic — in other words, public figures lie, and people assume a certain amount of lying when they hear a public figure speak.
In such a case, if you (remember, you’re the head of the government agency) say that the public health intervention has a 50% compliance rate, then people are likely to assume that you’re already fudging the numbers, which might well reduce the compliance rate to 30%.
In other words, if we weigh your public pronouncement based on whether it accurately corresponds to the rate of public participation at the time at which your pronouncement is widely disseminated, your pronouncement is doomed to fail to reflect the numbers regardless of what you say. At that point, wouldn’t it be understandable if you chose to make the pronouncement that actualized the best public health outcomes, even if it amounted to a fudging of the numbers at the time at which you made the pronouncement?
The self-actualizing aspect of this case is reminiscent of a different discussion, that of William James’s example of the “Alpine wanderer”:
Suppose, for instance, that you are climbing a mountain, and have worked yourself into a position from which the only escape is by a terrible leap. Have faith that you can successfully make it, and your feet are nerved to its accomplishment. But mistrust yourself, and think of all the sweet things you have heard the scientists say of maybes, and you will hesitate so long that, at last, all unstrung and trembling, and launching yourself in a moment of despair, you roll in the abyss. In such a case (and it belongs to an enormous class), the part of wisdom as well as of courage is to believe what is in the line of your needs, for only by such belief is the need fulfilled. Refuse to believe, and you shall indeed be right, for you shall irretrievably perish. But believe, and again you shall be right, for you shall save yourself. You make one or the other of two possible universes true by your trust or mistrust,—both universes having been only maybes, in this particular, before you contributed your act. (James, “Is Life Worth Living?”, Section IV)
James’s point in invoking the “Alpine wanderer” (something he does in multiple essays) is to suggest that some beliefs have exactly the sort of self-actualizing property that I’ve attributed to the case of the public health pronouncement. The wanderer’s belief in their ability to make the leap makes it so, just as your pronouncement, on behalf of the public health agency, that 70% of the public is compliant, also makes it so.
In the case of belief, the contrasting view is W.K. Clifford’s: that “it is wrong always, everywhere, and for any one to believe anything upon insufficient evidence” (from Clifford’s Lectures and Essays; compare this and, from 3QD, this).
Whereas I don’t find Clifford’s position very attractive in the case of belief, I do find the corresponding rejection of the fudging of the numbers in the assertion case far more compelling. I’m still working out why, but I’ll save that for another essay.