Pushing the right beliefs, for the wrong reasons

by Julia Galef
For a crash course in the tactics of persuasion, you can’t do much better than religion. Religious rhetoric is thick with arguments that win people over despite being logically flawed. Just a few of the most common:
Appeals to authority: “Believe in God, because your parents and teachers tell you to.”
Appeals to consequences: “You should believe in God because without Him, people would be wicked.”
Anecdotal evidence: “I prayed for my mom who had cancer, and she recovered.”
Ad hominem: “People who don’t believe in God are wicked.”
Appeals to fear: “Believe in God, or you will suffer for eternity.”

Atheists, skeptics, and rationalists complain about arguments like these, and rightfully so. None of the above constitutes good evidence for the existence of a God. But there’s a reason religions use those appeals to authority, consequences, and fear — they work. The unfortunate truth is that people seem to be more susceptible to certain irrational arguments than they are to rational ones, which raises a troubling question for those of us who would like to combat false beliefs in society: Should we make the argument that constitutes the best evidence for the true claim, or the argument that’s most likely to persuade the person we’re talking to?

To be clear, I’m not talking about lying. I’m talking about making an argument which is true but which isn’t good evidence for the claim you’re trying to advance. So, for example, let’s say I wanted to convince a Catholic of the truth of the theory of evolution. My first instinct might be to lay out the evidence for the theory, showing them natural selection at work, pointing to transitional fossils, and so on. If my goal is to change their belief, however, I’d probably be better off explaining that the Vatican’s position is that evolution is consistent with Catholic dogma. That appeal to authority is going to be more persuasive, for someone who already trusts the authority in question, than an appeal to the relevant evidence.

Alternatively, let’s say I want to disabuse someone of certain right-wing political views. I could lay out the empirical and logical evidence for why I think she’s wrong about, say, Obama being an alien or about the Democrats trying to impose “death panels” on the country. Or I could tell her about the sordid personal life of one of her favorite right-wing ideologues, like Rush Limbaugh, in an attempt to discredit his political views. It’s an ad hominem attack, but given the degree to which people’s views are influenced by those of people they like and trust, it might well help change her mind.

Another conflict between an argument’s quality and its persuasive power that I face all the time is the use of anecdotal evidence. For most people, a single compelling story is more convincing than a statistic. (“One death is a tragedy; a million is a statistic,” said Stalin; he may not be the sort of fellow you’d like to invite round for tea, but he was right on this count.) That’s why charities have learned that instead of trumpeting statistics about the hundreds of thousands of people dying in civil wars or from malaria or malnutrition, it’s much more effective to give the public a compelling story about a single suffering person. The average amount of money people are willing to contribute “to save one child” is consistently larger than the average amount of money people are willing to contribute “to save eight children.”

So if I want to persuade someone to vaccinate her children, I might be tempted to use arguments about how the original paper showing a connection between vaccines and autism was fraudulent, about how subsequent research has failed to find any evidence for such a connection, about how the removal of thimerosal (the allegedly offending ingredient in vaccines) was followed by an increase in autism, not a decrease, and about how rates of diseases like mumps and measles are skyrocketing because of parents’ refusal to vaccinate their children. Those are all good arguments, epistemically. But I might have more success if I simply told a poignant story about a child who died from the measles because of the decline in vaccination rates.

Of course, tactics like anecdotal evidence, ad hominem attacks, and appeals to authority aren’t as ethically problematic as outright lies. But I think there’s still some implicit deception occurring when you try to convince someone to believe a claim by making an argument which is true but which is not a good reason for believing that claim. Simply by virtue of the fact that you’re making a particular argument, you’re arguably implying that you think it’s a good reason for believing the claim in question. So if I cite the Pope’s support of evolution or Limbaugh’s hypocrisy in dealing with drug addiction, in an attempt to persuade someone to believe in evolution or disbelieve in right-wing ideology, I’m implicitly saying that I think the Pope’s support is a good reason to believe in evolution and that someone’s personal hypocrisies are a good reason to disbelieve in a political view he happens to hold. And that would be a lie.

But even if you’re not concerned about the ethical questions surrounding these persuasive tactics, you should at least be concerned with the tactical questions. For one thing, opting for a persuasive argument over a good one means you’re passing up the chance to make a case for critical thinking. If I convince someone not to believe in astrology because there’s no evidence for it and no scientific basis for the theory, then I’ve reinforced in him the understanding that he should judge claims on evidence and scientific logic. But if I convince him not to believe in astrology by saying, “You know that famous astrologer? She denounced the field last year,” then I’ve reinforced in him the understanding that he should listen to authority figures.

Also, if you take the approach of persuading someone with the evidence that is most convincing to him personally, but which isn’t the real reason you believe the claim in question, then you’re setting yourself up for failure if that evidence turns out to be false. So, for example, I know a lot of people who make the case against discrimination against gays by arguing that homosexuality is innate. And judging from the ubiquity of that argument, it does seem to be one of the most persuasive arguments for gay rights as far as the general public is concerned.

Yet while I agree that the evidence is overwhelming that homosexuality is innate, I’m loath to make that argument, because in my opinion that’s not the real reason we shouldn’t discriminate against homosexuals. The real reason, as far as I’m concerned, is that it’s none of our business if consenting adults want to sleep with each other, as long as they’re not hurting anyone else. By making the “homosexuality is innate” argument, I’d be staking my anti-discrimination case on an empirical question which, if it unexpectedly turned out to be false, would seriously undermine what is actually a very worthwhile case.

But it’s possible that’s a risk worth taking. All of the issues I’ve discussed, and many of the issues I find myself debating publicly and privately, are more than mere academic questions. There are real and often serious consequences to people’s beliefs about God, evolution, vaccines, astrology, homosexuality, and politics – both for the people themselves, and for the rest of us. That means pure idealism about how we choose to persuade is a luxury we probably can’t afford. I’m not at all certain where to draw the line on the spectrum of persuasion, from smiles at one end to lies at the other. But at the least, I think it’s important to recognize that there is often a tradeoff between a good argument and a persuasive one, and to ask ourselves what our goal really is: improving people’s beliefs, or improving the processes of reasoning that they use to arrive at their beliefs?