The Short Shelf Life of “Longtermism”

by Tim Sommers

The New Headquarters of the Effective Altruists, Wytham Abbey, Oxfordshire

Despite what you might have heard, it almost certainly wasn’t Yogi Berra or Samuel Goldwyn who said it. It may be an old Danish proverb. But it is probably a remark made by someone in the Danish Parliament between 1937 and 1938, recorded without attribution in the voluminous autobiography of one Karl Kristian Steincke. It being:

“It is difficult to make predictions, especially about the future.”

This is one reason you should probably be much less concerned with the end of the world than longtermists like Sam Bankman-Fried, Elon Musk, Peter Thiel, and William MacAskill are – or claim to be.

Here’s a slightly more accurate, if more pretentious, way of putting it, this time unequivocally from Wittgenstein:

“When we think of the world’s future, we always mean the destination it will reach if it keeps going in the direction we can see it going in now; it does not occur to us that its path is not a straight line but a curve, constantly changing direction.”

That’s the moral. Here’s the story.

It was not MacAskill, Hilary Greaves, or Nick Bostrom – much less Bankman-Fried – that came up with longtermism, perhaps the most controversial element of the most controversial and visible philosophical and moral movement of the twenty-first century, “effective altruism.” Longtermism, specifically, is the view that we owe the future a certain priority over the present, especially when it comes to existential risks, like nuclear war, pandemics, artificial intelligence, and nanotechnology.

Three things I should mention to begin. One, I left off climate change as an existential risk to highlight that MacAskill, in his recent book What We Owe the Future, argues, much to the chagrin of climatologists everywhere, that climate change is not an existential threat. Two, for me, it’s already evident that something has gone wrong with a view that puts the existential hazards of nanotech, or even AI, on par with nuclear war or climate change. You might as well prioritize the existential risks of time travel. Three, while any definition is going to be controversial here, and there are many longtermisms, if a view doesn’t include giving some kind of priority to the future, leaving aside how and how much, it can’t be something we should call longtermism.

So, where does longtermism come from? It comes from an argument made by Derek Parfit in the final section, section one hundred and fifty-four, of his massively influential book Reasons and Persons, a section titled “How Both Human History, and the History of Ethics, May Be Just Beginning.”

Here’s the first bit of the argument.

“Compare three outcomes: (1) Peace. (2) A nuclear war that kills 99% of the world’s existing population. (3) A nuclear war that kills 100%. (2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.”

Parfit says that two very different groups of philosophers would agree with him on this. For simplicity, and because Parfit, like effective altruists, is a utilitarian, I am going to stick to the utilitarian version of the argument (as opposed to what I would call the “perfectionist” version of the argument, which I would be happy to discuss in the comments, along with anything else).

Anyway, Parfit says:

“The Earth will remain inhabitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history. The difference between (2) and (3) may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second…Classical Utilitarians…would claim, as Sidgwick did, that the destruction of mankind would be by far the greatest of all conceivable crimes. The badness of this crime would lie in the vast reduction of the possible sum of happiness.”

That’s pretty much it.
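
For what it’s worth, the arithmetic in the day analogy checks out. Here is a quick back-of-the-envelope sketch, reading “a few thousand years” as roughly 5,000 and taking the billion-year figure at face value (my readings of those phrases, not Parfit’s):

```python
# A back-of-the-envelope check of Parfit's "day" analogy (illustrative numbers only).
civilized_years = 5_000           # "a few thousand years" of civilization so far
habitable_years = 1_000_000_000   # the billion years Parfit says remain
seconds_in_a_day = 24 * 60 * 60   # 86,400

fraction_so_far = civilized_years / habitable_years
print(f"Fraction of civilized history already behind us: {fraction_so_far:.6f}")
print(f"Scaled to a 24-hour day: {fraction_so_far * seconds_in_a_day:.2f} seconds")
# Roughly 0.43 seconds: Parfit's "only a fraction of a second."
```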

I greatly admire Parfit. (In fact, I take every chance I can find to tell people I had a class with him. See, I’m doing it right now.) And I love thought experiments and counterintuitive arguments. (See, for example, my “Life’s a Puzzle.”) But a great danger of thought experiments is that by stipulating away so much detail, we can inadvertently conceal that the argument proves much less than it appears to.

Just to make sure we are all on the same page. Consequentialism is the view that the right thing to do is always whatever has the best overall consequences. Utilitarianism combines consequentialism with the view that the only thing that is intrinsically valuable, and/or matters morally, is the utility (i.e., welfare, well-being, flourishing, etc.) of “persons” or “sentient beings” (or some properly specified set of those covered by morality). It’s important to see, however, that utilitarians don’t care at all about the distribution of welfare across people, only about maximizing the sum total. If sacrificing some for the greater good maximizes utility, we are morally required to do it. Utilitarianism is the ultimate ‘the ends justify the means’ morality. (Before you reject it out of hand, however, you might want to ask yourself, ‘If the ends don’t justify the means, what does?’) But I don’t want this to turn into an examination of utilitarianism.

The point is that if we assume there will be many more people in the future than in the past, as long as these people in the future have a net positive sum of welfare, then the total amount of utility across human history depends mostly on the existence of these future people. In fact, there will be so many more people in the future than there are now (according to longtermists) that people now will matter very much less than these future people. Hence, morality should prioritize long-term existential risks to the existence of these future people over everything else.
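
To see how quickly the sums run away, here is a minimal sketch with illustrative numbers of my own choosing (roughly a hundred billion humans have lived so far on standard demographic estimates; ten quadrillion is the sort of speculative figure longtermists like Bostrom float for potential future people, not anything anyone actually knows):

```python
# Illustrative totals only; the point is the ratio, not the particular numbers.
past_and_present_people = 1e11  # ~100 billion humans to date (rough demographic estimate)
possible_future_people = 1e16   # the sort of figure longtermists float (pure speculation)
average_welfare = 1.0           # assume the same net-positive welfare per person

total_past = past_and_present_people * average_welfare
total_future = possible_future_people * average_welfare

share_in_future = total_future / (total_past + total_future)
print(f"Share of all utility located in the future: {share_in_future:.5%}")
# On these assumptions, present people are a rounding error: ~99.999% of utility is future.
```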

I won’t linger on it, but one worry is that this is the kind of argument that has been used in the past to justify some pretty bad behavior. (For examples, see The Rebel by Camus or Koestler’s Darkness at Noon. Or longtermist philosopher Nick Bostrom, who has said that the two world wars, AIDS, and the Chernobyl nuclear accident, while “tragic…to the people immediately affected, [are] in the big picture of things…mere ripples on the surface of the great sea of life.” The concern is that a focus on “the great sea of life” that turns the Second World War into a ripple might be, um…dangerous?)

But to return directly to Parfit. Consider some possibilities he leaves out. What if (2) occurs, and then a generation later the 1% that were left after (2) are killed by a pandemic or an asteroid strike? Or suppose that the 1% who survive, perhaps chagrined by the enormous damage done to the environment by their nuclear war, decide to keep the human population very small, and they all die out before the number of people born after (2) exceeds the number born before (2)? Finally, Parfit says the Earth is likely habitable for another billion years. Well, it’s probably been habitable for at least three, maybe four, billion years already. Suppose we invent time travel (but not practical space travel) and discover that we could make it the case that many more people have already lived in the past than are likely to live in the future?

What’s the point of these examples? The point is that “the future” and “existential risks” are red herrings. What makes (2) worse than (1) is exactly the same as what makes (3) worse than (2). That the difference in the sum of utility between (3) and (2) is supposed to be very much greater than the difference between (1) and (2) is either an assumption or a prediction. If it is an assumption, then the argument proves nothing. We are still where we started: ‘Do whatever maximizes utility.’ If it’s a prediction, then it is an empirical claim and not a tenet of morality that is at stake.
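
If it helps, here is the same point as toy arithmetic. Every number below is hypothetical; the only thing doing any work is the assumed size of the future:

```python
# A sketch of the comparison behind Parfit's claim, under hypothetical utility totals.
u_present = 8e9   # welfare of everyone alive now, in arbitrary units
u_future = 1e16   # welfare of everyone who would live later, IF humanity survives

peace = u_present + u_future                   # outcome (1)
war_99_percent = 0.01 * u_present + u_future   # outcome (2): the 1% survive and rebuild
war_100_percent = 0.0                          # outcome (3): no one left, so no future either

diff_1_2 = peace - war_99_percent              # = 0.99 * u_present
diff_2_3 = war_99_percent - war_100_percent    # = 0.01 * u_present + u_future

print(diff_2_3 > diff_1_2)  # True, but only because u_future was assumed to be enormous.
# Set u_future to zero (or something modest) and the ranking flips: the "much greater"
# difference is an assumption about the future, not a theorem of morality.
```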

Look at it another way. At best, one might think, Parfit’s argument proves that, for utilitarians, we should always choose having more people over fewer. But it doesn’t really even prove that. That depends on assuming that the “more people” have, as a group, overall greater utility than whatever smaller group we are comparing them to has. The future only matters more now, in other words, if the people in it have more utility (either because there are more of them, or because enough of them have high enough utility). As always, all that matters for utilitarianism is maximizing utility. The argument tells us nothing about what to do for – or about – future people. It’s actually a bunch of factual assumptions, if anything, that take us to longtermism from here.

To see that more clearly, let’s close with MacAskill’s version of the argument for longtermism.

(1) Future people have moral worth.*

(2) There could be a very large number of future people.

(3) What we do today can affect the lives of future people in the long run.

MacAskill treats this as the official argument for longtermism. But all of these premises are uncontroversial. How do they get us to something as controversial as longtermism from such weak, widely accepted premises? Maybe, as Wittgenstein said, all of philosophy is just assembling “reminders for a particular purpose.” But I don’t think so. I think that, alone, these premises don’t get us to longtermism at all. There’s no reason to prioritize people in the future, even if there could be a lot more of them, since we don’t really know how to help them, whereas we could do things now almost guaranteed to help people now – who, in any case, are the ones who could bring about the later existence of these other people. There is a further premise, however, buried in MacAskill’s explanation of premise three.

“[W]hile it is difficult to foresee the long-run effects of many actions, there are some things that we can predict. For example, if humanity suffered some catastrophe that caused it to go extinct, we can predict how that would affect future people: there wouldn’t be any. This is why a particular focus of longtermism has been on existential risks.”

Notice the trick. He says that despite the limits on our ability to predict the future, we can definitely predict that if people go extinct then there won’t be any future people – and that’s why longtermism is all about existential risks. But that’s not at all what the dispute is. This is a tautology. If something is an existential threat and it occurs, then we can count on our nonexistence after that. But the dispute is whether we can now predict what the real existential threats actually are, how likely they are, and/or how we can succeed at preventing, or at least reducing the risk of, them – not to mention, we need to know how to weigh the possibility of such events against the sacrifices and costs of trying to prevent them. We can’t focus on the greatest risks if we don’t know what they are. So how do we justify prioritizing the future welfare of hypothetical people whose situation we can only guess at over the welfare of real, knowable people right now?

Let me put it like this. I don’t want to talk about the financial and legal woes of the most famous (for the moment), aforementioned longtermist on the planet, or whether his altruism was sincere or cover for his (alleged) crimes. But I will say that the most ironic irony in that saga is not the irony of a selfish guy pretending to be altruistic. It’s the irony of someone committed to a view so utterly dependent on predicting the future, who is also so very bad at doing so. But then – and here is my real point and where we started – we all are.

_______________________________________________

*I said that these premises are not controversial, but that’s not strictly true. The claim that future people have moral worth is contestable. After all, future people do not now exist. How can something that does not exist have moral worth? It’s likely that there will be people in the future and, if so, they will have moral worth. But that is different from saying that these people, whoever they turn out to be eventually, currently have moral worth, even though they don’t exist. This is especially true since, as Parfit may have been the first to point out, who exists in the future is determined by what we do now. If you exist in the future, and your life is worth living, it seems you can have no complaint against those who brought you into existence, since, had they acted differently, you would not be alive and leading a life worth living. This is called the nonidentity problem. Maybe, therefore, we should prioritize people who exist over those who don’t, rejecting even parity for future, but nonexistent, people.