by David Kordahl
Adam Becker alleges that tech intellectuals overstate their cases while flirting with fascism, but offers no replacement for techno-utopianism.

People, as we all know firsthand, are not perfectly rational. Our beliefs are contradictory and uncertain. One might charitably conclude that we “contain multitudes”—or, less charitably, that we are often just confused.
That said, our contradictory beliefs sometimes follow their own obscure logic. In Conspiracy: Why the Rational Believe the Irrational, Michael Shermer discusses individuals who claim, in surveys, to believe both that Jeffrey Epstein was murdered and that Jeffrey Epstein is still alive. The two claims cannot both be true, but for the believer they may function less as independent assertions than as paired reflections of the broader conviction that Jeffrey Epstein didn’t kill himself. Shermer has called this attitude “proxy conspiracism.” He writes, “Many specific conspiracy theories may be seen as standing in for what the believer imagines to be a deeper, mythic truth about the world that accords with his or her psychological state and personal experience.”
Adam Becker’s new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, criticizes strange beliefs that have been supported by powerful tech leaders. If you read 3 Quarks Daily, there’s a good chance you have encountered many of these ideas, from effective altruism and longtermism to the “doomer” fears that artificial super-intelligences will wipe out humankind. Becker—a Ph.D. astrophysicist-turned-journalist, whose last book, What Is Real?: The Unfinished Quest for the Meaning of Quantum Physics, mined the quantum revolution as a source of social comedy—spends some of his new book tracing the paths of influence in the Silicon Valley social scene, but much more of it pummeling the confusions of the self-identified rationalists who advocate positions he finds at once appalling and silly.
This makes for a tonally lumpy book, though not a boring one. Yet the question I kept returning to as I read More Everything Forever was whether these confusions are the genuine beliefs of the tech evangelists, or something more like their proxy beliefs. Their proponents claim these ideas should be taken literally, but they often seem like stand-ins for a vaguer hope. As Becker memorably puts it, “The dream is always the same: go to space and live forever.”
As eventually becomes clear, Becker thinks this is a dangerous fantasy. But given that some people—including this reviewer—still vaguely hold onto this dream, we might ponder which parts of it are still useful.
Consider Will MacAskill, the bestselling author of Doing Good Better (2015) and What We Owe the Future (2022), and the first cautionary figure Becker discusses. MacAskill started as a young follower of Peter Singer, the moral philosopher whose views on issues like infanticide (sometimes pro) and animal agriculture (always anti) have long been controversial. MacAskill found himself convinced by Singer’s arguments that the scope of our moral concern should extend widely, across the lines of state and species, and he became one of the early proponents of “effective altruism,” a movement that encouraged people to give money to those charities that proved most effective at reducing global suffering, per dollar given.
From the beginning, effective altruists supported certain counter-intuitive claims. For instance, the idea of “earning to give” held that the optimal moral act for some people is to make a lot of money, so as to increase their ability to give it away. Early EA leaders also insisted that not all charities are equivalent, arguing, e.g., that a dollar spent on mosquito netting to prevent the spread of malaria was much better spent, in terms of moral efficiency, than a dollar spent on cancer research.
In time, EA ideas became even more unusual. By the time he wrote What We Owe the Future, Will MacAskill had come to believe that we should further extend our moral concern to those beings who are not yet alive, but who might live in the distant future. This idea was known to EA enthusiasts as longtermism.
Absent a crystal ball, it’s hard to know how best to spend your money on future people. It requires you to rough out the trajectory of, let’s say, the next million years. So what vision did EA leaders adopt? They borrowed views associated with figures like Ray Kurzweil and Eliezer Yudkowsky, who split history into before and after the advent of artificial general intelligence (AGI), which they suppose will lead to a technological “singularity,” characterized by unfathomably rapid change.
In books such as The Singularity Is Near, Ray Kurzweil has argued that this singularity will produce sweeping alterations in our relationship to the world. For Kurzweil, this is good news. The abundant availability of intelligence promises a future where our most fundamental problems can potentially be solved—including getting rid of illness and death. (The exact mechanism is still TBD.)
Now, you may question the plausibility of this scenario. Becker certainly does. But Kurzweil’s second-generation followers, including Eliezer Yudkowsky, take AGI and its attendant advancements for granted. What they don’t take for granted is that these changes will be good for us. Yudkowsky has made a name for himself warning of “AGI misalignment,” the worry that the goals of AGIs may not be aligned with those of humans. More Everything Forever opens with Yudkowsky, who speaks with Becker at the Machine Intelligence Research Institute, the think tank he founded, about the “glorious transhumanist future” that awaits if alignment is achieved, and the brutal AI apocalypse that might transpire in its absence.
For longtermist effective altruists like Will MacAskill, taking such scenarios seriously pushed them out of the “bed-nets era” (as a New Yorker profile of MacAskill characterized it) and into AI research. The basic argument will be familiar to anyone who has heard the case for the near-certainty of extraterrestrial life: multiply the very small, and essentially unknown, probability of life arising around any given star by the very, very large, and reasonably well-known, number of stars, and you arrive at a probability near 100 percent. EA leaders apply the same move to AI research. The probability that any particular research project will alter the future of humankind is very small, but—on longtermist estimates of an optimal future—the number of future people it might influence is very, very large.
Hence the longtermist argument for think-tank funding.
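To see how lopsided this arithmetic can become, here is a toy calculation in Python. Every number in it is hypothetical, chosen only to show the shape of the argument; none of the figures come from Becker, MacAskill, or the EA literature.

```python
# A toy expected-value comparison. All numbers are made up, chosen only
# to show how the longtermist arithmetic becomes lopsided; none of them
# come from Becker or from the EA literature.

donation = 1_000_000  # dollars, hypothetical

# Bed nets: assume $5,000 per life saved, a round number in the spirit
# of commonly cited cost-effectiveness estimates.
lives_saved_bednets = donation / 5_000

# Speculative research: assume a one-in-a-million chance that the funded
# project steers the future, multiplied by an assumed 10**30 future people.
p_influence = 1e-6
future_people = 1e30
expected_lives_research = p_influence * future_people

print(f"Bed nets:  ~{lives_saved_bednets:,.0f} lives saved")
print(f"Research:  ~{expected_lives_research:.0e} expected future lives")
```

However small the probability of influence is made, a large enough assumed future population swamps any concrete, present-day benefit.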
Now, again, you may question whether this argument hangs together. While reading about these ideas, I reflected that my own life would not pass EA muster. There are surely ways that I could earn more to give more, and, in any case, I basically spend all my money raising my three children. I might also wonder what compels me to follow utilitarian arguments in the first place, since I find myself respecting some confused mish-mash of Christian traditions and inherited duties.
Such small-scale considerations are not the sort Becker discusses. His typical attack is a one-two punch: first he criticizes the science, then he adds that even if the science were sound, he would still morally object.
In the case of MacAskill’s longtermism, Becker first lays out the limits to energy growth. “If humanity’s energy usage continues to grow by […] 2.3 percent per year, then in about four hundred years, we’d reach Earth’s limit—we’d be using as much energy as the Sun provides to the entire surface of the Earth annually.” He follows this reasoning to its absurd end, a few millennia later, when this growth rate would require capturing all the output of all the stars in the observable universe.
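Becker’s four-hundred-year figure is a straightforward exponential-growth calculation, and the sketch below reproduces it with rounded inputs that I have supplied (roughly 20 terawatts of present-day world energy use against the 170,000 terawatts of sunlight striking Earth); the precise values shift the answer by only a few decades.

```python
import math

# Rounded inputs assumed here (not Becker's exact figures):
current_power = 2e13    # watts: roughly present-day world energy use (~20 TW)
solar_input = 1.7e17    # watts: total sunlight striking Earth (~170,000 TW)
growth_rate = 0.023     # the 2.3 percent annual growth quoted above

# Years until growth at this rate runs into the solar ceiling:
years = math.log(solar_input / current_power) / math.log(1 + growth_rate)
print(f"~{years:.0f} years")  # prints ~398, close to Becker's "about four hundred"
```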
But after this, he introduces the real problem, which he claims is the ideology of technological salvation. This is where I lose the line of argument. Becker assures us, “This future (or set of futures) doesn’t work,” and discusses some known difficulties with space settlement. But he soon moves on to MacAskill’s association with Sam Bankman-Fried, whose commitment to EA was widely reported as a factor that led him toward risky bets, and ultimately to imprisonment.
Becker’s frequent implication of guilt by association mars his book. More Everything Forever includes many fine discussions of instances where tech enthusiasts have overstated their cases, but I often found myself disagreeing with his moral opprobrium. If an outcome is impossible, we shouldn’t also find it scary.
The best skeleton key to Becker’s moral critique is in a footnote on page 265 discussing the racism, colonialism, etc., of early influences on classic science fiction:
Cosmism, with its promise of eternal life in space through technology—and its colonialist and eugenicist logic—has a clear ideological link to modern movements like transhumanism and singularitarianism. Through cosmism’s influence on twentieth-century science fiction, the link is historical as well. Timnit Gebru and Émile Torres have dubbed this set of related ideologies (traced throughout this book) the TESCREAL bundle: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Gebru and Torres have done extensive work linking these ideologies to each other and to the core of racist logic they share.
I invite readers to look at the article by Gebru and Torres online. It alleges that racism, xenophobia, classism, ableism, and sexism are baked into the modern push toward artificial general intelligence, since the “TESCREAL bundle” is adjacent, in its hopes for a “glorious transhuman future,” to twentieth-century eugenics.
As is probably clear by this point, this seems to me like just more guilt by association. There is a vague family resemblance, insofar as twentieth-century eugenicists and twenty-first-century TESCREALs both want(ed) a better world through science, but one cannot simply pass the sins of one group on to the other.
At the end of More Everything Forever, Becker allows himself to admit that he, too, was once a techno-optimist, as a child watching TV shows like Cosmos and Star Trek: The Next Generation — “But then I grew up.” In the final pages, he muses on what might take the place of our hopes for an immortal future in space, and he admits he doesn’t know. The best suggestion he has is taxing billionaires out of existence…which is fine, but hardly a replacement for space immortality.
Then again, maybe the dream of space immortality has itself always been a proxy for something else. I expect that Becker would be quick to agree that Utopias rarely live up to their promises, but maybe that’s not the point. Suppose, instead, that we take these philosophical scenarios as motivating us to find ways to live longer lives, to build better software, and to explore ever vaster stretches of space. Even if no singularity ever arrives, these are already worthwhile outcomes.
