How to Avoid the Eugenics Wars: Principles for Enhancement Alignment

by Kyle Munkittrick

Gemini’s “60s Psychedelic Poster of The Culture”

We’re on the cusp of The Culture. This Iain M. Banks series has recently replaced Star Trek as the lodestar for The Future We Want. Why? Because it shows us how good the future can be with AI. Critically, both fictions also offer key lessons about the threat and promise of human enhancement: it is coming, it could be amazing, but first it will be contentious, and if we're not ready, we'll suffer its worst harms and get few of its best benefits. We want The Culture, and to get there we need to take Star Trek seriously.

In his excellent essay for Arena, Dean Ball asked, in essence, "Where are the clearly articulated benefits of a world with AI? Why is it worth all this risk?" The answer to Ball is, "Go read The Culture. Start with The Player of Games." Imagining a peaceful, prosperous, and pluralistic post-scarcity utopia that is that way because it is run by benevolent ASI (called 'Minds') is difficult. Imagining a world where AI has 'solved' biological and medical science so completely that its citizens are fundamentally post-human is even more so. The vision of The Culture is so grand and so alien that it takes a series of novels, following the daily lives of the protagonists, to begin to grasp just how incredible the future with AI could be.

In Star Trek, by contrast, humanity reaches the stars with medical technology seemingly not much advanced beyond that of the 20th century. That is a limitation not of science, but of society. Before reaching the stars, humans endured the Eugenics Wars—a global conflict arising from first the reckless pursuit of, then catastrophic backlash to and banning of, human enhancement technologies. Think, 'The Butlerian Jihad, but for biology.'

Both pieces of fiction are important because of what is very likely about to happen. In his 15,000-word manifesto Machines of Loving Grace, Anthropic CEO Dario Amodei bolds one, and only one, paragraph:

“[M]y basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.”

Let's call this ~60% year-over-year acceleration of progress the "BOOM (Biological Orders of Magnitude) Decade" (2025-2035). To 'feel the BOOM', imagine going from discovering antibiotics (~1930) to all the medical technology we have today (IVF, MRI, GLP-1s) by 1940. People would freak out, to put it mildly. Society needs frameworks and mental models to be able to absorb and adjust to change at that speed. Just as AI alignment principles guided AI development, we urgently need enhancement alignment principles to guide the coming biological revolution. Without AI alignment, we risked creating Skynet or the Paperclip Maximizer; without enhancement alignment, we risk the Eugenics Wars—either through reckless implementation or through panicked prohibition that prevents beneficial technologies.
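To put rough numbers on that intuition, here is a minimal back-of-the-envelope sketch in Python. This is my toy model, not Amodei's math: it simply assumes the pace of research compounds at a fixed year-over-year rate and sums the resulting "equivalent years" of today's-pace progress.

```python
# Back-of-the-envelope: if the pace of bio/medical progress compounds
# at rate r per year, how many "equivalent years" of today's-pace
# progress accumulate over a decade? (Illustrative assumption, not a
# measurement; year t delivers (1 + r)**t years' worth of progress.)

def equivalent_years(r: float, years: int = 10) -> float:
    """Sum of the geometric series (1 + r)^0 + ... + (1 + r)^(years - 1)."""
    return sum((1 + r) ** t for t in range(years))

for r in (0.25, 0.50, 0.60):
    print(f"{r:.0%} YoY for a decade ≈ {equivalent_years(r):>4.0f} equivalent years")

# 25% YoY for a decade ≈   33 equivalent years
# 50% YoY for a decade ≈  113 equivalent years
# 60% YoY for a decade ≈  182 equivalent years
```

On that toy reading, ~50-60% year-over-year acceleration packs one to two centuries of today's-pace progress into a single decade: the "compressed 21st century" and then some.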

In preparation for the BOOM, I propose a set of principles for human enhancement technologies (HETs) as a starting point for the conversation and as fodder for consideration by both the humans and the AI who will be building these biological and medical technologies.

The Problem

We face twin challenges with human enhancement technologies:

First, we fear biological change. From IVF to organ transplants, our initial reaction to biological innovations is often repugnance. Leon Kass championed this as the "wisdom of repugnance," but our history suggests otherwise—we repeatedly knee-jerk reject technologies (stem cell research) or declare them unnatural (IVF) before eventually embracing their benefits.

This fear was manageable when biological technologies diffused slowly, giving society decades to adjust. IVF took thirty years to transition from "playing god" to standard medical practice. But the era of slow biotech is over. We are witnessing weekly breakthroughs that would each have merited a Nobel Prize in previous decades. Even when the evidence is overwhelming that something is incredibly good, like GLP-1s, the instinct is to hunt for problems with it.

Second, our institutions aren't ready. Our entire medical system is built around treatment, not enhancement. Even basic prevention is poorly supported. Physician Peter Attia is considered radical for suggesting low-dose statins before disease appears. If keeping people healthy is controversial, enhancing them beyond "normal" will trigger far stronger reactions. You can practically hear the simultaneous eye-rolling and hand-wringing in the headline of the Time profile of Bryan Johnson.

Meanwhile, academic bioethics—the field meant to guide us through these questions—appears unable to address them. The current biotech revolution is being covered by economists, entrepreneurs, and podcasters while bioethicists remain largely silent. As Noah Smith points out, our politicians and governments aren't prepared for the future in general. Dean Ball is pretty blunt that our institutions will not be ready for AI, let alone the BOOM, and it's hard not to agree. Two natural experiments—the suspension of state licensing restrictions for clinicians during the pandemic, and the way the GLP-1 shortage paradoxically expanded access (compounding pharmacies could legally sell copies while the drugs were in shortage)—demonstrated better approaches that our health institutions promptly ignored once the pressure eased. Not exactly confidence-inspiring for the BOOM decade ahead.

So what do we do? Without clear principles guiding enhancement technologies, we risk either reckless implementation or reflexive prohibition—or a cycle of both, reaping the worst outcomes. Even the pioneering work of thinkers like Julian Savulescu and James Hughes (and legions of dorks, including yours truly, arguing online in the early 2000s), while valuable, hasn’t provided actionable guidance for the accelerated bio-science timeline we now face.

Towards Enhancement

The success of AI alignment efforts, particularly Constitutional AI approaches, offers a model for enhancement. Just as we steer complex AI systems not through rigid code but through principles and values, we won't be able to micromanage enhancement technologies through traditional regulation alone.

I want to strongly emphasize something here: many of the industry-leading AI labs (OpenAI, Anthropic, xAI) were founded on, and because of, AI alignment principles. These might be among the most successful examples of applied philosophy since the founding of the United States. Given how thoroughly The Culture permeates Silicon Valley by way of AI, we have an opportunity to repeat that success with enhancement alignment.

We need principles flexible enough to apply across diverse technologies while robust enough to prevent both reckless implementation and reflexive prohibition. We don't know exactly what the BOOM will look like, so anchoring on particular technologies is a mistake. The following principles are a humble initial attempt to do just that: create a pathway that acknowledges both the tremendous potential and the serious risks of human enhancement technologies, while giving a set of plausible, easy-to-understand heuristics that let us intuitively test whether a technology, company, or policy (especially ones we cannot predict) deserves some extra attention, be it support or scrutiny.

Enhancement Alignment: Principles for Human Enhancement Technologies (HETs)

HETs should be good for individuals

  1. The person being changed is still that person after the change. We want HETs that preserve continuity of identity. Existing things that meet this bar: tattoos, surgeries, getting pregnant, and moving to a new country. Despite their fundamental, in some cases irreversible, impact on our identities and bodies, and despite a clear before-and-after, we recognize that the self is continuous across those milestones. HETs that radically alter the self should be viewed with skepticism.
  2. The person being changed is the person that benefits most. We want HETs that preserve continuity of value. Getting a vaccine or taking a GLP-1 does the most good for me, regardless of population-level benefits (e.g. herd immunity, net cost savings). HETs that require people to change in ways that don't benefit them, only others, should be viewed with maximum skepticism. This is the anti-eugenics principle, reinforced by the other two.
  3. The person being changed would choose this change independent of status. We want HETs that are good from behind the Veil of Ignorance. Many enhancements are desirable regardless of your current status or position in society. For example, nearly everyone would see value in gaining 10 IQ points or an extra decade of healthspan, regardless of any other information.

HETs should be good for pluralistic societies

  1. HETs that are social goods should be voluntary and incentivized. The best HET policies will err towards being voluntary, even for things whose benefits grow with collective uptake (e.g. vaccines). Companies should seek to sell the benefits of an HET product and attract customers. Policies using incentives and encouragement via norms should be strongly preferred over mandates.
  2. HETs that encourage expansive life paths should be preferred over those that encourage specialized life paths. The best HET policies and companies will facilitate increasing the options and capacities available to a person. For example, modifying intelligence to increase base IQ or improve a weak capacity (e.g. creativity) that can be used for nearly any life path is preferred over optimizing for being an engineer or an artist. This is the anti-Brave New World principle.
  3. HETs that encourage heterogeneity should be preferred over those that require homogeneity. The best HET policies and companies will facilitate and thrive amid differences. For example, many existing fertility centers that offer gender selection allow it only for family balancing: you can't choose the gender of your first child, and you can choose a child's gender (e.g. a girl) as part of pre-implantation selection only if you already have a child of a different gender.
  4. Good HETs will move from exclusive to ubiquitous by design. We should celebrate and preferentially support HETs designed with near-universal accessibility as a core goal rather than an afterthought. The iPhone/Bugatti distinction is instructive: Apple deliberately engineered its products to scale from luxury to ubiquity, while Bugatti intentionally remains exclusive. The path where HETs improve while becoming increasingly accessible is fairer and safer. HETs with limited distribution risk creating permanent biological castes and triggering precisely the backlash that led to the fictional Eugenics Wars.

Meta-Principles

  • These are not solutions to The Big Questions. Just as the Hard Problem of Consciousness did not prevent the pursuit of AGI, the Non-Identity Problem and its kin should not prevent the pursuit of HETs.
  • These principles are mutually reinforcing. While each can be evaluated independently, the whole is greater than the sum of its parts.
  • These are not rules, but practical guidelines. "AI must be aligned to exist" was never a rule; instead, the principle "AI should align to our values" guided the process by which AI models have been trained.
  • These are social norms, not laws. There is no governing body of AI, save the companies building it. Anthropic is a major player in the space and, some might argue, builds some of the best models because it embraces AI risk norms, not in spite of them.
  • Technologies that fall short of some or many principles are not evil, nor are the people who build them. They are riskier, in that they may harm more people or lead to a backlash that kills that area of research or sets it back decades.

Principles in Action: Cognitive Enhancement

Cognitive enhancement is the enhancement most analogous and complementary to AI, so let's see how these principles would apply:

Should we improve general intelligence by incentivizing reproduction, rearing, and/or immigration among those with genetic predispositions for high-intelligence capacities (e.g. creativity, g, low neuroticism)? No. This is eugenics. The framing here is intentionally designed to sound like it fits the principles (incentives, general capacities). These policies, however, would not do the most good for the people changed (the people incentivized). The people who benefit most by the policy's own standards, the child and 'society' in general, do so indirectly. That is a clear violation of the second principle, and questionable under all of the pluralistic-society principles.

PKD Inc. is building a 'Vibes Organ' which lets the user 'dial in' a headspace: creative, focused, rational, open. Should they advertise it as the luxury of choice? Maybe! Tesla helped usher in the EV revolution by shifting the perception of EVs from cheap, slow hippie-mobiles to cool, fast sports cars (the Roadster came first, remember!). If PKD Inc.'s mission is "Make mental health the new wealth," we should be skeptical. If it's "Make mental health the new wealth, and make everyone a billionaire," then we should celebrate that latter half and, as OpenAI employees did, be prepared to vote with our feet if we feel the mission is violated.

What is key here is that parts of both of these examples are good. Incentivizing general intelligence capacities is good, but doing it via population mechanisms is not. Giving people control over their mental states is good, but doing so with the explicit goal of keeping it exclusive is not. These principles are designed to let us isolate the risks from the technologies and amend our approach to keep the benefits. They do not tell us what to do; they give our intuitions guidance and precision.
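To make "guidance and precision" concrete, here is a deliberately toy sketch in Python. It is my own illustration, not a real evaluation instrument: the questions compress the seven principles above, and the two assessments paraphrase the worked examples, with False standing in for both "violated" and "questionable."

```python
# Toy illustration: the principles as a checklist. A False answer
# doesn't make a technology evil (see the meta-principles); it flags
# where extra scrutiny belongs. Assessments are judgment calls, not data.

PRINCIPLES = {
    "identity":      "Is the person still that person after the change?",
    "value":         "Does the person being changed benefit most?",
    "veil":          "Would they choose it regardless of status?",
    "voluntary":     "Is it voluntary and incentivized, not mandated?",
    "expansive":     "Does it widen life paths rather than narrow them?",
    "heterogeneity": "Does it thrive amid difference, not require sameness?",
    "ubiquity":      "Is near-universal access a design goal?",
}

def review(name: str, assessment: dict[str, bool]) -> None:
    """Print which principles a proposed HET strains."""
    flags = [key for key in PRINCIPLES if not assessment[key]]
    verdict = "looks aligned" if not flags else "scrutinize: " + ", ".join(flags)
    print(f"{name}: {verdict}")

# Example 1: population-level incentives for "smart" reproduction.
# Violates "value" outright; questionable under the pluralistic-society
# principles, so those are marked False here too.
review("IQ natalism", {
    "identity": True, "value": False, "veil": True, "voluntary": False,
    "expansive": False, "heterogeneity": False, "ubiquity": False,
})

# Example 2: the 'Vibes Organ', if exclusivity is the mission.
review("Vibes Organ (luxury-only)", {
    "identity": True, "value": True, "veil": True, "voluntary": True,
    "expansive": True, "heterogeneity": True, "ubiquity": False,
})
```

A "scrutinize" flag is a prompt for attention, not a verdict, exactly as the meta-principles insist.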

Not a conclusion: a beginning.

These principles are sacrificial drafts rather than final pronouncements. They represent an attempt to do for enhancement what alignment principles did for AI: create a framework that guides development without requiring centralized control. If the AI community hadn’t developed alignment principles before capabilities accelerated, we’d be facing greater risks today. Similarly, we need enhancement alignment principles before the BOOM fully arrives.

If every single one of these principles proves wrong but sparks a productive conversation that leads to better ones, I'll count this a success. The goal is not perfect prediction; the BOOM might be hype and we may just muddle along at our current pace. If that's the case, these principles are still useful! But preventing anything remotely like the fictional Eugenics Wars—be it a 'culture' war (ha!) or an actual war—is essential. My hope is that, if we can use these principles to keep the widest, safest path open during the BOOM years, we will have a chance to actually achieve something like the post-scarcity, post-human utopia of The Culture.

Help Kyle refine these principles. Feedback welcome and appreciated. Leave comments below, tweet @popbioethics, or email kmunkitt@gmail.com
