by Samuel Dunlap
In 2007, the neuroscientist Sabrina Tom slid volunteers into an fMRI scanner at UCLA and offered them coin-flip gambles. Win twenty dollars or lose twenty. Win thirty or lose ten. Will you take the bet? Most people refused any gamble where the potential loss exceeded about half the potential gain. The scans showed why: as potential losses grew, the brain’s reward regions responded more sharply to what could be lost than to what could be gained. Loss aversion — losing something hurts roughly twice as much as gaining it feels good — was visible in the tissue.
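(For readers who want the arithmetic: the "roughly twice" has a standard formalization in prospect theory. The sketch below is illustrative only — it uses the conventional value function and the parameter estimates Tversky and Kahneman published in 1992, not a model from the Tom study.)

```python
# Illustrative sketch of the prospect-theory value function
# (Tversky & Kahneman, 1992) -- not the Tom study's model.
# Conventional parameter estimates:
ALPHA = 0.88    # diminishing sensitivity to both gains and losses
LAMBDA = 2.25   # loss aversion: losses loom roughly twice as large

def value(x: float) -> float:
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def takes_coin_flip(gain: float, loss: float) -> bool:
    """Accept a 50/50 gamble (win `gain`, lose `loss`) iff its subjective value is positive."""
    return 0.5 * value(gain) + 0.5 * value(-loss) > 0

print(takes_coin_flip(20, 20))  # False -- win $20 / lose $20 is refused
print(takes_coin_flip(30, 10))  # True  -- win $30 / lose $10 is accepted
```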
That same year, evolutionary psychologists were calling loss aversion a design feature — a calibrated response to environments where losing a day’s food could mean death but gaining extra food meant only marginally better odds. Meanwhile, behavioral economists were calling it a violation of rational choice theory. A rational agent’s choices shouldn’t depend on how identical information is described — but under loss aversion, they do. Frame a surgery as having a 90% survival rate and patients choose it; frame it as having a 10% mortality rate and they hesitate.
Three traditions, three accounts, all well-evidenced — and not three ways of saying the same thing. The neuroscientist suggests modulating the neural response. The behavioral economist suggests reframing the choice. The evolutionary psychologist suggests understanding the adaptive function before overriding it. Nobody has a principled way of deciding among them, because the field has never settled a prior question: what is a cognitive bias?
• • •
Since Tversky and Kahneman’s landmark 1974 paper, at least twenty-seven systems for classifying cognitive biases have been proposed, drawn from cognitive psychology, evolutionary biology, behavioral economics, neuroscience, intelligence analysis, and corporate consulting. I have surveyed them all. These systems disagree about questions as basic as what a bias is and when to correct one, but the disagreements cluster around three starting points, and each shapes not just what researchers find but what they think they know.
Form asks what the bias looks like — the observable pattern, the experiment that reveals it. It finds deviations: departures from a rational standard.
Mechanism asks how the bias occurs — what neural or cognitive process produces it. It finds constraints: built-in limits of the system.
Function asks why the bias exists — what problem it evolved to solve, what role it plays in a larger system. It finds adaptations: solutions to problems posed by the environment.
Of the twenty-seven systems, twenty-three start from form or mechanism. Only four begin with function. The imbalance has a cost.
• • •
Medicine shows what that cost looks like. For most of medical history, doctors saw a fever and treated it as the problem. The form was obvious — a deviation from normal body temperature. The mechanism was correctly identified — inflammatory response, an inherent cost of how the immune system mobilizes. The correction seemed clear: bring the number down.
But what is fever for? Elevated temperature kills pathogens and accelerates immune cell activity. Fever is not the immune system failing; it is the immune system working. Suppressing a moderate fever often slows recovery. The deviation was real, the constraint was real, but without the function question, doctors were correcting a process that was helping.
• • •
Consider confirmation bias. The form is well-documented: an analyst who has decided a foreign government is building weapons reads ambiguous imagery as confirmation; a doctor who suspects a diagnosis notices fitting symptoms and overlooks the rest. The mechanism is understood: the mind preferentially seeks, recalls, and weights evidence that fits what it already believes. And the prescription follows from the deviation — teach people to seek disconfirming evidence. Right, right — and, in practice, often wrong. Decades of individual debiasing training have produced modest and fragile results.
Hugo Mercier and Dan Sperber asked the function question in their 2017 book The Enigma of Reason and got a different answer. Human reasoning, they argued, did not evolve for solitary truth-seeking. It evolved for social persuasion — for making arguments in groups, where other people push back. In that context, seeking confirming evidence is not a deviation. It is an adaptation, because the adversarial structure of group argument is what produces good collective outcomes.
The prescription changes: don’t train the individual to think differently; build the structure that makes individual bias productive. After the intelligence failures of the early 2000s, the CIA did something like this. They built red teams: units assigned to argue against the prevailing analysis. The function perspective did not just reinterpret the bias. It suggested the intervention that actually worked.
• • •
Each starting point, taken alone, produces a characteristic failure.
Overemphasizing mechanism produces the explanation trap. Researchers spent decades debating whether Kahneman’s System 1 and System 2 are distinct systems or labels for a continuum. The debate sharpened cognitive theory but delivered little guidance for the courtrooms, clinics, and boardrooms where biased decisions actually happen. Mechanism can tell you how overconfidence arises. It struggles to tell a surgeon what to do about it on Tuesday morning.
Overemphasizing form produces the catalogue trap. The number of named biases now exceeds two hundred, with no principled way to decide when to stop. Nobody can hold two hundred deviations in memory while making a decision — and you cannot anticipate a bias not yet on the list. A catalogue gives you symptoms with no theory connecting them.
Overemphasizing function produces adaptive storytelling. Once you ask what a bias is for, every pattern starts to look designed. Overconfidence becomes a bluffing mechanism; procrastination becomes an information-gathering strategy. Some of these accounts are well-evidenced. Others are just-so stories in evolutionary language — and the design space of plausible ancestral problems is vast enough that a researcher can almost always construct one.
• • •
But these are not three equivalent errors, because form, mechanism, and function are not three equivalent starting points. They have a logical order.
A process doing what it was built to do can look exactly like a malfunction — if you judge it against a standard it was never built to meet. And knowing how the process works won’t tell you which case you’re in. You can describe the biology of fever in perfect detail and still not know whether the fever is the disease or the cure. You can map every pathway of confirmation bias and still not know whether you are looking at a reasoning failure or a persuasion system doing what it was built to do. Only the function question settles this.
Worse, the reversal conceals itself. A research program that starts from form and finds a real pattern, then investigates mechanism and finds a real process, generates internally consistent results at every stage. And a program that generates correct findings at every stage looks like a program with no gap — and has no reason to look for one.
The principle is: function poses the question, mechanism investigates it, form measures the result. Here is what happens when the order is reversed.

Base-rate neglect is the tendency to ignore background probabilities when evaluating a specific case — a doctor who sees a cluster of symptoms and diagnoses a rare disease without considering how much more likely a common explanation is. The finding was robust and replicable. The explanation seemed clear: people judge by resemblance rather than by probability, so a case that looks like a rare disease gets diagnosed as one. The field built decades of debiasing interventions on this — training people to think more statistically.

Then Gerd Gigerenzer asked the function question: what kind of statistical problems did the mind evolve to solve? Not ones expressed as percentages — no one encountered those in the ancestral environment. But natural frequencies — 1 in 1,000 rather than 0.1% — are the format the mind was built for. Present the same information that way, and people reason correctly. The bias wasn't a flaw in the reasoning. It was an artifact of asking the mind to do something it was never designed to do. The findings were never wrong. They were complete enough to be mistaken for the whole picture.
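To see how little the arithmetic changes and how much the reasoning does, here is an illustrative sketch — a screening problem in the style of Gigerenzer's examples, with assumed numbers rather than figures from any study cited here:

```python
# Illustrative screening problem in the style of Gigerenzer's examples;
# the numbers are assumed, not taken from any study cited here.
# Condition: 1 in 1,000 prevalence; the test catches every true case
# and gives a false positive 5% of the time.

def posterior_percentages(prevalence: float, sensitivity: float, fpr: float) -> float:
    """P(condition | positive test) via Bayes' rule on percentage-style inputs."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * fpr
    return true_pos / (true_pos + false_pos)

# The same question in natural frequencies. Out of 1,000 people:
sick_positives = 1                      # the 1 sick person tests positive
healthy_positives = round(999 * 0.05)   # about 50 healthy people also test positive

print(posterior_percentages(0.001, 1.0, 0.05))                # ~0.0196
print(sick_positives / (sick_positives + healthy_positives))  # 1/51, also ~0.0196
```

The computer gets the same answer in either format. Gigerenzer's finding is that people reliably get it only in the second.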
But the cycle also runs the other way. Gigerenzer, the psychologist who most persistently challenged the heuristics-and-biases tradition, could not have reframed the problem without the deviation data that Kahneman and Tversky spent decades collecting. Mercier and Sperber’s reinterpretation of confirmation bias required the very research it overturned. The cycle comes around. But the direction you enter it determines what you can see from inside it.
This is not just a methodological point. It is a feature of how language works when a field coalesces around a paradigm. The word bias is not a neutral description — it is a verdict. It means error: departure from a standard the speaker assumes is correct. To apply it to a cognitive pattern is to have already answered the function question, whether or not anyone has asked it. Once the vocabulary carries that verdict, everything built on it inherits the assumption. The mechanism becomes an account of malfunction. The form becomes a measurement of failure. The evolutionary psychologist who calls loss aversion a “design feature” and the behavioral economist who calls it a “violation” are not disagreeing about the data. They are disagreeing about the verdict. This essay uses the same word, because there is no neutral alternative and because the point is easier to see from inside the vocabulary than from outside it.
The consequences extend beyond academia. The nudge movement — auto-enrollment in pension plans, smarter defaults in healthcare, simplified tax forms — has produced large welfare gains. But every nudge is built on the assumption that the bias it targets is a deviation, not an adaptation. If status quo bias is sometimes a precautionary response to irreversible losses — a worker declining a new retirement plan not from inertia but from earned distrust — then overriding it might trade a measurable short-term gain for something harder to measure. Treating every heuristic as a flaw to be engineered away is its own kind of overconfidence.
If the function question is so important, why does the mainstream defer it? The answer is structural. Form produces clean, publishable findings: set up a normative standard, show people departing from it, and you have a paper. Mechanism produces fundable research: brain imaging studies, neural pathway mapping, computational models of cognition. Function often produces a negative result — “this is not actually an error” — which is harder to publish, harder to fund, and harder to build a career on. The discipline’s preference for form and mechanism is not a failure of reasoning. It is an adaptation to the incentive structure of academic science. The field that studies cognitive biases has a cognitive bias, and it is the one the field is least equipped to see.
• • •
That diagnosis is not a call for revolution. The deviation frame and the adaptation frame are in genuine tension — not the kind that more data will resolve. You cannot fully characterize loss aversion as a departure from rational choice and simultaneously characterize it as a calibrated response to ancestral environments. But the function question is foundational, because it is the only one that tells you whether the other two are worth asking.
What the discipline needs is a norm of priority. Ask the function question first. Before measuring a deviation, ask what the process is doing. Before proposing a correction, ask what the correction will disrupt. Every debiasing intervention, every taxonomy of cognitive error, every corporate training program built on the science of irrationality should be expected to declare which question it starts from and what it cannot see as a result.
A journal reviewing a debiasing study could require authors to specify whether the target bias is being treated as a deviation, a constraint, or an adaptation — and what the intervention leaves unaddressed as a consequence. A surgeon considering a checklist for overconfidence should be able to say whether the checklist treats overconfidence as an error to be reduced or as a signal to be redirected — and what is lost by choosing one frame over the other. Bias research prescribes this self-awareness for everyone else — for doctors, judges, investors, voters. It has not yet turned the lens on itself.
• • •
Think back to that fMRI scanner. The neuroscientist, the behavioral economist, and the evolutionary psychologist are all looking at the same brain responding to the same gamble. Each is right about what they see. And each is unable to see what the others see, because the framework that makes one pattern visible makes the others disappear. Kahneman himself acknowledged that decades of studying biases had not made him better at avoiding them, and that the best debiasing strategy was institutional structure rather than individual willpower. He was more right than he knew. For fifty years, the institutions that study bias have been asking the wrong question first — and the answers they have found are not wrong. They are worse than wrong. They are incomplete in ways that look like knowledge.
That is what a cognitive bias is.
A Note on the Survey
This essay draws on a survey of twenty-seven classification systems for cognitive biases. Of the twenty-seven, fourteen start from form, nine from mechanism, and four from function. Among form-based systems, the most influential include Thaler and Sunstein’s behavioral economics framework, the Bayesian departures tradition, Dimara et al.’s task-based taxonomy, and Pohl’s taxonomy of cognitive illusions. Among mechanism-based systems: Tversky and Kahneman’s heuristic origin model, Kahneman’s dual-process theory, Stanovich’s tripartite model, Korteling, Brouwer, and Toet’s neural network framework, and Heuer and Pherson’s intelligence analysis model. The four function-based systems are Haselton and Buss’s error management theory, Haselton and Nettle’s threefold evolutionary taxonomy, the broader adaptive/evolutionary tradition, and Oreg and Bayazit’s motivational endpoint model.
Acknowledgment
I owe much of how I think to my friend and teacher Forrest Landry. His work — particularly An Immanent Metaphysics — gave me the tools to build the ideas in this essay.
***
Samuel Dunlap is an author, advisor, and former couples therapist based in New York City and Aspen, Colorado. His work explores a central question: How can we make more effective choices in building healthier relationships with ourselves, our work, and the people we most care about?
***