Empty Brains and the Fringes of Psychology

by Rebecca Baumgartner

Photo by Milad Fakurian on Unsplash

There’s a fascinating figure wandering aimlessly around the halls of psychology on the internet, and his name is Robert Epstein. 

Epstein is a 69-year-old psychologist who trained in B.F. Skinner’s pigeon lab in the 70s and now works at the American Institute for Behavioral Research and Technology in California, a nonprofit supporting a theory of the brain that supposedly “does not rely on metaphor.” Despite his credentials – he holds a doctorate from Harvard – both his nonprofit’s website and his own professional website give you the unsettling feeling that he exists on the fringes of psychology, and science more generally. This feeling is confirmed when you read anything he’s written.

One of the most vilified pieces of writing I’ve ever read was his piece “The Empty Brain,” which first appeared in Aeon back in 2016, but didn’t cross my desk until recently. In a nutshell, the article is about Epstein’s claim that the brain does not process information or contain memories – he believes these concepts are merely metaphors borrowed from computing and do not accurately describe what the brain does. 

What brains do, in Epstein’s view, is change in an orderly way in response to input and allow us to relive an experience we’ve had before – nothing so mechanistic as “processing” and “storing” things! (It’s never made clear how this new formulation differs from processing information and storing memories.)

He also fails to realize that under that formulation, you could just as easily say that computers don’t store anything; the hard drive has simply been changed in an orderly way in response to input. Apparently our computers are not computers either.
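To see how little work the reformulation does, here is a minimal sketch (my own illustration, not anything from Epstein's article): by his logic, the snippet below never "stores" anything – the disk is merely changed in an orderly way in response to input, and that orderly change later lets us "relive" the bytes.

```python
import os
import tempfile

# "Storing" a value and "changing the medium in an orderly way in
# response to input" describe the exact same physical event.

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.write(b"dollar bill")   # input arrives; the magnetic/flash medium changes

with open(path, "rb") as f:
    data = f.read()           # the orderly change lets us recover the input

os.remove(path)
print(data == b"dollar bill")  # True
```

If "orderly change in response to input" is enough to deny that the file was stored, the word "storage" has simply been renamed, not eliminated.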

The article is full of cringey moments like this, including a classroom demonstration so logically flawed, yet of which he is so clearly proud, that you can’t help feeling embarrassed on his behalf. The demonstration goes like this: he asks a student to draw, from memory, a detailed picture of a dollar bill on the blackboard. He then covers up this drawing, pulls out a dollar bill, and asks her to draw it again, using the real bill as a model. He then reveals the first drawing, and to his ongoing amazement, “the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.” This wouldn’t make sense, he claims, if our brains had memories! If they did, the student could simply have retrieved the image of a dollar bill from memory and reproduced it in exquisite detail!

With a flourish, Epstein drives the final nail in the coffin of the computational basis of the mind: “a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.”

The reasons this conclusion is wrong are too numerous to get into here, but at the very least, it’s worth pointing out that our brains prioritize what to notice and remember based on its relevance to us – this is the nature of attention. Of course our recollections aren’t perfectly accurate. They’re better than that; they’re useful.

If the same demonstration had been performed with an employee of the Federal Reserve Bank, or a historian of U.S. currency, or the graphic designer who formatted the layout of the dollar bill, or a counterfeiter, or someone with a photographic memory, they would undoubtedly be able to reproduce a much more detailed image of the bill without any help. That’s because their brains have prioritized noticing and remembering more details about what a dollar bill looks like, something the average person has no need to do.

More importantly, if Epstein truly wants to do away with the idea of mental representations, he needs to be able to explain our ability to notice that the bad drawing is bad in the first place. The only way we know that it looks “horrible” is that we know what the right version should look like, even without a real dollar bill present.

Photo by Hal Gatewood on Unsplash

Based on this very shaky foundation, Epstein asks a question that, if phrased more intelligently and asked in good faith, could actually be a real question (“How does the brain form and store memories?”). Instead, we get this:

“The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons [note: nobody has claimed this] is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?”

This is a gauntlet not worth picking up, as it rests on a fundamental failure to understand the distributed nature of the nervous system and the way that brain functions occur at the level of neuronal populations over time and in context-dependent ways, rather than at the level of an individual neuron. A “memory” is not a thing that lives somewhere; it’s a process: a pattern of connections involving millions of neurons distributed across the brain. And it’s a process that is more akin to a re-enactment than a retrieval of a stored facsimile. The fact that this memory process is expressed as electrochemical signaling rates is what we would expect from a processor that uses electrochemical signals to do anything at all. It’s hard to tell if Epstein genuinely doesn’t know this or is merely being perverse.
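The idea that a memory can be distributed across connections, with no single unit "containing" it, is easy to demonstrate with a toy model. The sketch below uses a Hopfield network – a standard textbook model, offered here purely as an illustration of distributed storage and recall-as-re-enactment, not as a model of actual brain tissue:

```python
import numpy as np

# Minimal Hopfield network: a "memory" is a pattern of connection
# weights, not a thing sitting at any single location.

N = 64
p1 = np.ones(N)                       # first stored pattern: all +1
p2 = np.tile([1.0, -1.0], N // 2)     # second pattern, orthogonal to p1

# Hebbian rule: each memory is smeared across the entire weight matrix.
W = np.outer(p1, p1) + np.outer(p2, p2)
np.fill_diagonal(W, 0)                # no self-connections

def recall(state, steps=5):
    """Recall is a dynamical re-enactment, not a lookup: the units
    update until the network settles into a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# Corrupt p1 by flipping 10 of its 64 units...
noisy = p1.copy()
noisy[:10] = -1

# ...and the network reconstructs the original from the degraded cue.
print(np.array_equal(recall(noisy), p1))  # True
```

Notice that zeroing out any one weight barely perturbs the recall – which is exactly why degrading one neuron (or one synapse) does not delete one memory. Asking "where in the matrix is pattern p1 stored?" is Epstein's "how and where is the memory stored in the cell?" question, and it has no answer because it is the wrong question: the memory is the whole pattern of connections.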

Such a resplendent failure to understand the nature of cognitive processes reminds me of the scene from Amadeus where the Emperor criticizes Mozart’s piece for having “too many notes” and advises him to “cut a few.” Mozart’s indignant reply (“Which few did you have in mind, Majesty?”) highlights how absurd it is to take a modular view of an integrated system. Remove too many notes, and the piece would no longer be itself, but it’s impossible to say which notes make the piece what it is, or which combinations of notes would be fatal to its coherence as an object, were they removed. The integrity of the piece does not exist at the note level.

You are able to maintain a coherent autobiographical sense of self throughout your life despite the turnover in your body’s cells. How exactly this happens is worth investigating and is an active realm of research on consciousness, but the point here is that the degradation of one particular neuron doesn’t cause you to lose a particular memory, because that is not how memory works. Epstein’s attempted defense of materialism rests on a misunderstanding of how the brain functions, which leads him to the insane position that if the brain is made of neurons and all its functions can be explained in terms of neuronal activity – but there isn’t a static one-to-one correspondence between neurons and memories – then memories must not exist.

Photo by Alexandre Debiève on Unsplash

In short, this “Empty Brain” piece is so universally reviled and has been so thoroughly discredited that one of the many rebuttals that followed it was titled “That Fucking Empty Brain Article.” Hypothetically, Epstein’s stance could have been valid, in the sense that our brains are not much like the PC or smartphone you’re reading this on. But those are only one type of computer. The confusion was put to rest in a bracingly sane article titled “The Brain-Computer Metaphor Debate Is Useless”:

“…the question of whether brains are computers (or like computers) is really a matter of semantics: it depends on which definition you are using. If you adopt the definition of ‘computer’ based on how computer scientists use the word (to refer to physical machinery that can theoretically engage in any decidable computation), then brains are literally computers. Alternatively, if we adopt the definition of ‘computer’ based on the usage from outside of computer science (to refer to devices that sequentially and discretely process inputs in a passive manner), then brains are not computers, and at best, computers serve as a weak metaphor for only a limited slice of human cognition.”

So we can safely consign the “Empty Brain” idea to the dustbin. 

However, there’s a strange footnote to this story. While that article seems to have been conceived as a defense of materialism – It’s all just neurons! There is no ghost in the machine! – Epstein has more recently written a piece that takes up the same theme, but advocates the complete opposite.

Last year, he wrote an article titled “Your Brain is Not a Computer. It is a Transducer.” “What if evolution, at some point,” he asks, “produced a special kind of transducer that could shift signals from the physical world as we know it to a very different kind of world?” Epstein believes this idea of the brain as transducer can explain everything from dreams, near-death experiences, and déjà vu to hallucinations, schizophrenia, and parallel universes.

In a disconcerting callback to his “Empty Brain” piece, he writes at one point: “The main reason we should give serious thought to such a theory has nothing to do with ghosts. [This is the sort of statement that just fills you with confidence, isn’t it?] It has to do with the sorry state of brain science and its reliance on the computer metaphor.” He then suggests a variety of brain areas as candidates for “transduction sites,” including the pineal gland, based on the fact that Descartes identified this gland as the seat of the soul in the 1600s. This piece, needless to say, is doing his reputation as a scientist even fewer favors than the “Empty Brain” article.

Perhaps the most frustrating part is that there are interesting ideas and theories adjacent to Epstein’s claims – things like embodied cognition and emergence – that are worth discussing. Neuroscience as a field is undergoing a transformation in how we talk about and represent the brain, with an increasing recognition that the cleanly delineated categories we have traditionally used may not accurately reflect the way our brains work. There is a valid critique to be made of the metaphors we use in cognitive science. But Epstein isn’t engaging with anything as reasonable as that. It’s not clear what he does with the massive amounts of evidence from neuroscience that contradicts virtually everything he’s written recently about the brain. If that evidence isn’t enough, it’s hard to imagine what could ever convince him. And if the answer is “nothing,” then his position is one of dogma, not science.