Escape From Brain Prison III: Could Artificial Brains be Conscious?

by Oliver Waters 

Part I of this series argued that transferring your personal identity to an artificial brain should be possible. It’s one thing, however, to preserve the informational content of your identity, and quite another for that content to be conscious. It would be a real shame if your new artificial self was getting about town as a zombified version of you: spending your wages, high-fiving your friends – all with no inner subjective awareness.

In her book Artificial You (2019), the philosopher Susan Schneider entertains the possibility that entire alien civilisations may have taken the reckless gamble of transitioning to artificial brains and in the process inadvertently killed off their conscious minds. We would obviously prefer to avoid this nightmarish fate, and our best defence is a proper scientific theory of how consciousness works.

Many doubt that such a theory will arrive any time soon, with some claiming that consciousness is simply beyond our capacity to ever understand. There is also an intuitively compelling and popular notion that a scientific understanding of consciousness is impossible because we only have direct access to our own conscious minds. This view is largely motivated by a mistaken epistemology: namely, an ‘empiricist’ view that the scientific process consists of building up a theoretical understanding of the world out of the components of raw, direct sensory inputs. If you think of scientific knowledge as emerging this way, as a systematic reorganisation of what is available to your senses, then the realm of other conscious minds must forever remain out of reach.

But as the philosopher Karl Popper pointed out, ‘sensory inputs’ actually have no meaning unless they are part of a theoretical construct: observation is inextricably ‘theory-laden’. The theory doesn’t have to be a formal scientific theory, by the way. Your intuitive conception of what is going on in front of you (perceiving a sunset, for instance) counts perfectly well as ‘theorising’ in this context.

The fallacy of empiricism fools us into thinking that we have privileged and unique access to the contents of our own conscious minds. You may think you know for certain what it is like for you to experience the world, just as Descartes claimed hundreds of years ago. But the reality is that whenever you think about an experience, you are representing the content of that experience with your current thought.

For instance, I might reflect on how painful my visit to the dentist was yesterday. That state of reflection is not an infallible portal into what the experience was. It is a sketch – a trace. It can be a highly vivid and accurate one, but it is a fallible representation of the phenomenon, not the phenomenon itself.

What if you focus on the moment being experienced right now? Well, you are still ‘observing’ abstract objects with properties. For instance, when you look at an apple, you are not directly observing raw sensory data. You are seeing ‘An Apple’, composed of features like juiciness and redness that your mind has constructed. You are also noting ‘affordances’ of the apple – namely what you could or couldn’t do with it. There is a structure to your experience that is not a passive record of photons hitting your retina.

The theory-mediated nature of our access to our own mental states means that it is not different in kind from our access to the mental states of others. Indeed, recent scientific work suggests that the same neural faculties underpin both our ‘mindreading’ (representing the mental states of other people) and ‘metacognition’ (representing our own mental states) abilities.

Of course, we often have more reliable access to our own mental states because there are fewer barriers in the way. But all perception is fallible, even that of our own minds.

This is obvious in our daily lives, where we often have greater insight into the mental states of friends or family than they do themselves. You can easily become aware that your friend is angry in a discussion before they do. A parent might understand better than their teenaged child what they are feeling after their first heartbreak. The flurry of intense emotion might be incomprehensible to the young romantic – such that they really don’t know what they’re experiencing. We’ve all felt this throughout our lives – being baffled by our own feelings. It’s why we talk about the need to ‘process’ an emotional or traumatic event.

The way out of the empiricism trap is to follow Popper in conceiving of knowledge as consisting of creative conjectures about the unobservable reality that is behind what is currently observable. All of our most powerful scientific theories imply the existence of things that are beyond our capacity to observe: from the events occurring at the centre of stars, to the structure of space-time itself. What we can observe are the measurable implications of these phenomena, which we can use to subject our theories to rigorous empirical testing.

The same is true for consciousness as a natural phenomenon. We will never solve consciousness entirely, just as we will never truly solve ‘gravity’. Our understanding of how both phenomena work will just keep improving indefinitely.

The philosopher David Chalmers famously undermined this rosy picture of progress with the claim that consciousness is a uniquely ‘hard problem’ for science. We may well make progress on the ‘easy problems’ of explaining the structure and function of our physical brains. But it’s conceivable, he contends, that we could explain all of these neural phenomena yet still lack an explanation for why they are accompanied by consciousness.

I suspect Chalmers is too demanding of what a scientific theory must do, while not being demanding enough of his own imagination about what future engineering efforts might achieve. If you were to read a truly satisfying scientific explanation of consciousness tomorrow, that knowledge alone wouldn’t automatically grant you access to all possible experiences, in much the same way that understanding Newtonian physics doesn’t automatically provide you with a functional rocket-ship.

The ‘rocket-ship’ for consciousness would be a virtual reality simulator operating on our future artificial brains that authentically demonstrates ‘what it is like’ to be a different kind of conscious agent. Though the simpler the creature gets, the less interesting this experience will be. Take the notorious bat that philosopher Thomas Nagel wondered ‘what it would be like’ to be. You might think it would be fun to experience flying around using echolocation and catching bugs. But you’re really imagining magically compressing your own brain down to the size of a bat’s. Our human consciousness is richly informed by our causal and semantic understanding of the world, afforded by our huge neocortex. In a real simulation, you would just have the simpler mind of a bat. This means there would be no internal monologue while you’re soaring around – ‘wow, this is so cool!’

And you wouldn’t have a cool story to tell your friends later, because as far as we can tell, bats don’t have the kind of structured, episodic memories that we have. Certainly not the kind that can be intentionally accessed at will, and translated into propositional language over beers at the pub.

As neuroscientist Anil Seth puts it, the ‘real problem’ (as distinct from the ‘hard problem’) of consciousness is all about refining our understanding of which mental states correspond to which physical implementations. So, let’s get on with making progress on the real problem: what kinds of physical mechanism are conscious, and why?

Computational mechanisms sure seem like a good place to start, given our conscious thoughts tend to involve information processing. But of course, just because we replicate a computational function of our minds in a computer doesn’t mean we replicate all of the physical properties of our minds that may be necessary for consciousness to come along for the ride.

By analogy, we represent the causal functions of weather events like rain and wind inside computer programs, but these programs are neither wet nor breezy. Is consciousness like ‘wetness’? Does it require a particular physical process to take place?

Here it’s worth introducing David Marr’s three levels of understanding a computational device. The first is the computational level: this is the problem that is being solved, perhaps finding the sum of two numbers. There are then multiple specific methods that one might use to solve this problem, thus the second algorithmic or representational level. The third level is the hardware implementation level: whether this algorithm is performed by a pocket calculator or your mushy, lazy human brain.
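
To make these levels concrete, here is a minimal sketch in Python (my own illustration, not Marr’s): the same computational-level problem – summing two numbers – solved by two different algorithms, with the implementation level being whatever hardware happens to execute the script.

```python
# A toy illustration of Marr's levels.
# Computational level: the task is "add two numbers".
# Algorithmic level: two different procedures that accomplish it.
# Implementation level: whatever hardware happens to run this script.

def add_directly(a: int, b: int) -> int:
    # Algorithm 1: lean on the machine's built-in arithmetic.
    return a + b

def add_by_counting(a: int, b: int) -> int:
    # Algorithm 2: increment one operand b times, the way a child might count on fingers.
    total = a
    for _ in range(b):
        total += 1
    return total

# Same computational-level answer, reached by different algorithms:
assert add_directly(24, 473) == add_by_counting(24, 473) == 497
```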

Obviously, the fact that you are conscious while calculating 24 + 473 doesn’t mean that your Excel spreadsheet is also conscious when it does so. The question is: what exactly is different at the representational or implementational level of information processing that accounts for this? Could your computer be conscious if it solved this problem using the exact same informational process you did, or would it need to be made of the same hardware (or wetware, as it were) too?

We encountered the principle of computational universality in Part II in relation to the prospect of machine superintelligence. It follows from this that any suitably programmed Universal Turing Machine, with sufficient memory and compute speed, can become generally intelligent to any arbitrary degree. The physicist David Deutsch speculates on this basis that we therefore must already have the requisite computer hardware to instantiate an AGI now – we simply lack the knowledge of what program would be required.
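
As a rough sketch of what universality means in practice (my own toy example, not Deutsch’s), a single generic interpreter can run any Turing machine handed to it as data – the particular machine below, a unary incrementer, is arbitrary.

```python
# A toy illustration of computational universality: one generic interpreter
# can run *any* machine supplied to it as data. The interpreter doesn't care
# which program it is given.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate an arbitrary single-tape Turing machine.

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), 0 (stay) or +1 (right). Halts on state 'halt'.
    """
    tape = dict(enumerate(tape))  # sparse tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example machine: append a '1' to a string of 1s (unary increment).
increment_rules = {
    ("start", "1"): ("start", "1", +1),  # scan right over the 1s
    ("start", "_"): ("halt", "1", 0),    # write a final 1 and halt
}

print(run_turing_machine(increment_rules, "111"))  # -> "1111"
```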

The interesting question is whether such an AGI could process information identically to us (which seems possible) yet lack any consciousness whatsoever. To build up to this question, we should first ask whether consciousness plays a necessary causal role in our own computational capacities. Or is it just an inert ‘epiphenomenon’ – akin to the smoke coming out of a steam train’s chimney?

Empirically, it seems that consciousness is necessary for certain cognitive representations to occur. Neuroscientist Stanislas Dehaene and others surveyed the research and identified three main informational functions of consciousness.

The first is durable and explicit information management. Studies consistently find that non-conscious information decays exponentially in the brain, whereas consciously remembered information can reliably be retained. Secondly, consciousness is required for novel combinations of operations. Without consciousness, only automated and routine actions are possible, not complex planning or metacognition.

Thirdly, consciousness is required for generating intentional behaviour. ‘Blindsight’ patients are people who have lost the conscious ability to see (usually due to damage to their visual cortex) but can still navigate obstacles due to unconscious visual processing. Studies show however that they cannot initiate any actions to manipulate objects in their visual field – they can only react.
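
As a purely illustrative toy model of the first of these functions (my own sketch with arbitrary numbers, not a model from Dehaene’s work), unattended information can be pictured as an activation trace that decays exponentially, while consciously maintained information is periodically refreshed:

```python
# A toy illustration (not a model from the cited research): an unattended
# memory trace decaying exponentially versus one actively refreshed by
# conscious rehearsal. The time constant and refresh interval are arbitrary.
import math

TAU = 2.0  # hypothetical decay time constant, in seconds

def unattended_trace(t):
    # Passive trace: activation falls off as e^(-t / TAU).
    return math.exp(-t / TAU)

def rehearsed_trace(t, refresh_interval=1.0):
    # Actively maintained trace: the decay clock is reset every refresh_interval seconds.
    return math.exp(-(t % refresh_interval) / TAU)

for t in (0.0, 2.0, 6.0):
    print(t, round(unattended_trace(t), 3), round(rehearsed_trace(t), 3))
# After 6 seconds the passive trace has fallen to ~0.05 of its initial value,
# while the rehearsed trace never drops below ~0.61.
```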

These findings do not rule out that artificial systems might be able to perform all these functions without being conscious. But they do suggest a tight link exists between consciousness and certain cognitive capacities in the only known systems that currently possess such capacities.

And while it is true that any informational transformation can be described as an algorithm performed by any old universal computing machine, it is a different question whether that algorithm is capable of causally producing that transformation in a reasonable time on a given piece of hardware, within the physical constraints of our universe.

Consider that to produce the desired informational output, an algorithm must have access to all relevant input data as well as sufficient compute power. This means it may well be the case that, using today’s computational systems, it is physically impossible to ‘run’ consciousness. It may well require more energy than is available in the known universe to integrate the required data and to run the ‘conscious mind’ algorithm on a von Neumann architecture with conventional electronic circuits.

This brings us to the obscure but long-running scientific tradition proposing that conscious thought requires exploiting the special physical properties of electromagnetic (EM) fields.

This view, developed more recently by Johnjoe McFadden and others, purports to address a key mystery of consciousness called the ‘binding problem’. How can bits of inanimate matter bumping into each other produce a coherent, subjective perspective on the world? When you examine your current conscious experience, you notice that it consists of a unified sensory complex, with distinct ‘gestalt’ representations. This phenomenon just does not seem like something that can be built out of atomic bits of matter. It does, however, seem more like a kind of structured, energetic field, not too dissimilar to the electromagnetic fields that we know pervade the universe and our own minds.

Crucially, EM fields can integrate informational content about the world in time and space in a way that circuits of neurons interacting via neurochemicals cannot. The EM fields emanating from thousands of neurons automatically integrate to form a single unified field, via constructive and destructive interference. At a given moment in time, this field can represent the vast variety of information that makes up the current scene in your conscious mind, and it is accessible to every neuron in your brain, much as a TV signal is accessible to every television in a country.
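
Here is a toy numerical sketch of that superposition point (my own illustration, not McFadden’s actual model): the contributions of individual sources simply add, reinforcing where they are in phase and cancelling where they are not.

```python
# A toy sketch of how fields from many sources superpose into one composite
# signal: contributions add, interfering constructively when in phase and
# destructively when out of phase. All values are arbitrary units.
import math

def neuron_field(amplitude, frequency_hz, phase, t):
    # Field contribution of one "neuron" at time t.
    return amplitude * math.sin(2 * math.pi * frequency_hz * t + phase)

# Three hypothetical neurons oscillating at 40 Hz (gamma band), two in phase
# and one half a cycle out of phase.
neurons = [(1.0, 40.0, 0.0), (1.0, 40.0, 0.0), (1.0, 40.0, math.pi)]

t = 0.00625  # a quarter of a 40 Hz cycle, where the in-phase contributions peak
total_field = sum(neuron_field(a, f, p, t) for a, f, p in neurons)
# The first two contributions reinforce each other; the third cancels one of
# them, leaving a net field of ~1.0.
print(round(total_field, 3))  # -> 1.0
```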

Compare this to neurons firing in the primary visual cortex interacting chemically with others in the frontal cortex and thalamus. Nowhere in this picture of the brain is information represented holistically. Nowhere is it bound together simultaneously in the neat package with which we’re so familiar.

The idea is that by exploiting electromagnetic field effects, the brain also enhanced its computational capacity, increasing its evolutionary fitness. Recent experiments by neuroscientists Earl Miller and Dimitris Pinotsis suggest that higher-level parts of the brain can coordinate ensembles of lower-level neurons via synchronised waves of neural activity. EM fields seem to play a crucial role in forming and consolidating memories, acting as control parameters ‘enslaving’ the buzzing complexity of lower-level neural activity and allowing the transfer of information from electric fields back to neurons.

This would overcome what is called ‘representation drift’: the finding that individual neurons seem to come and go from being part of a cognitive representation. Hardly surprising, being as they are fickle little creatures with their own agendas. Harnessing entire networks of neurons with different frequency brainwaves means that a single neuron going offline would not disrupt an entire semantic representation.
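
To see why spreading a representation across a coordinated ensemble buys this robustness, here is a toy sketch (my own illustration, not drawn from the cited experiments):

```python
# A toy sketch of why a representation carried by a whole ensemble is robust
# to any one neuron dropping out -- the kind of redundancy that field-level
# coordination could provide.

def population_readout(activities):
    # Read the represented value as the mean activity of the ensemble.
    return sum(activities) / len(activities)

ensemble = [0.8] * 100            # 100 neurons jointly carrying one representation
before = population_readout(ensemble)

ensemble[0] = 0.0                 # one fickle neuron drops out of the ensemble
after = population_readout(ensemble)

print(before, after)              # 0.8 0.792 -- the readout barely moves
```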

This collective neural electromagnetic field is not some magical ether operating beyond the constraints of physical law. It’s just not made up of particulate matter. This means that if the EM field theory of consciousness is correct, many differing philosophical views may be reconciled. The dualists would be right that consciousness is not material, but the ‘physicalists’ would also be correct that consciousness is not supernatural. It’s only the most hardcore of computational functionalists, who think their current laptops could be conscious, who may be disappointed. Such devices are designed to ignore electromagnetic field effects, not exploit them.

The optimistic upshot for our purposes is this: even if it turns out that there are irreducibly physical requirements for replicating our full conscious minds, the engineering feat of mind-transfer (with all its benefits) will still be possible, so long as we build artificial brains that respect these physical requirements. We may not be able to upload our minds to ‘the cloud’ in its current form, but then, it was always a strange desire to be reduced to a bunch of bits scattered across a server farm. We can and should dream bigger.