by Robyn Repko Waller
The case for the illusion of conscious agency from neuroscience is far from a straightforward conclusion.
Last month I introduced a curious disconnect in public perception of neurotechnology. Whereas reports of brain-computer interfaces (BCIs) inspire celebration of expanding agency, the public seem wary that neuroimaging exposes the illusion of conscious agency. The curiosity is that both use neurotechnology to decode motor intentions from the same brain regions of interest. If one threatens our conscious control as human agents, doesn’t the other? If one is a celebration of human agential control, isn’t the other?
I suggested there that, to the contrary, these like research programs ought to be treated alike: either both applications of neurotechnology deal in diminished agency or neither does. I ended that discussion with a promissory note to defend my insistence that such research doesn’t threaten our control as agents. Here I’ll briefly outline the case, as it’s made, for the illusion of conscious will from neuroscience. Then I’ll argue why we ought to strike a more optimistic note about our scientific understanding of humans as acting consciously and freely (elsewhere I’ve laid out more detailed discussions of the science of free will).
I’ve elaborated frequently in this column on the sense of agency and free will that most of us believe we enjoy. I’ll rehearse those important notions again here. The narrative of human agency is not simply that we act in goal-directed ways, actively effecting change rather than merely being subject to whatever happens to us. Humans (and perhaps other complex animals) don’t just forage about locating resources or evading predators, or so we contend. It seems we exercise a much more meaningful kind of agency. That is, free will is not just that I control my bodily movements, but that I exercise meaningful control over what I decide to do.
Indeed, we humans are serious planning agents. We make an astounding number of decisions a day, from seemingly innocuous ones, like deciding to pick up the coffee cup, to more important ones, like deciding to start a daily exercise routine now, to the perhaps too-frequent anxiety-inducing, life-defining decisions, such as which job to take, which medical treatment to pursue, whether and when to start a family, and so on. Moreover, we take it that it is that decision to start the exercise routine now, a decision I am aware of, which drives me to leave the house and to run about intentionally for a few miles.
In this sense, the decisions that I am aware of speak for me, the conscious agent, who I am and how I impact the world, either locally or, perhaps for some folks, in a more profound global way. These practical decisions about what to do are intentions I form — they indicate my commitment to certain actions in the future. Moreover, these intentions guide my bodily movements when I act. Further, not only do we experience ourselves as in control of intentional movements issuing from our intentions, but others hold us accountable both personally and legally for our actions, in part on the basis of our intentions and values expressed in action. Hence, conscious intentions — for our purposes, intentions that we are aware of having — play a critical role in the narrative of human free and moral agency.
So what can neuroscience tell us about human agency? And why does it seem to challenge the narrative we’ve traversed here?
Neuroscience of action can tell us a great deal about the neurological mechanisms underpinning human decision-making and intentional action. Granted, the kinds of decisions and actions in the lab aren’t arduous decisions between, say, this job offer in Chicago and that competing offer in Denver. And they aren’t typically everyday in-the-wild matters like deciding when to start your run. But (historically) neuroscientists approximate voluntary agency by studying endogenously generated actions. These are actions for which the cue for acting comes not from the experimenter or external stimuli but from within the participant. For example, the participant decides when or whether to press a button or, instead, which button to press.
Using these endogenously generated paradigms, neuroscientists have discovered neural activity that seems to encode action plans prior to self-initiated bodily movement in the lab. That much is not new. For instance, last month I described how as far back as the 1960s neurologists Kornhuber and Deecke demonstrated that before simple self-initiated bodily movements in the lab (such as finger movements and button presses), there is a bilateral build-up of electrical motor activity detectable with electroencephalography (EEG). This pre-movement build-up, the readiness potential, reflects the brain’s preparation for upcoming movement and plausibly encodes action plans. That finding in itself did not cause any commotion among philosophers and the public as to the veridicality of our experience of free will. Indeed, BCI researchers today use neural activity in these motor regions and connected ones to “read” the motor intentions of users.
What did cause quite a lot of commotion, however, was Benjamin Libet’s extension of this work. One interpretation of the Kornhuber and Deecke studies is that the measured neural activity stands for, or represents, a decision or intention to move, neurally realized. We tend to think, qua the narrative of human agency, that I, the agent, via my conscious deliberation and conscious intentions, control how I act. Of course the brain prepares to move, often in ways I cannot access. But in cases of intentional action I, the agent, tell my body to move via my conscious intention; my conscious intention kicks off motor preparation.
Libet and colleagues aimed to test this folk narrative using endogenously generated movements in the lab. If the folk are right, our conscious intentions should occur prior to, or simultaneously with, the neural activity representative of motor preparation. What they found was, to many, startling: preparatory neural motor activity precedes, by about a third of a second, the time at which participants report having been aware of an intention to move. Libet and others, including some in the popular press, concluded from the results that conscious intentions lag behind the unconscious intention and preparation to move. As such, on this view, conscious intentions to act don’t initiate action preparation. The conscious agent is diminished in power (and possibly disappearing, if one fears that neural causes of action are incompatible with free agency).
The basic finding that neural activity encodes action preparation prior to reported awareness of the intention to move has been replicated using EEG, fMRI, and intracortical methods. Accordingly, pessimism about agency has proliferated around these findings, although not all scientists and journalists have been convinced that neuroscience reveals our lack of conscious agency. Some skeptics take the temporal priority of nonconscious processes over conscious ones to be the threat, while others worry that the reported conscious intentions don’t really do anything, in the sense that it’s the nonconscious stuff that’s causing movement. According to these theorists, conscious intentions are just bodily bookkeepers, notifying the person-level system that the upcoming movement is an action of their body. If free will requires that the agent be in control of her actions, and if the conscious intention is the agent’s representative, then free agency is threatened. And if moral agency requires free agency, we’re off to the races of tearing down large swaths of moral and legal responsibility.
In what remains here I hope to begin to reassure you that meaningful agency is not under threat. In fact, once we move beyond the initial Libet study, we cannot be at all certain that the measured neural activity is a nonconscious intention, or that conscious intentions always follow action preparation and are powerless.
First, one ought to be skeptical that the localized neural activity measured in such studies is the intention to act in the brain. Although some neuroscientists make claims about localizing intentions neurally, it is implausible that a relatively restricted set of brain regions could play all of the rich roles of practical intention. Rather, as others have argued, intention is represented in a distributed fashion via a network of brain regions. Some areas, such as the supplementary motor area (SMA) and pre-SMA implicated in the Libet study, carry out the work of pre-movement intentions; other regions, such as the premotor area and primary motor area, represent the actor’s intention or motor representations in more finely specified ways as she is acting. Still other regions, including the prefrontal cortex, encode agent-level conscious aspects of intention.
Indeed, further studies have found that some of the proposed sites of pre-awareness intention to move aren’t really movement-specific in function, but may instead reflect the agent’s more domain-general intentional thinking through a task. The brain activity measured in Libet-like motor tasks is similar to that underpinning the cognitive activity of subjects who complete mental math problems or silently judge the color of some stimulus.
The upshot is that the picture of the brain’s deciding what to do and initiating movement in one tight localized region is too simplistic. And thus, the case for unconscious intention initiating action and preempting consciousness becomes murkier. Further, there’s independent research that suggests that agents’ conscious plans to do tasks days or weeks later are in fact causally effective at getting them to successfully carry out those tasks as planned. Agents who make plans as to when and where they will, say, study for the big exam are much more likely to complete the task than those who only commit to study for the exam in general. That is, some conscious intentions have been shown to play a role in action production.
Finally, many philosophers have voiced the concern that the kinds of decisions and actions that have been studied historically in neuroscience aren’t the kinds of decisions and actions we really care about. Human meaningful agency encompasses navigating harrowing moral choices and acting under uncertainty, often with serious consequences on the line. Agents act in line with their values and may be biased or incentivized to act in certain ways. Button presses and finger movements don’t capture this complexity of human agency. How can we be confident that the role of conscious intentions in these simple movement paradigms generalizes to messier in-the-wild decisions and actions? Recently, neuroscientists have begun to adopt more ecologically valid measures of decision and action, including experiments that incorporate incentives, uncertainty, the weighing of values, and the consideration of legal consequences. Attention to these findings offers a clearer window into meaningful agency in the brain.
Alas, a further discussion of the extensive and complex empirical findings in neuroscience of decision-making and voluntary action exceeds this article’s length. Nonetheless, the work above suggests that the case for the illusion of conscious agency from neuroscience is far from a straightforward conclusion.
What bearing does this analysis have, then, for BCI technology that promises to offer mind control? The preceding work suggests that insofar as the decoder algorithm decodes the content of some neural activity that is part of the distributed representation of the user’s intention, the technology is working in the service of the agent’s plans. Otherwise, if the decoded neural activity doesn’t speak for the agent, we might worry that some not-far-off BCI technology will one day implement the nefarious unconscious plans of the user before the user is aware of them.