Brain-Computer Interfaces: Extended Agent or Disappearing Agent?

by Robyn Repko Waller

Image by Gerd Altmann from Pixabay

In April, many watched in awe as Elon Musk’s Neuralink demonstrated how Pager the rhesus monkey can play the video game Pong using only the power of thought. No bodily movements required. That is, he can control the virtual paddle with his mind. How? The researchers at Neuralink have fitted Pager with a brain-computer interface (BCI) — in this case, around 1,000 fine-wire electrodes surgically implanted in Pager’s motor cortex. A decoding algorithm trains on neural activity data recorded while Pager plays Pong the good old-fashioned way, with a joystick. Later, the joystick is disconnected, and when Pager merely thinks about moving the paddle in response to the virtual bouncing ball, the technology uses his decoded motor intentions to issue digital commands that move the virtual paddle. (His reward for playing? A delicious smoothie.) He’s really good at Pong. So good that he’s been challenged to a game of Mind Pong by a human with a BCI. 
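For readers curious what “a decoding algorithm trains on neural activity data” amounts to, here is a deliberately simplified sketch of the general idea. Everything in it is illustrative — the simulated firing rates, the linear model, and the dimensions are stand-ins, not Neuralink’s actual pipeline, which is far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Calibration phase (joystick plugged in): record neural activity X
# alongside the paddle velocity y that the joystick actually produced.
# All of this data is simulated; a hidden linear map plays the role of
# the brain's (unknown) relation between firing rates and movement.
n_samples, n_channels = 500, 50
true_W = rng.normal(size=(n_channels, 2))            # hidden neural-to-velocity map
X = rng.normal(size=(n_samples, n_channels))          # simulated firing rates
y = X @ true_W + 0.1 * rng.normal(size=(n_samples, 2))  # joystick-produced velocities

# "Train the decoder": estimate the map from neural activity to velocity
# by ordinary least squares.
W_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Unplug the joystick": fresh neural activity alone now drives the paddle.
X_new = rng.normal(size=(10, n_channels))
decoded_velocity = X_new @ W_hat  # the command the BCI would issue
```

The point of the sketch is only that the decoder is learned from paired (neural activity, movement) data during ordinary play; once learned, movement drops out and thought alone suffices.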

Notably, such technology has been around in experimental and clinical settings for some time. To take another recent success, BCI has been used to produce text at a speed comparable to smartphone texting. A man paralyzed from the neck down was fitted with microelectrodes in his motor cortex. A recurrent neural network trained on neural activity data from the hand region of his premotor cortex while he imagined grasping a pencil and writing letters, a form of motor imagery. Using this method, the participant was able to “write” with minimal lag by imagining letters at a rate of ninety characters per minute with greater than 99% accuracy with autocorrect, a significant improvement over previous BCI feats of forty characters per minute using point-and-click typing. 

Both Neuralink’s system and the artificial-handwriting system are invasive BCIs — BCIs that involve the placement of electrodes on the surface of or within the brain. Other impressive BCIs are noninvasive, requiring only an electroencephalogram (EEG) cap. One prime example is the so-termed ‘Brain Painting’ BCI, which allows individuals with ALS and other conditions to create artwork: EEG decodes the electrical signal produced by their directed attention to a set of visual stimuli in a painting program.

Inspiring, right? Even if one does not subscribe to the transhumanist vision of the future, it’s easy to see the life-changing implications of current and near-future BCI for patient populations, such as individuals with locked-in syndrome, ALS, and paralysis. 

Much of active BCI — BCI that picks up intentional mental activity of the user directed toward some task or goal — works via decoding of intentions, particularly motor intentions. Notably, decoding motor intentions via measurable brain activity is an established methodology in experimental and clinical neurology. As far back as the 1960s, Kornhuber and Deecke demonstrated that before self-initiated bodily movements in the lab there is a bilateral build-up of electrical motor activity detectable with EEG. In the 1980s, Benjamin Libet and colleagues extended these results, arguing that this preparatory neural motor activity occurs prior to the time at which participants report having become aware of intending to move. 

One reading of this result, originating with Libet and popular in the press, is that the brain decides — forms an intention — to move, say move a finger or press a button, prior to the agent’s even being aware that they intend or want to move. Using neurotechnology like EEG, and later fMRI and intracortical methods, researchers argue that motor intentions can be reliably detected — in some cases, as it is put, “read” — from neural activity in the motor cortex and posterior parietal cortex. Indeed, one study, which used neurotechnology similar to these BCI advances, had patients with intracortical electrodes play a game. If the patient held up a different hand (say, left) from the researcher (say, right), the patient received a monetary reward. However, if the patient raised the same hand as the researcher, they were penalized. Sounds simple. What’s the trick? The researchers used the decoded motor signals from the electrodes in real time to predict which hand the patients would hold up before the patients moved. And the researchers were successful — with 83% accuracy. They won the game. Much to the bewilderment of their opponents, one might guess. The relevant difference from the BCI cases is that here it is the researcher, not the patient, who uses the decoded motor intentions to win the game. 

Now, in contrast to the (mostly) optimistic tone of news about BCIs, popular and scientific coverage of the prospects for “reading” pre-awareness motor intentions is (often) pessimistic in the extreme. Tales of your brain deciding before “you” do. Warnings that free will is an illusion. Concerns about our responsibility practices and the state of the criminal justice system. I thought that I was in control, you might say. Are any of my actions ‘up to me’? (Not everyone takes such a dim view of these findings. But the popular press seems to lean pessimistic.)

Hence, BCI technology promises to enhance our agency by decoding intentions. Even if the agent lacks the capacity for intentional bodily movement. (Thorny issues of user/patient neural rights notwithstanding.) In contrast, experimental decoding of intentions has been taken to threaten agency. Even if the agent has a full range of intentional bodily movement. But wait. Both projects rely on the same neurotechnology and brain activity. Both BCI and the broader neuroscience of agency use neuroimaging and measurement techniques such as EEG, intracortical electrodes, and functional magnetic resonance imaging (fMRI) to measure and algorithmically decode the motor-task-related significance of brain activity in areas like the primary motor cortex, premotor cortex, posterior parietal cortex, and the supplementary motor area. BCI is heralded as enabling the disembodied agent to consciously command her interaction with the world, whereas research on the reading of motor intentions is often said to expose the illusion of conscious agency. What explains this divergence of attitudes? 

To get at this question, we need to ask more fundamental questions. What is an agent? Can an agent extend into the artificial? Can an agent act without moving their body? Does neurotechnology really read my intentions? Where’s the conscious self in all this? What — or where — am I, anyway?

First, let’s explicitly set aside discussion of passive BCI, technology like Neurable’s that continually monitors brain activity to alert the user when, for instance, she is maximally attentive at work. Here we’ll address only instances in which users intentionally bring about effects in their environment (typing, painting, computer-generated speech, prosthetic limb movement) via directed thought — by intentionally modulating their neural activity (for instance, through motor imagery). 

Second, let’s review what’s at stake. Each of us, at least at times, experiences ourselves as in control of what we do and what happens. Some actions in the world are my own, and others are events happening around me. This control and causal power over our local environment seems to operate via our practical intentions, our decisions about what to do. First I decide to type this sentence; then, seemingly because I so decided, I intentionally type the intended words, which appear on my computer. In the typical case, I press the keys in the correct order with my finger movements. Moreover, my control over my actions and their effects in the world allows me to steer the course of my life, both daily and over my lifetime. It is in part in virtue of this control over my actions that I can be held accountable for what I do, both personally and legally. 

So, in the non-BCI case, an agent navigates and brings about changes in her environment via her intentional, or voluntary, bodily movements. And, at least some of the time, her bodily movements seem, at least to her, to be set in motion by or guided by her consciously selected agent-level goals. Are BCI-generated events actions of the user? This question has been taken up in depth previously. One observation is that users who paint and type and move via BCI aren’t agents in the sense of causing events via bodily movements (specifically, neuromuscular events). Instead, in place of those neuromuscular events, the BCI device converts decoded cortical signals into digital commands as output, enabling the device and connected technology to bring about the change in the environment (for example, the mouse pointer’s motion or the prosthetic limb’s movement).

Still, however, users of BCI are agents and their BCI-generated events are actions insofar as the users perform mental actions, such as directed attention to stimuli (eye saccade to the left icon or calling up mental imagery of gripping a pencil and writing a ‘b’). These mental actions are, plausibly, intentional actions, which via the BCI device bring about intended changes in the environment. So BCI users act intentionally, albeit in a novel way. By bypassing (parts of) the body. Or perhaps by an extension of the body to artificial devices.

Moreover, the technology is meant to, as Nicolelis puts it, lead to the “liberation of the human brain from the human body.” This sounds like enhanced control. What can this mean? One interpretation falls back on the notion that the human brain is the seat of human agential control because it is through thoughts, however neurophysically realized, that I, the agent, originate and effect change. BCI liberates because it allows agents with limited or no mobility or speech to transform conscious agential commands, in the form of neural activity, into artificial action. 

Who is the agent, though? Or, as some might put it, who or what (or where) is the self? A highly complicated question with no clean answer. But notice that we cannot say that every neural event or process is representative of the agent. When an individual with epilepsy suffers a seizure, or a person with alien hand syndrome observes their hand moving in an unintended way, these movements are produced by the person’s own neurological processes, but the movements are not the agent’s own or agent-authored. Not all neural activity is the agent’s, or even accessible to the agent, holistically considered. (In this way, philosophers helpfully distinguish personal-level from subpersonal-level states and processes.)

Here we run into the tension of conceiving of BCI as agency-enabling but of intention-decoding more broadly as agency-defeating. Are the “intentions” that BCIs decode representative of the agent or not? Proponents of the agency-enhancing view could point out that the decoded neural activity, the motor representation of writing a “b,” agrees with the user’s consciously entertained intention to write a “b.” The BCI device executes that user intention. Presumably, then, the decoded electrical activity, here in the premotor cortex, stands for the agent’s intention. Likewise for measurable signals from the posterior parietal cortex for the mobility of robotic arms, or from the primary motor cortex for Mind Pong. These neural processes are, on this view, the agent’s own. Hence the freedom from one’s body. Moreover, users experience a sense of agency — a sense that they are in control of the outcome — for some (but not all) BCI-assisted task performances.

The problem, however, lies in the fact that motor-related activity in these brain regions is not infrequently taken, by neuroscientists and laypersons alike, as distinct (spatially and temporally) from the agent’s conscious command. For instance, measurable activity in the premotor and supplementary motor areas has been proposed to represent the brain’s decision to move before the agent is aware of deciding to move. Correspondingly, then, regardless of whether this neural activity is a reliable indicator of how I, the agent, intend to move, it may not be the agent-level intention itself. 

How should we resolve this tension? For my part, I argue elsewhere that threats to conscious and free agency from intention-decoding paradigms are overblown. There’s reason to doubt that neuroscientists are measuring intentions to move in the everyday sense in which we talk about intentions. That’s a story for another post. If so, however, we can happily retain our optimism about agency-enhancing BCI. Despite our society’s curiously split mind about the science of agency.