Acting Machines

by Fabio Tollon

Fritzchens Fritz / Better Images of AI / GPU shot etched 1 / CC-BY 4.0

Machines can do lots of things. Robotic arms can help make our cars, autonomous cars can drive us around, and robotic vacuums can clean our floors. In all of these cases it seems natural to think that these machines are doing something. Of course, a ‘doing’ is a kind of happening: when something is done, usually something happens, namely, an event. Brushing my teeth, going for a walk, and turning on the light are all things that I do, and when I do them, something happens (an event). We might think the same about robotic arms, autonomous vehicles, and robotic vacuum cleaners. All these systems seem to be doing something, which then leads to an event occurring. However, in the case of humans, we often think of what we do in terms of agency: when we perform an action, things are not merely happening (in a passive sense). Rather, we are acting, we are exercising our agency, we are agents. Can machines be agents? Is there something like artificial agency? Well, as with most things in philosophy, it depends.

Agency, in its human form, is usually understood in terms of our mental states. It therefore seems natural to think that in order for something to be an agent, it should at least in principle have something like mental states (in the form of, for example, beliefs and desires). More than this, for an action to be properly attributable to an agent, we might insist that the action be caused by the agent’s mental states. Thus, we might say that for an entity to be considered an agent, it should be possible to explain its behaviour by referring to its mental states.

At this point it seems obvious that currently existing machines are not agents: it would be strange to think that robotic arms, autonomous cars, or vacuum cleaners have anything like mental states.

Of course, I cannot do justice to all the various frameworks we might apply to the question of whether machines could be agents. However, the most intuitive framework seems to be a broadly behaviouristic approach. This is because one of the most important things machines lack is the same internal make-up as us humans, and yet they seem able to perform similar tasks. Behaviourism, with its focus on outward performance and nonchalance about internal mechanism, might therefore be quite useful in this regard.

On such a behaviouristic view we would ground agency not in the capacity to have intentions to act, but rather in the act itself. That is, the behaviourist is concerned with observable performance. Here we find no talk of inner mental states; we are instead strictly concerned with what can be observed from a third-person perspective. Essentially, behaviourism posits an epistemic constraint on our philosophizing about the mental: if we cannot know something through observation, then we are foreclosed from using it in our theorizing (specifically as this concerns mental states and their “content”). Significantly, however, such perspectives have serious shortcomings.

An example of their limitations relates to their focus on the successful prediction of behaviour. On such a view, in order to know whether an entity is an agent, we must be able to successfully predict its behaviour, based on the ways we would expect an adult human to behave. However, successful prediction does not entail that we understand the system under investigation. More importantly, such prediction leaves out which causal powers are relevant to the resulting behaviour, and this leaves us with an impoverished account of agency. If we don’t understand why agents do what they do, then this can hardly be a convincing account of agency.

My proposal is therefore to shift the focus of the debate: when we want to know what actions are, we should not focus on their description but rather on their ascription. Instead of describing what is going on in the minds of potential agents, we ascribe particular beliefs and desires to them, which we can then use to understand their behaviour. Interestingly, this kind of approach seems to capture what was so attractive about behaviourism, but without the epistemic cost of throwing out our understanding of what might be going on inside the agent. When we ascribe mental states in this way, we need not make any ontological claims about their existence: we can remain agnostic. What is important for such an approach, however, is that these ascriptions explain what the agent does, and, most significantly, what it is responsible for. This shifts the burden of proof, in the case of agency, from “what is the case” (in the sense of necessary and sufficient conditions for agency) to “what should we do” (in the sense of who or what is responsible).

Let us take the example of football and ask: can machines ever play football? Well, we already have things like the RoboCup, in which autonomous robots (which look a bit like ice cream cones) compete at “football” with one another. We might imagine that it is even possible, in the future, for these robots to take on humanoid form and play in a way that resembles humans. The behaviourist might then say that yes, these are indeed agents. If it walks like a duck, quacks like a duck, and plays football like a duck… well, you get the picture. However, as above, we might insist that no, robots can’t really play football: to really play football one has to understand the rules of the game, the different roles of each player, how to work as a team, etc. Robots can’t do any of these things, so they can’t even ‘play’ at all, never mind play football! My response is to claim that proceeding in this way is to have been asking the wrong questions all along: agency is not about the way the world is (“can machines play football?”). Instead, it is about asking why we might be interested in whether machines can play football (for example) in the first place. More often than not, what we are interested in, in these cases, are questions of responsibility. So, talk about the agency of machines should not be about whether they have certain properties or capacities, but rather about whether they can be held responsible for what they have done. Of course, this opens an entirely different set of questions concerning the nature of responsibility. However, I see this as a feature, not a bug, of ascriptivism more generally. Focusing on responsibility gives us a way to deal more concretely with the practical concerns that machines might raise, and prompts us to consider how we ought to handle these responsibility ascriptions.