Robots, Emotions, and Relational Capacities

by Fabio Tollon

The Talon Bomb Disposal Robot is used by U.S. Army Special Forces teams for remote-controlled explosive ordnance disposal.

I take it as relatively uncontroversial that you, dear reader, experience emotions. There are times when you feel sad, happy, relieved, overjoyed, pessimistic, or hopeful. Often it is difficult to know exactly which emotion we are feeling at a particular point in time, but, for the most part, we can be fairly confident that we are experiencing some kind of emotion. Now we might ask: how do you know that others are experiencing emotions? Well, straightforwardly enough, they could tell you. But, more often than not, we read their body language, tone, and overall behaviour in order to figure out what might be going on inside their heads. So what is stopping a machine from doing all of these things? Can a robot have emotions? I’m not really convinced that this question makes sense, given the kinds of things that robots are. However, I have the sense that whether or not robots can really have emotions is independent of whether we will treat them as if they have emotions. The metaphysics, then, seems to be a bit messy, so I’m going to do something naughty and bracket it. Let’s take the as if seriously, and consider social robots.

Taking this pragmatic approach means we don’t need a refined theory of what emotions are, or of whether agents “really” have them. Instead, we can ask how likely it is that humans will attribute emotions or agency to robots. It turns out we do this all the time! Human beings seem to have a natural propensity to attribute consciousness and agency (phenomena that are often closely linked to the ability to have emotions) to entities that look and behave as if they have those properties. This tendency seems to be a product of our pattern-tracking abilities: if things behave in a certain way, we put them in a certain category, and this helps us keep track of and make sense of the world around us.

While this kind of strategy makes little sense if we are trying to explain and understand the inner workings of a system, it makes a great deal of sense if all we are interested in is trying to predict how an entity might behave or respond. Consider the famous case of bomb-defusing robots, which are modelled on stick insects.

Once they locate a mine, they defuse it by stepping on it with one of their legs. This detonates the mine and destroys the leg. Of course, it is much safer to have a robot performing this task than a human. However, something interesting happened during a test of the robot:

“Every time it found a mine, blew it up and lost a limb, it picked itself up and readjusted to move forward on its remaining legs, continuing to clear a path through the minefield. Finally it was down to one leg. Still, it pulled itself forward. Tilden [the creator of the robot] was ecstatic. The machine was working splendidly. The human in command of the exercise, however — an Army colonel — blew a fuse. The colonel ordered the test stopped. Why? asked Tilden. What’s wrong? The colonel just could not stand the pathos of watching the burned, scarred, and crippled machine drag itself forward on its last leg. This test, he charged, was inhumane.”

Of course, this is just a robot, a machine, incapable of experiencing anything. However, this fact did not stop the colonel from experiencing distress at watching something that, had it been capable of such experiences, would have been in a great deal of pain. All this is to say that we naturally extend human- and animal-like qualities to robots. It is therefore not far-fetched that we might project emotional properties onto them. The question then becomes what we ought to make of this. Do we lean into it, and acknowledge that our experience of these entities settles the question in some sense, and so accept that robots can have something like indirect emotions (emotions that we project onto them)? Or do we resist this interpretation and insist that sometimes we just get things wrong, and that we need to be more careful in our attributions (which might even mean stricter design standards for those creating such interactive and agent-like machines)? We could also do something completely different.

Instead of asking about the nature of the systems themselves (whether they have emotions, whether they are agents, and so on), I think we should ask how they impact us. This echoes the approach of Cindy Friedman, who has recently argued, utilizing an ubuntu framework, that our interactions with (humanoid) robots become concerning “if they crowd out human relations, because such relations prevent us from becoming fully human”. Given what I have said above about our propensity to attribute agency and mental states to artifacts, this kind of attribution bias would seem to be even more heightened in the case of humanoid robots. Moreover, as Friedman notes, an essential part of what it means to be human involves having relations with other humans. This capacity for relations strikes me as a significant factor in our dealings with robots, humanoid or otherwise. If robots do not (or cannot) have capacities for relations, and if they foreclose our ability to have these relations with others, then we need to seriously consider whether we want the widespread use of such systems. And because I have been operating at the pragmatic level this whole time, this question can be considered independently of any metaphysical theory of what emotions are or of whether robots can in principle have them. Thus, we might have strong moral reasons not to create robots that might replace our relations with other humans.