Responsibility Gaps: A Red Herring?

by Fabio Tollon

What should we do in cases where increasingly sophisticated and potentially autonomous AI-systems perform ‘actions’ that, under normal circumstances, would warrant the ascription of moral responsibility? That is, who (or what) is responsible when, for example, a self-driving car harms a pedestrian? An intuitive answer might be: Well, of course the company that created the car should be held responsible! They built the car, trained the AI-system, and deployed it.

However, this answer is a bit hasty. The worry is that the autonomous nature of certain AI-systems means that it would be unfair, unjust, or inappropriate to hold the company, or any individual engineers or software developers, responsible. To return to the example of the self-driving car: because the car can act outside the control of its original developers, their responsibility may be ‘cancelled’, making it inappropriate to hold them responsible.

Moreover, it may be that the machine in question is not sufficiently autonomous or agential to be responsible itself. This is certainly true of all currently existing AI-systems and may remain true far into the future. Thus, we have the emergence of a ‘responsibility gap’: Neither the machine nor the humans who developed it are responsible for some outcome.

In this article I want to offer some brief reflections on the ‘problem’ of responsibility gaps.

My intuition is that such gaps are not themselves problematic in some special way. Rather, they exacerbate a difficulty we already face in ascribing responsibility in light of emerging technological systems. In brief: The ‘problem’ of responsibility gaps is epistemic, not metaphysical.

One response might be to wonder why a ‘gap’ in responsibility is a problem at all. Accidents, for example, are typical cases where we have some outcome and nobody to hold responsible. This doesn’t lead us to conclude that there is a ‘gap’ in responsibility. Rather, we just acknowledge that nobody can be found to be at fault, and we move on with our lives. If someone spills a drink on me due to turbulence on a plane, they may be causally responsible for the mess, but it would be strange to say they are also blameworthy for it. Perhaps the same holds for some AI-systems: when they cause harmful outcomes, we can simply think of these as ‘accidents’.

Of course, this would be insane. AI-systems are designed and deployed in situations where far more is at stake than a drink on an airplane. Besides, accidents usually mean that the agent had no control over what happened, whereas in the case of AI-systems those designing and developing them retain at least some control, since they choose to deploy these systems in the world.

Additionally, in cases of negligence or maliciousness on the part of developers, it is clear that they should be blamed and held responsible. Thus, for a responsibility gap proper to arise, the AI-system must be autonomous to such an extent that even negligence would no longer be sufficient to justify ascribing, for example, blame to human operators and designers. It is precisely this assumption that I find objectionable: It does not seem plausible that in these cases we would get a responsibility gap. Rather, our ordinary ways of determining degrees of blameworthiness would proceed as usual, even though they might be complicated by the unpredictability of AI-systems. For example, we could investigate whether engineers had attempted to anticipate and mitigate possible risks.

Moreover, the most worrying aspect of responsibility gaps seems to be not whether they exist, but what their consequences would be. For example, John Danaher worries that one consequence of such a gap might be that people lose trust in the legal system. The more general worry seems to be that if nobody is responsible in these cases, then there is no incentive to minimize harm. As Peter Königs puts it:

“Lacking incentives to minimize harm and damage, programmers, manufacturers, and operators of autonomous systems will fail to exercise due care or might even cause harm on purpose, because they can do so with impunity.”

This carelessness would then have potentially disruptive social consequences and would work against the prevailing incentive structure that seeks to minimize harm. This is a real worry, but on the analysis I have provided above, the real difficulty lies in determining exactly what carelessness amounts to. Importantly, this is a question that does not turn on whether there is a responsibility ‘gap’, but rather on the risks associated with the particular kind of technology and the way in which it will be embedded in society. Of course, there are seldom easy answers to these questions, but postulating the existence of a gap in responsibility seems to be a red herring in this context.