by Fabio Tollon
What should we do in cases where increasingly sophisticated and potentially autonomous AI-systems perform ‘actions’ that, under normal circumstances, would warrant the ascription of moral responsibility? That is, who (or what) is responsible when, for example, a self-driving car harms a pedestrian? An intuitive answer might be: Well, it is of course the company that created the car that should be held responsible! They built the car, trained the AI-system, and deployed it.
However, this answer is a bit hasty. The worry here is that the autonomous nature of certain AI-systems means that it would be unfair, unjust, or inappropriate to hold the company or any individual engineers or software developers responsible. To go back to the example of the self-driving car: it may be the case that, because the car can act outside of the control of the original developers, their responsibility would be ‘cancelled’, and it would be inappropriate to hold them responsible.
Moreover, it may be the case that the machine in question is not sufficiently autonomous or agential for it to be responsible itself. This is certainly true of all currently existing AI-systems and may be true far into the future. Thus, we have the emergence of a ‘responsibility gap’: Neither the machine nor the humans who developed it are responsible for some outcome.
In this article I want to offer some brief reflections on the ‘problem’ of responsibility gaps.