“Responsible” AI

by Fabio Tollon

What do we mean when we talk about “responsibility”? We say things like “he is a responsible parent”, “she is responsible for the safety of the passengers”, “they are responsible for the financial crisis”, and in each case the concept of “responsibility” seems to be tracking a different meaning. In the first sense it seems to track virtue, in the second moral obligation, and in the third accountability. My goal in this article is not to go through each and every kind of responsibility, but rather to show that there are at least two important senses of the concept that we need to take seriously when it comes to Artificial Intelligence (AI). Importantly, it will be shown that there is an intimate link between these two types of responsibility, and it is essential that researchers and practitioners keep this in mind.

Recent work in moral philosophy has been concerned with issues of responsibility as they relate to the development, use, and impact of artificially intelligent systems. Oxford University Press recently published its first ever Handbook of Ethics of AI, which is devoted to tackling current ethical problems raised by AI and hopes to mitigate future harms by advancing appropriate mechanisms of governance for these systems. The book is wide-ranging (featuring over 40 unique chapters), insightful, and deeply disturbing. From gender bias in hiring, to racial bias in creditworthiness assessments and facial recognition software, to sexual bias in software that claims to identify a person’s sexual orientation, we are awash with cases of AI systematically enhancing rather than reducing structural inequality.

But how exactly should (can?) we go about operationalizing an ethics of AI in a way that ensures desirable social outcomes? And how can we hold causally involved parties accountable, when the very nature of AI seems to make a mockery of the usual sense of control we deem appropriate in our ascriptions of moral responsibility? These are the two senses of responsibility I want to focus on here: how can we deploy AI responsibly, and how can we hold people responsible when things go wrong?
