Does AI Need Free Will to Be Held Responsible?

by Fabio Tollon

We have always been a technological species. From the use of basic tools to advanced new forms of social media, we are creatures who do not just live in the world but actively seek to change it. However, we now live in a time when many believe that modern technology, especially advances driven by artificial intelligence (AI), will come to challenge our responsibility practices. Digital nudges can remind you of your mother’s birthday, ToneCheck can make sure you only write nice emails to your boss, and your smart fridge can tell you when you’ve run out of milk. The point is that our lives have always been enmeshed with technology, but our current predicament seems categorically different from anything that has come before. The technologies at our disposal today are not merely tools to various ends; they come to bear on our characters by influencing many of our morally laden decisions and actions.

One way in which this might happen is when sufficiently autonomous technology “acts” in such a way as to challenge our usual practices of ascribing responsibility. When an AI system performs an action that results in an event with moral significance (one where we would normally deem it appropriate to attribute moral responsibility to human agents), it seems natural that people would still have emotional responses. This is especially true if the AI is perceived as having agential characteristics. If a self-driving car harms a human being, it would be quite natural for bystanders to feel anger at the cause of the harm. However, it seems incoherent to feel angry at a chunk of metal, no matter how autonomous it might be.

Thus, we seem to have two questions here. The first is whether our responses are fitting, given the situation. The second is an empirical question: whether people will in fact behave in this way when confronted with such autonomous systems. Naturally, as a philosopher, I will try not to speculate too much with respect to the second question, and thus what I say here is mostly concerned with the first.



Monday, April 12, 2021

“Responsible” AI

by Fabio Tollon

What do we mean when we talk about “responsibility”? We say things like “he is a responsible parent”, “she is responsible for the safety of the passengers”, “they are responsible for the financial crisis”, and in each case the concept of “responsibility” seems to track a different meaning. In the first sense it seems to track virtue, in the second moral obligation, and in the third accountability. My goal in this article is not to go through each and every kind of responsibility, but rather to show that there are at least two important senses of the concept that we need to take seriously when it comes to Artificial Intelligence (AI). Importantly, there is an intimate link between these two types of responsibility, and it is essential that researchers and practitioners keep this in mind.

Recent work in moral philosophy has been concerned with issues of responsibility as they relate to the development, use, and impact of artificially intelligent systems. Oxford University Press recently published its first-ever Handbook of Ethics of AI, which is devoted to tackling current ethical problems raised by AI and hopes to mitigate future harms by advancing appropriate mechanisms of governance for these systems. The book is wide-ranging (featuring over 40 unique chapters), insightful, and deeply disturbing. From gender bias in hiring, to racial bias in creditworthiness assessments and facial recognition software, to sexual bias in software that purports to identify a person’s sexual orientation, we are awash with cases of AI systematically enhancing rather than reducing structural inequality.

But how exactly should (can?) we go about operationalizing an ethics of AI in a way that ensures desirable social outcomes? And how can we hold causally involved parties accountable, when the very nature of AI seems to make a mockery of the usual sense of control we deem appropriate in our ascriptions of moral responsibility? These are the two senses of responsibility I want to focus on here: how can we deploy AI responsibly, and how can we hold the right parties responsible when things go wrong?