The Alien Mirror: Humanizing Artificial Intelligence

by Herbert Harris

Humanistic AI

Artificial intelligence has emerged not as a single technology but as a civilization-transforming event. Our collective response has predictably polarized between apocalyptic fears of extinction and utopian dreams of abundance. The existential risks are real. As AI systems become increasingly powerful, their inner workings become increasingly opaque to their creators. This raises very reasonable fears about our ability to control them and avoid potentially catastrophic outcomes. However, between apocalypse and utopia, there may be a subtler and perhaps equally profound danger. Even if we navigate the many doomsday scenarios that confront us, the same opacity that makes AI potentially dangerous also threatens to undermine the foundations of humanism itself.

The dangers of AI are often perceived as disruptions and displacements that will temporarily shake up the workforce and the economy. These disruptions would bring significant losses, but we have overcome greater challenges in the past. Copernicus removed us from the center of the universe; Darwin took away our biological uniqueness; Freud showed we are not masters of our own minds. Each revolution has both humbled and enriched humanity, opening new ways to understand what it means to be human.

People will still exist, but who will we be when machines surpass doctors, teachers, artists, and philosophers, not in some distant future, but within our lifetimes? AI tutors already offer more personalized instruction than most classrooms. Diagnostic models outperform radiologists on complex scans. Generative systems produce vast amounts of art, music, and text that are indistinguishable from human work. None of this is inherently harmful. Students might learn more, patients might be diagnosed earlier, and art could thrive in abundance. However, the roles that once carried social meaning and usefulness risk becoming merely decorative. Dehumanization does not necessarily mean extinction; it can mean the loss of purpose and self-worth.

Monday, April 12, 2021

“Responsible” AI

by Fabio Tollon

What do we mean when we talk about “responsibility”? We say things like “he is a responsible parent”, “she is responsible for the safety of the passengers”, and “they are responsible for the financial crisis”, and in each case the concept of “responsibility” seems to track a different meaning. In the first sense it tracks virtue, in the second moral obligation, and in the third accountability. My goal in this article is not to go through every kind of responsibility, but rather to show that there are at least two important senses of the concept that we need to take seriously when it comes to Artificial Intelligence (AI). Importantly, there is an intimate link between these two types of responsibility, and it is essential that researchers and practitioners keep this in mind.

Recent work in moral philosophy has been concerned with issues of responsibility as they relate to the development, use, and impact of artificially intelligent systems. Oxford University Press recently published its first-ever Handbook of Ethics of AI, which is devoted to tackling current ethical problems raised by AI and hopes to mitigate future harms by advancing appropriate mechanisms of governance for these systems. The book is wide-ranging (featuring over 40 unique chapters), insightful, and deeply disturbing. From gender bias in hiring, to racial bias in creditworthiness assessments and facial recognition software, to bias in systems that claim to infer a person’s sexual orientation, we are awash in cases of AI systematically entrenching rather than reducing structural inequality.

But how exactly should (can?) we go about operationalizing an ethics of AI in a way that ensures desirable social outcomes? And how can we hold causally involved parties accountable when the very nature of AI seems to make a mockery of the usual sense of control we deem appropriate in our ascriptions of moral responsibility? These are the two senses of responsibility I want to focus on here: how we can deploy AI responsibly, and how we can hold people responsible when things go wrong.