by Herbert Harris

Artificial intelligence has emerged not as a single technology but as a civilization-transforming event. Our collective response has predictably polarized between apocalyptic fears of extinction and utopian dreams of abundance. The existential risks are real. As AI systems become increasingly powerful, their inner workings become increasingly opaque to their creators. This raises reasonable fears about our ability to control them and to avoid potentially catastrophic outcomes. Yet between apocalypse and utopia lies a subtler and perhaps equally profound danger. Even if we navigate the many doomsday scenarios that confront us, the same opacity that makes AI potentially dangerous also threatens to undermine the foundations of humanism itself.
The dangers of AI are often framed as disruptions and displacements that will temporarily shake up the workforce and the economy. These disruptions will bring real losses, but we have weathered greater upheavals before. Copernicus removed us from the center of the universe; Darwin took away our biological uniqueness; Freud showed we are not masters of our own minds. Each revolution has both humbled and enriched humanity, opening new ways to understand what it means to be human.
People will still exist, but who will we be when machines surpass doctors, teachers, artists, and philosophers, not in some distant future, but within our lifetimes? AI tutors already offer more personalized instruction than most classrooms. Diagnostic models outperform radiologists on complex scans. Generative systems produce vast amounts of art, music, and text that are indistinguishable from human work. None of this is inherently harmful. Students might learn more, patients might be diagnosed earlier, and art could thrive in abundance. However, the roles that once carried social meaning and usefulness risk becoming merely decorative. Dehumanization does not necessarily mean extinction; it can mean the loss of purpose and self-worth.
