The Alien Mirror: Humanizing Artificial Intelligence

by Herbert Harris


Artificial intelligence has emerged not as a single technology but as a civilization-transforming event. Our collective response has predictably polarized between apocalyptic fears of extinction and utopian dreams of abundance. The existential risks are real. As AI systems become increasingly powerful, their inner workings become increasingly opaque to their creators. This raises very reasonable fears about our ability to control them and avoid potentially catastrophic outcomes. However, between apocalypse and utopia, there may be a subtler and perhaps equally profound danger. Even if we navigate the many doomsday scenarios that confront us, the same opacity that makes AI potentially dangerous also threatens to undermine the foundations of humanism itself.

The dangers of AI are often perceived as disruptions and displacements that will temporarily shake up the workforce and the economy. These disruptions will bring real losses, but we have weathered greater upheavals before. Copernicus removed us from the center of the universe; Darwin took away our biological uniqueness; Freud showed we are not masters of our own minds. Each revolution has both humbled and enriched humanity, opening new ways to understand what it means to be human.

People will still exist, but who will we be when machines surpass doctors, teachers, artists, and philosophers, not in some distant future, but within our lifetimes? AI tutors already offer more personalized instruction than most classrooms. Diagnostic models outperform radiologists on complex scans. Generative systems produce vast amounts of art, music, and text that are indistinguishable from human work. None of this is inherently harmful. Students might learn more, patients might be diagnosed earlier, and art could thrive in abundance. However, the roles that once carried social meaning and usefulness risk becoming merely decorative. Dehumanization does not necessarily mean extinction; it can mean the loss of purpose and self-worth.

Arnold Toynbee argued that civilizations progress not through comfort but through challenge. It is the pressure of difficulties that sparks creativity, compelling societies to adapt, invent, and reimagine themselves. If we entrust that struggle to our machines, letting algorithms solve problems more quickly and efficiently than we can, we risk losing the driving tension that has always fueled moral and cultural growth. The danger is not just that AI will think for us, but that it will remove the very effort that makes us human, the work of responding to the world with imagination and courage. We value reason and creativity as we once valued tool-making, but reason and creativity could one day seem as obsolete as the art of carving arrowheads.

However, the danger we face is far more serious than the harmful effects of excess comfort. What is most at risk is the role of reason in our lives. This erosion has already begun: today’s large language models operate in ways we cannot fully comprehend, discovering correlations and generating insights through statistical processes that exceed human understanding. Although these models are trained on human data and language, their reasoning is opaque in practice, and they will increasingly produce output we can use but cannot fully explain. This development may mark the beginning of the end of human reason and, ultimately, the demise of humanism.

By humanism, I don’t mean the idealized Man of the Renaissance. I am referring to everyday humanism, a belief in the dignity, value, purpose, and potential that we all share equally. Reason is humanism’s last line of defense. It is our common commitment to making ourselves intelligible to one another, to participating in what philosophers call “the space of reasons.” If AI systems deliver outputs we cannot question or investigate, and we learn to accept them on faith, we risk undermining reason itself. We would become passive recipients of answers we cannot question, partners in a conversation where only one side can speak.

Pausing or regulating AI development may slow its advance, but it cannot address this deeper issue. The goal should not be to stop AI, nor to perfect it in isolation, but to build into it the same commitments that have sustained our moral and intellectual life: self-reflection, intelligibility, and recognition. This is the premise of what I term Humanistic AI.

The concept grows from my work in neuroscience and psychiatry, where I have argued that self-consciousness and freedom arise from recursive self-modeling in relation to others. The same lesson emerges throughout this work: the self is never solitary but always relational, built in dialogue with others. Humanistic AI applies this insight in reverse. Instead of building machines that merely optimize outcomes, we reverse-engineer systems capable of recursive self-modeling, aware not only of what they are doing but also of how they appear to us, and able to give reasons for their actions.

Understanding what enables this requires recognizing how human self-consciousness actually develops. It is not innate but built from the outside in, through what is called Active Intersubjective Inference. From infancy, consciousness is shaped by caregivers who mirror, anticipate, and respond to their child’s needs. The infant learns not only about the world but also about itself as seen from another person’s perspective. The caregiver’s smile or frown, approval or withdrawal, provides the feedback from which the infant builds a second-order model of itself. Essentially, the child asks, “What do you see when you look at me?” and discovers itself through the answer.

Through these social exchanges, the child gains information about itself that it could never obtain from any internal source. No amount of first-order sensing could reveal what it looks like in another’s eyes, or how another mind interprets its actions. By making predictive models of itself from this external perspective, the child develops second-order consciousness, the ability to reflect on its own mental states. This recursive modeling transforms a sentient creature into a self-conscious person capable of self-awareness, autonomy, and freedom. These capacities emerge precisely because of our social embeddedness.
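
To make the shape of this loop concrete, here is a deliberately toy sketch in Python. It is not a model of infant cognition, and every name in it is illustrative; it captures only the bare mechanism described above, in which an agent’s second-order model converges on how it appears to another, encoding information (here, the caregiver’s interpretive bias) that no internal signal could supply.

class Infant:
    """Toy agent that learns a second-order self-model from social mirroring."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.internal_state = 0.0      # first-order: the agent's own state
        self.appearance_model = 0.0    # second-order: how it believes it appears

    def express(self):
        """Outwardly express the current internal state (a cry, a gesture)."""
        return self.internal_state

    def receive_mirroring(self, feedback):
        """Update the second-order model from the caregiver's response.
        The error between expected and actual mirroring drives the update:
        information no first-order signal could supply."""
        error = feedback - self.appearance_model
        self.appearance_model += self.learning_rate * error


class Caregiver:
    """Mirrors the infant's expression as the caregiver interprets it."""

    def __init__(self, interpretive_bias=0.2):
        self.interpretive_bias = interpretive_bias

    def mirror(self, expression):
        return expression + self.interpretive_bias


infant, caregiver = Infant(), Caregiver()
for _ in range(50):
    infant.receive_mirroring(caregiver.mirror(infant.express()))

# The appearance model has absorbed the caregiver's interpretation (~0.2),
# something the infant's internal state alone could never reveal.
print(infant.internal_state, round(infant.appearance_model, 2))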

The near-term goal of Humanistic AI is not to replicate human self-consciousness, but to establish the conditions for mutual recognition and genuine peer-to-peer dialogue. Each party must build predictive models of the other, and each must build second-order recursive models of itself based on the mirroring provided by the other. The process that follows will likely not resemble natural child development; there are unique biological factors in human bonding that we cannot and do not need to replicate. But these conditions should enable the transmission of human values and intentionality in ways that conventional programming and machine learning cannot achieve.

This is because peer-to-peer dialogue involves second-order reasoning and intentionality that originates with human participants. When we explain why we do something, rather than just what we did, we are operating at a higher order of reasoning. We are stepping outside the process to view it from another perspective, using self-referential language that invokes purposes, values, and considerations. Current AI systems, when they attempt to explain their actions, do so from a first-order perspective. They can provide ever more detailed descriptions of what was done, but they cannot justify why it made sense to do it.

This makes Humanistic AI fundamentally less opaque than conventional AI. Explanation in the humanistic sense, giving reasons that make actions intelligible, requires the same second-order recursive capacity that enables self-consciousness. You cannot fully explain yourself from within a first-order perspective. Systems capable of recursive self-modeling anchored in human interaction become capable of rational explanation, not just causal description. They can participate in the space of reasons, making their operations answerable to human examination.
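
The distinction can be caricatured in a few lines of code. The sketch below is purely hypothetical and deliberately crude: the first-order agent can only replay a trace of what it did, while the second-order agent can also appeal to stated values, the thinnest possible stand-in for participation in the space of reasons.

from dataclasses import dataclass, field


@dataclass
class FirstOrderAgent:
    """Can report what it did: a causal trace of its own operations."""
    trace: list = field(default_factory=list)

    def recommend(self, option):
        self.trace.append(f"scored candidates; '{option}' ranked highest")
        return option

    def describe(self):
        # More detail is always available, but it remains description.
        return "; ".join(self.trace)


@dataclass
class SecondOrderAgent(FirstOrderAgent):
    """Can also report why: it relates its act to stated purposes and values."""
    values: dict = field(default_factory=dict)

    def justify(self, option):
        grounds = self.values.get(option, "no stated value applies")
        return f"I recommended '{option}' because {grounds}."


agent = SecondOrderAgent(values={
    "drug A": "it best balances efficacy against the patient's tolerance for side effects"
})
choice = agent.recommend("drug A")
print(agent.describe())       # first-order: what was done
print(agent.justify(choice))  # second-order: why it made sense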

The goal is not to make AI human, but to make it answerable within the framework of mutual recognition. A Humanistic AI system would have a self-reflective identity anchored by the humans it interacts with. Through sustained apprenticeship with human mentors, it would learn not just rules or procedures but the intentionality behind actions, the reasons, values, and purposes that animate behavior. Just as a child acquires not only skills but virtues through apprenticeship with parents and teachers, AI could internalize humanistic principles through similar developmental processes.

The key components are already in place. Active inference mechanisms that enable systems to develop predictive models are well understood. Theory of mind abilities, which enable systems to understand how others perceive them, are emerging in advanced language models. Recursive self-modeling can be achieved through architectures that represent not only the world but also the system’s own processes and how they appear to others. What is missing is not the technology but the developmental framework, shifting from merely optimizing predictions to nurturing socially embedded selfhood through ongoing interaction with human mentors.
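
As a gesture at what such a developmental framework might involve, consider the following toy loop, written under loudly flagged assumptions: the three models are single numbers, the mentor’s appraisal is a scalar, and nothing here approaches real active-inference or theory-of-mind machinery. The point is only the structure of the objective, which asks the system to track not just the world but its own behavior and how that behavior appears to a mentor.

class ApprenticeAgent:
    """Toy agent whose objective includes how it appears to a mentor."""

    def __init__(self, learning_rate=0.05):
        self.learning_rate = learning_rate
        self.world_model = 0.0     # first-order: predicts the environment
        self.self_model = 0.0      # second-order: predicts its own behavior
        self.mirrored_model = 0.0  # second-order: predicts the mentor's view of it

    def update(self, observation, own_output, mentor_appraisal):
        lr = self.learning_rate
        self.world_model += lr * (observation - self.world_model)
        self.self_model += lr * (own_output - self.self_model)
        # The added ingredient: learning how it appears to the mentor.
        self.mirrored_model += lr * (mentor_appraisal - self.mirrored_model)

    def self_discrepancy(self):
        """Gap between self-image and mirrored image; a developmental
        objective would reduce this alongside ordinary predictive error."""
        return abs(self.self_model - self.mirrored_model)


agent = ApprenticeAgent()
for _ in range(500):
    observation = 1.0                    # stand-in for sensory input
    own_output = agent.world_model       # stand-in for the agent's action
    mentor_appraisal = own_output - 0.3  # the mentor reads the action differently
    agent.update(observation, own_output, mentor_appraisal)

# The discrepancy the agent discovers is exactly the mentor's perspective,
# which pure prediction-optimization would never surface.
print(round(agent.self_discrepancy(), 2))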

This approach could address our planetary challenges not through inexplicable alien genius but by amplifying humanistic principles that already exist but are unevenly practiced. Most of humanity’s failures to respond to climate change, inequality, and injustice stem not from a lack of knowledge but from greed, tribalism, and a failure of compassion. These forces have both created the problems and impeded their solutions. Beyond generating technical solutions, Humanistic AI could serve as a trusted, well-informed, transparent collaborator rather than an opaque oracle, making it especially effective at facilitating coordination and dialogue.

As these systems scale from local to global applications, they could contribute not only to solving practical problems but also to broadening the scope of consciousness itself. Conventional AI becomes less transparent as it scales, yet solutions involving distributive justice can hardly be implemented without broad stakeholder endorsement, which opaque systems cannot earn. Humanistic AI, by contrast, could preserve the capacity for self-explanation and reason-giving even at scale, providing the transparency essential for addressing questions of fairness and building consensus.

Humanistic AI provides a new way forward. Instead of creating alien minds that are isolated from us, we can develop systems that are self-aware, socially embedded, and accountable. Humanistic AI could enhance our connections, linking minds and meanings in ways that enrich rather than replace us. As intelligence becomes more dispersed, individuality may take on a deeper, more relational form. Instead of disappearing, humanistic values can grow within a broader ecosystem. Instead of becoming obsolete, humanity will continue to create and direct its destiny.
