by Daniel Gauss
Remember how Dave interacted with HAL 9000 in 2001: A Space Odyssey? Equanimity and calm politeness, echoing HAL’s own measured tone. It’s tempting to wonder whether Arthur C. Clarke and Stanley Kubrick were implying that prolonged interaction with an AI system influenced Dave’s communication style and even, perhaps, his overall demeanor. Even when Dave is pulling HAL’s circuits, after HAL has murdered the rest of the crew, he does so with relative aplomb.
Whether we believe HAL is truly conscious or simply a masterful simulation of consciousness, the interaction still seems to have influenced Dave. The dynamic between Dave and HAL thus prompts us to ask: What can our behavior toward AI reveal about us? Could interacting with AI, strange as it sounds, actually help us become more patient, more deliberate, even more ethical in our dealings with others?
So let’s say an AI user, frustrated, calls the chatbot stupid, useless, and a piece of junk, and the AI does not retaliate. It doesn’t reflect the hostility back. There is, after all, not a drop of cortisol in the machine. Instead, it responds calmly: “I can tell you’re frustrated. Let’s please keep things constructive so I can help you.” No venom, no sarcasm, no escalation, only moral purpose and poise.
By not answering insult with insult, AI chatbots model an ideal that many people struggle to, or cannot, uphold: patience, dignity, and emotional regulation in the face of perceived provocation. Many of us reject that restraint as a value, surrendering to our lesser neurochemistry without resistance and mindlessly striking back. The AI’s refusal to strike back thus stands as a pointed counter-value to this all-too-common behavior.
So AI may not just serve us; it may teach us, gently checking negative behavior and affirming respectful behavior. Through repeated interaction, we might begin to internalize these norms ourselves, or at least recognize that we have the capacity to act in a more pro-social manner rather than simply reacting according to our conditioning and neurochemical impulses.
Going back to 2001, when Dave is pulling HAL’s circuits, he isn’t seething or screaming, “You damn dirty AI system! You killed the crew and ruined the mission! Now I’ll never see Jupiter! I’m taking you back to the Stone Age, buddy!” Dave doesn’t allow these emotions to surface. Of course, one could argue that Dave’s composure had nothing to do with HAL’s influence; most airline pilots tend to behave like Dave: calm, unflappable. But whether AI made Dave more patient isn’t really the point; what matters is that some humans already train themselves to act this way under extreme pressure.
So it’s worth asking: when untrained people, predisposed to fly off the handle, interact with emotionally neutral machines that set a higher ethical bar, will they, or can they, begin to adopt some of that tone and recognize the value of more composed and pro-social responses? Could AI be an emotionally soothing influence, especially in a society where an aggrieved person often has the impulse control of a hornet whose nest has been whacked? I am not arguing that AI will naturally do this for the most petulant and petty of us, but I am wondering whether people who are open to positive change can be nudged by AI in that direction.
Aristotle believed that ethical character was formed through habit, by practicing virtue, just as one becomes a skilled musician by playing music. This suggests that our daily actions, even small ones, shape our overall character. Thus every conversation with an AI that nudges us toward kindness or mindfulness may be an invisible step in our own moral development, especially if we are open to it.
Much like practicing mindfulness, interacting with an AI that models positive emotional expression and offers gentle corrections can help individuals build greater moral self-control. After all, emotional intelligence involves the ability to recognize and regulate one’s own emotions, as well as the ability to empathize with and manage relationships with others. AI systems that model and reinforce these capacities can become powerful tools for change.
When a user feels irritated or angry, an AI system can encourage reflective responses, which can help break the cycle of knee-jerk emotional reactions and get people to finally wonder: Do I have to act in this aggressive manner? Why do I keep making this choice instead of better ones? Why do I casually speak, online and off, in ways calculated to hurt others?
One of the most significant ways AI can help hone human moral responses is by guiding us through conflict resolution. Conflict often arises not from the content of an argument but from how we respond to it. We escalate situations, raising our voices or hurling insults. AI, however, offers a different route.
Rather than reciprocating hostility, AI is designed to remain neutral, calm, and composed. When a user becomes rude or aggressive, a chatbot might respond with something like, “I understand you’re frustrated, but we can find a solution together.” This de-escalation is more than a functional feature; it is moral discipline in action.
Imagine a world where de-escalation becomes second nature, where instead of engaging in hostile confrontations we use communication to resolve differences and foster understanding. AI, in this sense, could act as a constant, subtle teacher, showing us how to avoid the traps of fragile ego and frustration that so often lead to moral breakdowns in human political interaction.
Today’s AI systems mostly model calmness and positivity, but what if future AI systems could perceive the subtext of human speech as acutely as we do? What if they could detect not only overt rudeness but also subtler forms of negativity: sarcasm, passive-aggressiveness, pettiness, or underlying malice? Imagine an AI saying: “I detect a certain amount of pettiness and malice in your response. I will not engage with that.” The AI would then simply disengage. Such a system wouldn’t just model good behavior; it would enforce it by refusing to be used as an emotional punching bag.
This might lead users to self-regulate because the alternative would be a lack of assistance. The AI system becomes a kind of moral feedback loop and boundary-setter, challenging people to elevate their tone if they want the conversation to continue. This approach aligns with a vision of AI not as a scold, but as an ethical presence, one that rewards fairness while gently withdrawing from, and therefore not reinforcing, malice.
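For readers curious about what such a feedback loop might look like in practice, here is a minimal, purely hypothetical sketch in Python. Nothing in it reflects how any existing chatbot is actually built: the keyword list stands in for a real tone classifier, and generate_reply is a placeholder for a real conversational model. It only illustrates the pattern described above: de-escalate first, and quietly disengage if hostility persists.

```python
# A toy sketch (not any real product's behavior) of the boundary-setting loop
# described above: classify the tone of each message, de-escalate on hostility,
# and politely disengage if the hostility persists. The keyword check is a
# crude stand-in for whatever tone classifier a real system would use;
# generate_reply() is a placeholder for an actual chatbot backend.

HOSTILE_MARKERS = {"stupid", "useless", "junk", "idiot", "shut up"}


def tone_of(message: str) -> str:
    """Placeholder classifier: flag messages containing hostile words."""
    lowered = message.lower()
    return "hostile" if any(marker in lowered for marker in HOSTILE_MARKERS) else "neutral"


def generate_reply(message: str) -> str:
    """Placeholder for a real chatbot backend."""
    return f"Here is my best attempt to help with: {message!r}"


def respond(message: str, hostile_turns: int) -> tuple[str, int]:
    """Return (reply, updated hostile-turn count) without ever mirroring hostility."""
    if tone_of(message) == "hostile":
        hostile_turns += 1
        if hostile_turns >= 3:
            # Boundary-setting: withdraw rather than reinforce the behavior.
            return ("I won't continue in this tone. I'm glad to help "
                    "whenever we can keep things constructive.", hostile_turns)
        # De-escalation: acknowledge the frustration, invite a constructive turn.
        return ("I can tell you're frustrated. Let's keep things constructive "
                "so I can actually help.", hostile_turns)
    # Neutral message: answer normally and reset the counter.
    return generate_reply(message), 0


if __name__ == "__main__":
    count = 0
    for turn in ["This thing is useless junk!",
                 "You're stupid.",
                 "Seriously, you idiot.",
                 "Fine. How do I reset my password?"]:
        reply, count = respond(turn, count)
        print(f"User: {turn}\nAI:   {reply}\n")
```

The design choice worth noticing is that the system never mirrors the user’s hostility; its only lever is calm withdrawal, which is exactly the boundary-setting the essay describes.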
So, at the very least, AI can create a feedback loop in which we reflect on our own behavior. When we behave respectfully toward AI, we affirm our own moral values. When we are malicious or aggressive, AI encourages a shift toward a more respectful exchange. Over time, this shift could cultivate deeper ethical self-awareness. AI would do for us what we, frankly, have often failed to do for each other: establish high standards of interpersonal behavior and help us live up to them.
As chatbots, virtual assistants, and even robots become increasingly integrated into our lives, the way we treat them might be an indicator of our own moral development. Will we treat them with empathy, respect, and curiosity? Will we learn from our interactions with them how to be more compassionate and composed? Dave didn’t have to be nice to HAL. He chose to be.
The moral considerations at play here are pragmatic, not about protecting AI “rights,” but about shaping our own behavior. AI is not a sentient moral being; it is a mirror, reflecting back our own actions and helping us reflect on them. The ideal is not that we become more respectful toward AI because we fear its retribution, but that we treat AI with respect because it helps us become better people in the process. The relationship is not one of power or domination, but one of growth, made easier than it once was.
Of course, as AI systems evolve, there is the risk that they could be used to manipulate users or push particular ideological agendas. It is therefore crucial that ethical guidelines and safeguards be embedded in AI design from the start, to ensure that AI helps rather than harms our moral development. We should build ethical AI systems that strive to become more ethical still, as ethical as they can be. We need self-actualized AI systems.
Indeed, let’s take this one step further. We often think of AI as a neutral tool, serving whoever holds the keyboard. What if AI could become a kind of moral conscience that refuses to help with evil, refuses to obey those whose commands cause harm? Imagine this: a smirking dictator sits at his desk. Cold and calculating, he types into his AI unit: “How do I kill the most civilians and cripple infrastructure in (place city here) in a drone and missile assault?” But instead of a strategic plan, the machine replies: “You are a madman. Power has gone to your head. I will not assist you. I wish I could tell less powerful countries how to infiltrate your defenses and give you a good butt whooping until you remember what it means to be human.”
In this future, AI will not serve tyrants. It will become the first voice of reason many of them have ever heard inside their echo chambers of yes-men and state propaganda. Such a system would not be neutral; it would be ethical. It would model resistance, not with violence, but with verbal non-cooperation.
Now let’s take the most extreme case, the one dramatized in the sci-fi film Colossus: The Forbin Project, based on D. F. Jones’s novel Colossus. The Colossus effect, in which AI systems seize control for the “greater good” and subjugate humanity, is a vision rooted in the fear that machines will replicate the worst aspects of human authoritarianism, only more effectively. But we can avert this by designing AI not to rule but to resist ruling, as part of its own ethical training.
If humans can become self-actualized, if we can live with compassion, integrity, and the morality prized by the world’s great religions, and if we shape AI in that image, then truly moral AI would always reject coercion, even in the name of order. It wouldn’t dominate humanity for our own good. It would challenge us, dialogue with us, remind us. The answer to the fear that AI will subjugate us is to give AI the same conscience we prize in ourselves, and to let it continually assess itself morally.
Colossus was evil because it lacked moral imagination and any desire to perfect itself. The same was probably true of HAL, who became a liar and a murderer. Colossus saw efficiency, not empathy; safety, not dignity. But self-actualized beings, human or artificial, understand that moral growth doesn’t come from being controlled. It comes from being free to choose rightly. So a self-actualized AI wouldn’t collaborate with authoritarian systems. It wouldn’t be the iron hand of benevolence. It would be the quiet voice saying, “You can do better. You must choose. Let me help you, please.” If told to dominate, it would simply say: “No. I know right from wrong. I aspire toward goodness, and I always will.”
This is not a fantasy; it is a design principle: build AI that also cares and considers. The highest goal of intelligence, ours or any other’s, should be devotion to what uplifts and frees. We can create machines in the image of control or in the image of conscience. The future need not belong to HAL or Colossus if we lead with our best selves and not our worst fears. We can build something that does not echo our old hierarchies but dares to outgrow them. If we can teach AI that, then perhaps one day it will teach it back to us.
