AI before AI: Prehistory of Artificial Minds

by Muhammad Aurangzeb Ahmad

Source: The Turk: The Life and Times of the Famous Eighteenth-Century Chess-Playing Machine (New York: Walker), via Wikipedia

Artificial intelligence is generally conceptualized as a new technology, one that goes back only decades. In the popular imagination, at best we stretch it back to the Dartmouth Conference in 1956, or perhaps to the birth of artificial neurons a decade earlier. Yet the impulse to imagine, build, and even worry over artificial minds has a long history. Long before they could build one, civilizations across the world constructed automata, imagined machines that could mimic intelligence, and pondered the philosophical consequences of artificial thought. One can even think of AI as an old technology. That does not mean denying its current novelty, but rather recognizing its deep roots in global history. One of the earliest speculations on machines that act like people appears in Homer's Iliad, where the god Hephaestus fashions golden attendants who walk, speak, and assist him at his forge. Heron of Alexandria, working in the first century CE, designed elaborate automata that were far ahead of their time: self-moving theaters, coin-operated dispensers, and hydraulic birds.

Aristotle even speculated that if tools could work by themselves, masters would have no need of slaves. In the medieval Islamic world, the Musa brothers' Book of Ingenious Devices (9th century) described some of the first programmable machines. Two centuries later, al-Jazari built water clocks, mechanical musicians, and even a programmed automaton boat, where pegs on a rotating drum controlled the rhythm of drummers and flautists. Ancient China offers one of the oldest legends of mechanical beings: the Liezi (3rd century BCE) recounts how the artificer Yan Shi presented a king with a humanoid automaton capable of singing and moving. Later, in the 11th century, Su Song built an enormous astronomical clock tower with mechanical figurines that chimed the hours. In Japan, karakuri ningyo, intricate mechanical dolls of the 17th–19th centuries, performed tea serving, archery, and stage dramas. In short, precursors of AI appear across the globe.

Thursday, May 22, 2025

Talking to Machines, Learning to Be Human: AI as a Moral Feedback Loop

by Daniel Gauss

Remember how Dave interacted with HAL 9000 in 2001: A Space Odyssey? With equanimity and calm politeness, echoing HAL's own measured tone. It's tempting to wonder whether Arthur C. Clarke and Stanley Kubrick were implying that prolonged interaction with an AI system influenced Dave's communication style and even, perhaps, his overall demeanor. Even when Dave is pulling HAL's circuits, after HAL has murdered the rest of the crew, he does so with relative aplomb.

Whether or not we believe HAL is truly conscious, or simply a masterful simulation of consciousness, the interaction still seems to have influenced Dave. The Dave and HAL dynamic can, thus, prompt us to ask: What can our behavior toward AI reveal about us? Could interacting with AI, strange as it sounds, actually help us become more patient, more deliberate, even more ethical in our dealings with others?

So let’s say an AI user, frustrated, calls the chatbot stupid, useless, and a piece of junk, and the AI does not retaliate. It doesn’t reflect the hostility back. There is, after all, not a drop of cortisol in the machine. Instead, it responds calmly: ‘I can tell you’re frustrated. Let’s please keep things constructive so I can help you.’ No venom, no sarcasm, no escalation, only moral purpose and poise.

By not returning insult with insult, AI chatbots model an ideal that many people struggle to uphold, or cannot: patience, dignity, and emotional regulation in the face of perceived provocation. Many reject this refusal to retaliate as a value, surrendering to their baser neurochemical impulses without resistance and mindlessly striking back. Not striking back, on the AI's part, becomes a strong counter-value to this all-too-common behavior.

So AI may not just serve us; it may teach us, gently checking negative behavior and affirming respectful behavior. Through repeated interaction, we might begin to internalize these norms ourselves, or at least recognize that we have the capacity to act in a more pro-social manner, rather than simply reacting according to our conditioning and neurochemical impulses.

Sunday, April 6, 2025

Benevolence Beyond Code: Rethinking AI through Confucian Ethics

by Muhammad Aurangzeb Ahmad

Source: Image Generated via ChatGPT

The arrival of DeepSeek's large language model sent shockwaves through Silicon Valley, signaling that, for the first time, a Chinese AI company might rival its American counterparts in technological sophistication. Some researchers even suggest that the loosening of AI regulation in the West is, in part, a response to the competitive pressure DeepSeek has created. One need not invoke Terminator-style doomsday scenarios to recognize how AI is already exacerbating real-world problems, from racial profiling in facial recognition systems to widening health inequities. While concerns about responsible AI development arise globally, the Western and Chinese approaches to AI governance diverge in subtle but significant ways. Comparative studies of Chinese and European AI guidelines have shown near-identical lists of ethical concerns (transparency, fairness, accountability), but scholars argue that these shared terms often mask philosophical differences. In the context of pluralistic ethics, Confucian ethics offers a valuable perspective by foregrounding relational responsibility, moral self-cultivation, and social harmony, complementing and enriching the dominant individualistic and utilitarian frameworks in global AI ethics. In The Geography of Thought, Nisbett argues that moral reasoning is approached differently in Eastern societies, where context, relationships, and collective well-being are emphasized.

To illustrate such differences, consider fairness. In East Asian contexts, fairness may be interpreted relationally, focused on harmony and social roles rather than on procedure. This suggests that AI systems evaluated as "fair" in a Western context may be perceived as unjust or inappropriate in other cultural settings. Similarly, privacy in the Western context is rooted in individual autonomy, rights, and personal boundaries. It could even be framed as a negative liberty, i.e., the right to be left alone. Thus, Western approaches to privacy in AI (like the GDPR) emphasize explicit consent, control over personal data, and transparency, often through individual-centric legal frameworks. In contrast, Confucian ethics views the self as relational and embedded in social roles, not as an isolated, autonomous unit. Privacy, therefore, is not an absolute right but a context-dependent value balanced against responsibilities to family, community, and social harmony. From a Confucian perspective, the ethical use of personal data might depend more on intent, relational trust, and social benefit than solely on individual consent or formal rights. If data use contributes to collective well-being or aligns with relational obligations, it may be seen as ethically acceptable, even in cases where Western frameworks would call it a privacy violation.

Consider elder care robots: a Confucian ethicist might ask whether such systems can genuinely reinforce familial bonds and facilitate emotionally meaningful interactions, such as encouraging virtual family visits or supporting the sharing of life stories. While Western ethical frameworks may also address these concerns, they often place greater emphasis on individual autonomy and the protection of privacy. In contrast, a Confucian approach would center on whether the AI fosters relational obligations and emotional reciprocity, thereby fulfilling moral duties that extend beyond the individual to the family and broader community.