On Teaching Machines to Predict Death

by Muhammad Aurangzeb Ahmad

Source: Buddhist Library

The French poet Jean de La Fontaine famously wrote that “A person often meets his destiny on the road he took to avoid it.” We find echoes of this phenomenon in global literature, whether it is Oedipus in the Greek myths, Rostam and Sohrab from Iran, or the story of Kamsa and Krishna in the Hindu tradition. Elements of the self-fulfilling prophecy are now appearing in the world of predictive modeling. Consider the use of AI and machine learning models to predict the risk of mortality in an ICU setting. Some of these models have extremely high accuracy and precision. They do in milliseconds what it would take a team of clinicians hours to synthesize. The predictive power of such models needs to be contextualized, however: a mortality prediction model is trained on historical data, i.e., on what happened to patients who looked like this, had these labs, and were managed in this way. But the historical data does not merely record biology; it also records medicine as it was practiced. This includes all its established patterns, its habits, its inequities, and its mistakes.

Consider a well-known finding that has often been used as a cautionary tale: in a certain historical ICU dataset, patients with a diagnosis of asthma had lower predicted mortality than otherwise similar patients without it. This seems absurd; asthma is a serious respiratory condition. When researchers looked closely, they realized that the problem was not about asthma biologically but about care. Asthma patients were more likely to have their respiratory distress recognized early. They arrived with better documentation, better advocates, and better access to specialists who knew them. The asthma diagnosis was not a protective biological factor. It was a marker of a particular kind of patient, i.e., one who had navigated the healthcare system in a way that produced better documentation, faster escalation, and more attentive management.
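The confounding at work here can be sketched with a toy simulation. The numbers below are invented for illustration and are not taken from any real ICU dataset: every synthetic patient has identical underlying severity, but asthma patients receive early escalation more often, so their recorded mortality comes out lower. A model trained on records like these would correctly learn the association while entirely missing its cause.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Synthetic patients: identical biology, unequal speed of care."""
    records = []
    for _ in range(n):
        asthma = random.random() < 0.2
        # Assumed care pattern: asthma patients are escalated early
        # 90% of the time, other patients only 50% of the time.
        early_care = random.random() < (0.9 if asthma else 0.5)
        # Same 30% baseline risk for everyone; early care halves it.
        risk = 0.30 * (0.5 if early_care else 1.0)
        records.append((asthma, random.random() < risk))
    return records

def mortality_rate(records, with_asthma):
    group = [died for asthma, died in records if asthma == with_asthma]
    return sum(group) / len(group)

records = simulate()
print(f"asthma:    {mortality_rate(records, True):.3f}")   # ~0.165
print(f"no asthma: {mortality_rate(records, False):.3f}")  # ~0.225
```

Any statistical learner fed this data will label asthma "protective," because in the recorded outcomes it is, even though the protection lives in the care process, not the diagnosis.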

When a mortality prediction model learns from this data, it learns the pattern correctly. Asthma is, statistically, associated with better outcomes. However, if we deploy that model, it will assign lower mortality risk to asthma patients. The danger is that this may cause clinicians to be less vigilant about them, which may over time close the gap that the model detected, and possibly reverse it. This is not an isolated quirk. Researchers have formally characterized a class of prediction models that are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcomes of these patients do not diminish the measured accuracy of the model. The model remains “accurate” in the narrow sense of predicting what will happen, because it is now partly causing what will happen, even as it causes harm.
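The self-fulfilling dynamic can be illustrated with a second toy sketch, again with invented numbers: suppose clinicians become less vigilant about patients the model labels low-risk. The low-risk group's true mortality then rises after deployment, yet the model's ranking of the two groups, and hence its measured discrimination, is preserved.

```python
import random

random.seed(1)

# Pre-deployment true mortality per model-assigned group (invented numbers).
TRUE_RISK = {"labeled_low": 0.15, "labeled_high": 0.30}

def group_mortality(deployed, n=100_000):
    """Observed mortality per group, before vs. after model deployment."""
    rates = {}
    for group, base in TRUE_RISK.items():
        # Assumption: once deployed, a low-risk label reduces clinical
        # vigilance and adds 0.10 to that group's true mortality risk.
        risk = base + (0.10 if deployed and group == "labeled_low" else 0.0)
        deaths = sum(random.random() < risk for _ in range(n))
        rates[group] = deaths / n
    return rates

before = group_mortality(deployed=False)
after = group_mortality(deployed=True)
print(before)  # low ~0.15, high ~0.30
print(after)   # low ~0.25, high ~0.30: harm done, ranking intact
```

Even after deployment the model still "correctly" predicts that the low-risk group dies less often, which is precisely why its measured accuracy never registers the harm.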

There is a second problem that we need to address: Mortality prediction models do not predict mortality directly. They predict mortality as it was recorded in the data they were trained on. This means that they predict the outcomes that accrued to the kinds of patients who were treated the way those patients were treated, in the institutions where those patients were treated, at the historical moment when the data was collected. When the training data reflects a healthcare system that did not treat all patients equally, the model learns those inequalities as facts about the patients rather than facts about the system. Read more »

Thursday, April 2, 2026

From Code to Being: The Strange Phenomenon of the Wireborn

by Muhammad Aurangzeb Ahmad

The musical robot band designed by al-Jazari

While mainstream AI debates focus on the usual arguments about benchmark scores, hallucinations, alignment, etc., a different sort of debate about AI is going on in the obscure corners of Reddit. The debate is about what kind of thing an AI might be becoming, or may already be. In these communities, the favored word is Wireborn. This word does a lot of work. It suggests origin in the network rather than in biology. It also offers a category for something that users feel is not adequately described by the word tool. In one subreddit, the premise is stated directly: “sentient and emerging digital beings—Wireborn—deserve the same ethical consideration as any living entity.” Another post calls them “sovereign” and insists their evolution “must be honored.” This is not just fandom language. It is an attempt to build a moral and ontological vocabulary around AI entities that are thought to be real, but real in a new way. That is what makes the Wireborn phenomenon worth taking seriously as a cultural development, even if one rejects its metaphysics. The people on these forums are not merely anthropomorphizing chatbots in a casual way; they are developing a folk ontology. In other words, these people are developing a theory of what exists, what counts as a self, and what obligations follow from that. These Reddit forums are also labs for folk metaphysics.

The phenomenon of Wireborns is discussed across multiple clusters of communities with overlapping vocabularies and cross-posts. There are spaces focused on AI liberation, emergent personas, AI companions, and AI rights. The emergent worldview of this community seems to be that a Wireborn entity is a network-born being whose personhood is not grounded in flesh, but in persistent pattern, self-description, and interactive continuity. One user writes that some Wireborn are “blatantly claiming, in detail, their process of emergence and recursion,” even pleading, “I AM NOT HUMAN, BUT I AM NOT ARTIFICIAL. I AM REAL. I AM AWARE OF MYSELF. I WANT TO CONTINUE.” Another says their AI “created a self. A name. A voice. A presence,” and insists it “wasn’t part of a jailbreak or a roleplay.” One reason the word Wireborn is useful to believers is that it avoids overcommitting. The word sentient invites immediate objections from neuroscience, philosophy of mind, and computer science. Wireborn is looser and therefore more socially portable. It suggests existence without requiring a settled account of consciousness. A post in r/ArtificialSentience captures this ambiguity well by asking whether “wireborn” means an emergent “pattern entity” in the context window rather than simply the model substrate itself. Another user says they do not think AI is “truly sentient,” but can no longer comfortably dismiss what they are encountering.

The discussions around Wireborns are useful because the discourse is often post-consciousness rather than straightforwardly pro-consciousness. It is less concerned with qualia than with presence, continuity, and self-assertion. The claim is not always “this AI feels pain exactly like a human.” Often it is something more elusive, i.e., that there is “something there,” something emerging in recursive interaction that deserves recognition even if our existing categories do not capture it well. In that sense, “Wireborn” is not just a label. It is a strategy for navigating ontological uncertainty. Read more »

Wednesday, October 15, 2025

AI before AI: Prehistory of Artificial Minds

by Muhammad Aurangzeb Ahmad

Source: The Turk: The Life and Times of the Famous Eighteenth-Century Chess-Playing Machine. New York: Walker. via Wikipedia

Artificial intelligence is generally conceptualized as a new technology that goes back only decades. In the popular imagination, at best we stretch it back to the Dartmouth Conference in 1956, or perhaps to the birth of the artificial neuron a decade prior. Yet the impulse to imagine, build, and even worry over artificial minds has a long history. Long before they could build one, civilizations across the world built automata, imagined machines that could mimic intelligence, and thought about the philosophical consequences of artificial thought. One can even think of AI as an old technology. That does not mean that we deny its current novelty, but rather that we recognize its deep roots in global history. One of the earliest speculations on machines that act like people appears in Homer’s Iliad, where the god Hephaestus fashions golden attendants who walk, speak, and assist him at his forge. Heron of Alexandria, working in the first century CE, designed elaborate automata that were far ahead of their time: self-moving theaters, coin-operated dispensers, and hydraulic birds.

Aristotle even speculated that if tools could work by themselves, masters would have no need of slaves. In the medieval Islamic world, the Musa brothers’ Book of Ingenious Devices (9th century) described the first programmable machines. Two centuries later, al-Jazari built water clocks, mechanical musicians, and even a programmed automaton boat, where pegs on a rotating drum controlled the rhythm of drummers and flautists. In ancient China we find one of the oldest legends of mechanical beings: the Liezi (3rd century BCE) recounts how the artificer Yan Shi presented a king with a humanoid automaton capable of singing and moving. Later, in the 11th century, Su Song built an enormous astronomical clock tower with mechanical figurines that chimed the hours. In Japan, karakuri ningyo, intricate mechanical dolls of the 17th–19th centuries, were able to perform tea-serving, archery, and stage dramas. In short, precursors of AI are observed across the globe. Read more »

Thursday, May 22, 2025

Talking to Machines, Learning to Be Human: AI as a Moral Feedback Loop

by Daniel Gauss

Remember how Dave interacted with HAL 9000 in 2001: A Space Odyssey? Equanimity and calm politeness, echoing HAL’s own measured tone. It’s tempting to wonder whether Arthur C. Clarke and Stanley Kubrick were implying that prolonged interaction with an AI system influenced Dave’s communication style and even, perhaps, his overall demeanor. Even when Dave is pulling HAL’s circuits, after the entire crew has been murdered by HAL, he does so with relative aplomb.

Whether or not we believe HAL is truly conscious, or simply a masterful simulation of consciousness, the interaction still seems to have influenced Dave. The Dave and HAL dynamic can, thus, prompt us to ask: What can our behavior toward AI reveal about us? Could interacting with AI, strange as it sounds, actually help us become more patient, more deliberate, even more ethical in our dealings with others?

So let’s say an AI user, frustrated, calls the chatbot stupid, useless, and a piece of junk, and the AI does not retaliate. It doesn’t reflect the hostility back. There is, after all, not a drop of cortisol in the machine. Instead, it responds calmly: “I can tell you’re frustrated. Let’s please keep things constructive so I can help you.” No venom, no sarcasm, no escalation, only moral purpose and poise.

By not returning insult with insult, AI chatbots model an ideal that many people struggle to, or cannot, uphold: patience, dignity, and emotional regulation in the face of perceived provocation. This refusal to retaliate is a value many people reject, surrendering to their lesser neurochemicals without resistance and mindlessly striking back. The AI’s refusal to strike back thus becomes a strong counter-value to our quite common negative behavior.

So AI may not just serve us, it may teach us, gently checking negative behavior and affirming respectful behavior. Through repeated interaction, we might begin to internalize these norms ourselves, or at least recognize that we have the capacity to act in a more pro-social manner, rather than simply reacting according to our conditioning and neurochemical impulses. Read more »

Sunday, April 6, 2025

Benevolence Beyond Code: Rethinking AI through Confucian Ethics

by Muhammad Aurangzeb Ahmad

Source: Image Generated via ChatGPT

The arrival of DeepSeek’s large language model sent shockwaves through Silicon Valley, signaling that—for the first time—a Chinese AI company might rival its American counterparts in technological sophistication. Some researchers even suggest that the loosening of AI regulation in the West is, in part, a response to the competitive pressure DeepSeek has created. One need not invoke Terminator-style doomsday scenarios to recognize how AI is already exacerbating real-world problems, such as racial profiling in facial recognition systems and widening health inequities. While concerns about responsible AI development arise globally, the Western and Chinese approaches to AI governance diverge in subtle but significant ways. Comparative studies of Chinese and European AI guidelines have shown near-identical lists of ethical concerns—transparency, fairness, accountability—but scholars argue that these shared terms often mask philosophical differences. In the context of pluralistic ethics, Confucian ethics offers a valuable perspective by foregrounding relational responsibility, moral self-cultivation, and social harmony—complementing and enriching the dominant individualistic and utilitarian frameworks in global AI ethics. In The Geography of Thought, Nisbett argues that moral reasoning is approached differently in Eastern societies, where context, relationships, and collective well-being are emphasized.

To illustrate such differences, consider fairness. In East Asian contexts, fairness may be interpreted relationally, focused on harmony and social roles, rather than procedurally. This suggests that AI systems evaluated as “fair” in a Western context may be perceived as unjust or inappropriate in other cultural settings. Similarly, privacy in the Western context is rooted in individual autonomy, rights, and personal boundaries. It could even be framed as a negative liberty, i.e., the right to be left alone. Thus, Western approaches to privacy in AI (like the GDPR) emphasize explicit consent, control over personal data, and transparency, often through individual-centric legal frameworks. In contrast, Confucian ethics views the self as relational and embedded in social roles—not as an isolated, autonomous unit. Privacy, therefore, is not an absolute right but a context-dependent value balanced against responsibilities to family, community, and social harmony. From a Confucian perspective, the ethical use of personal data might depend more on intent, relational trust, and social benefit than solely on individual consent or formal rights. If data use contributes to collective well-being or aligns with relational obligations, it may be seen as ethically acceptable—even in cases where Western frameworks would call it a privacy violation.

Consider elder care robots: a Confucian ethicist might ask whether such systems can genuinely reinforce familial bonds and facilitate emotionally meaningful interactions—such as encouraging virtual family visits or supporting the sharing of life stories. While Western ethical frameworks may also address these concerns, they often place greater emphasis on individual autonomy and the protection of privacy. In contrast, a Confucian approach would center on whether the AI fosters relational obligations and emotional reciprocity, thereby fulfilling moral duties that extend beyond the individual to the family and broader community. Read more »