by Muhammad Aurangzeb Ahmad

The French poet Jean de La Fontaine has a famous quote: “A person often meets his destiny on the road he took to avoid it.” We find echoes of this phenomenon in global literature, whether it is Oedipus in the Greek myths, Rostam and Sohrab from Iran, or the story of Kamsa and Krishna in the Hindu tradition. There are elements of self-fulfilling prophecy in the world of predictive modeling as well. Consider the use of AI and machine learning models to predict the risk of mortality in an ICU setting. Some of these models have extremely high accuracy and precision. They do in milliseconds what it would take a team of clinicians hours to synthesize. The predictive power of such models needs to be contextualized, however: a mortality prediction model is trained on historical data, i.e., on what happened to patients who looked like this, had these labs, and were managed in this way. But the historical data does not merely record biology; it also records medicine as it was practiced, with all its established patterns, its habits, its inequities, and its mistakes.
Consider a well-known finding that has often been used as a cautionary tale: in a certain historical ICU dataset, patients with a diagnosis of asthma had lower predicted mortality than otherwise similar patients without it. This seems absurd; asthma is a serious respiratory condition. When researchers looked closely, they realized that the problem was not about asthma biologically but about care. Asthma patients were more likely to have their respiratory distress recognized early. They arrived with better documentation, better advocates, and better access to specialists who knew them. The asthma diagnosis was not a protective biological factor. It was a marker of a particular kind of patient, i.e., one who had navigated the healthcare system in a way that produced better documentation, faster escalation, and more attentive management.
When a mortality prediction model learns from this data, it learns the pattern correctly: asthma is, statistically, associated with better outcomes. However, if we deploy that model, it will assign lower mortality risk to asthma patients. The danger is that this may cause clinicians to be less vigilant about them, which may, over time, close the gap that the model detected, and possibly reverse it. This is not an isolated quirk. Researchers have formally characterized a class of prediction models that are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcomes of these patients do not diminish the measured accuracy of the model. The model remains “accurate” in the narrow sense of predicting what will happen, because it is now partly causing what will happen, even as it causes harm.
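The dynamic can be made concrete with a toy simulation. The numbers below are entirely made up for illustration, not drawn from any real dataset: a hypothetical model flags one group of patients as high risk, deployment of that prediction leads to less aggressive care for that group, their mortality rises toward the prediction, and the model's measured accuracy goes up even as the harm accrues.

```python
import random

random.seed(0)

def simulate(n=100_000, deployed=False):
    """Toy model of a harmful self-fulfilling prophecy.

    Two hypothetical patient groups: group A's baseline mortality
    is 10%, group B's is 30%. The (hypothetical) model predicts
    death for every group B patient. Once deployed, the high-risk
    label leads to less aggressive care for group B, raising their
    mortality to 40% -- closer to the model's prediction.
    """
    correct = 0
    deaths_b = 0
    n_b = 0
    for _ in range(n):
        group = random.choice("AB")
        predicted_death = (group == "B")   # model: B is high risk
        p_death = 0.10 if group == "A" else 0.30
        if deployed and predicted_death:
            p_death = 0.40                 # the prediction shapes care
        died = random.random() < p_death
        correct += (predicted_death == died)
        if group == "B":
            n_b += 1
            deaths_b += died
    return correct / n, deaths_b / n_b

# Accuracy and group B mortality, before and after deployment.
acc_before, mort_b_before = simulate(deployed=False)
acc_after, mort_b_after = simulate(deployed=True)
```

In this sketch, group B's mortality rises by roughly ten percentage points after deployment, yet the model's overall accuracy also rises, since more of its "death" predictions now come true. Measured accuracy alone cannot distinguish a model that foresees harm from one that produces it.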
There is a second problem that we need to address: Mortality prediction models do not predict mortality directly. They predict mortality as it was recorded in the data they were trained on. This means that they predict the outcomes that accrued to the kinds of patients who were treated the way those patients were treated, in the institutions where those patients were treated, at the historical moment when the data was collected. When the training data reflects a healthcare system that did not treat all patients equally, the model learns those inequalities as facts about the patients rather than facts about the system.


There has long been a temptation in science to imagine one system that can explain everything. For a while, that dream belonged to physics, whose practitioners, armed with a handful of equations, could describe the orbits of planets and the spin of electrons. In recent years, the torch has been seized by artificial intelligence. With enough data, we are told, the machine will learn the world. If this sounds like a passing of the crown, it has also become, in a curious way, a rivalry. Like the cinematic conflict between vampires and werewolves in the Underworld franchise, AI and physics have been cast as two immortal powers fighting for dominion over knowledge. AI enthusiasts claim that the laws of nature will simply fall out of sufficiently large data sets. Physicists counter that data without principle is merely glorified curve-fitting.
In recent years, chatbots powered by large language models have been slowly moving to the pulpit. Tools like 












Everyone grieves in their own way. For me, it meant sifting through the tangible remnants of my father’s life—everything he had written or signed. I endeavored to collect every fragment of his writing, no matter how profound or mundane, be it verses from the Quran or a simple grocery list. I wanted each text to be a reminder that I could revisit in the future. Among this cache was the last document he ever signed: a do-not-resuscitate directive. I have often wondered how his wishes might have evolved over the course of his life—especially when he had a heart attack when I was only six years old. Had the decision rested upon us, his children, what path would we have chosen? I do not have definitive answers, but pondering this dilemma has given me questions that I now have to revisit, years later, in the form of improving ethical decision-making in end-of-life scenarios. To illustrate, consider Alice, a fifty-year-old woman who has had an accident and is incapacitated. The physicians need to decide whether to resuscitate her or not. Ideally there is an 
