by Muhammad Aurangzeb Ahmad

The French poet Jean de La Fontaine famously wrote that “a person often meets his destiny on the road he took to avoid it.” We find echoes of this phenomenon in global literature, whether it is Oedipus in the Greek myths, Rostam and Sohrab from Iran, or the story of Kamsa and Krishna in the Hindu tradition. We are now seeing elements of self-fulfilling prophecy in the world of predictive modeling. Consider the use of AI and machine learning models to predict the risk of mortality in an ICU setting. Some of these models have extremely high accuracy and precision; they do in milliseconds what it would take a team of clinicians hours to synthesize. The predictive power of such models needs to be contextualized, however. A mortality prediction model is trained on historical data, i.e., on what happened to patients who looked like this, had these labs, and were managed in this way. But the historical data does not merely record biology; it also records medicine as it was practiced, with all its established patterns, its habits, its inequities, and its mistakes.
Consider a well-known finding that has often been used as a cautionary tale: in a certain historical ICU dataset, patients with a diagnosis of asthma had lower predicted mortality than otherwise similar patients without it. This seems absurd; asthma is a serious respiratory condition. When researchers looked closely, they realized that the problem was not about asthma biologically but about care. Asthma patients were more likely to have their respiratory distress recognized early. They arrived with better documentation, better advocates, and better access to specialists who knew them. The asthma diagnosis was not a protective biological factor. It was a marker of a particular kind of patient, i.e., one who had navigated the healthcare system in a way that produced better documentation, faster escalation, and more attentive management.
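The asthma story can be reproduced in miniature. The sketch below is a hypothetical simulation, not the original study's data; every variable name, prevalence, and effect size is invented for illustration. It builds a cohort in which the asthma diagnosis improves the *care* a patient receives rather than the patient's biology. Any model trained on this data will learn the diagnosis as "protective," because the care effect is hidden inside the label:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical asthma prevalence.
asthma = rng.binomial(1, 0.2, n).astype(bool)

# Care quality responds to the diagnosis (faster recognition, escalation),
# not to biology: 80% of asthma patients get attentive care vs. 30% of others.
good_care = rng.random(n) < np.where(asthma, 0.8, 0.3)

# True mortality is driven by illness severity and care quality; the
# asthma label itself has no direct biological effect in this sketch.
severity = rng.normal(0, 1, n)
logit = -2.0 + severity - 1.5 * good_care
died = rng.random(n) < 1 / (1 + np.exp(-logit))

# Observed mortality by diagnosis: asthma patients die less often,
# purely because the label routed them to better care.
print(died[asthma].mean(), died[~asthma].mean())
```

A model that sees only the diagnosis and the outcome has no way to distinguish this care pathway from a biological effect; the spurious "protection" is a correct summary of the historical data.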
When a mortality prediction model learns from this data, it learns the pattern correctly: asthma is, statistically, associated with better outcomes. But if we deploy that model, it will assign lower mortality risk to asthma patients. The danger is that this may make clinicians less vigilant about them, which will over time close the gap the model detected, and possibly reverse it. This is not an isolated quirk. Researchers have formally characterized a class of prediction models that are harmful self-fulfilling prophecies: their deployment harms a group of patients, yet the worse outcomes of these patients do not diminish the measured accuracy of the model. The model remains “accurate” in the narrow sense of predicting what will happen, because it is now partly causing what happens, even as it causes harm.
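The mechanism by which a harmful model stays "accurate" can be seen in a toy simulation (hypothetical numbers throughout; this is a sketch of the mechanism, not any published analysis). A deployed model flags the riskiest third of patients, clinicians respond by managing flagged patients less aggressively, and the flagged group's mortality rises. Yet the flag now separates survivors from deaths even more cleanly than before, so the model looks better, not worse:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
severity = rng.normal(0, 1, n)
base_risk = 1 / (1 + np.exp(-(severity - 1.0)))  # untreated mortality risk

# The deployed model flags the riskiest third of patients.
flagged = base_risk > np.quantile(base_risk, 2 / 3)

# Before deployment: everyone gets aggressive care, halving their risk.
died_before = rng.random(n) < base_risk * 0.5

# After deployment: care is de-escalated for flagged patients, so their
# risk reverts to the untreated level.
risk_after = np.where(flagged, base_risk, base_risk * 0.5)
died_after = rng.random(n) < risk_after

# Flagged patients are harmed (they die more often), while the gap in
# mortality between flagged and unflagged patients widens: the model's
# prediction partly caused the outcome it "predicted."
print(died_before[flagged].mean(), died_after[flagged].mean())
```

The harm and the apparent accuracy come from the same place: the prediction changed the treatment, and the treatment changed the outcome in the direction of the prediction.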
There is a second problem we need to address: mortality prediction models do not predict mortality directly. They predict mortality as it was recorded in the data they were trained on. That is, they predict the outcomes that accrued to particular kinds of patients, treated the way those patients were treated, in the institutions where they were treated, at the historical moment when the data was collected. When the training data reflects a healthcare system that did not treat all patients equally, the model learns those inequalities as facts about the patients rather than facts about the system.
