On Teaching Machines to Predict Death

by Muhammad Aurangzeb Ahmad

Source: Buddhist Library

The French poet Jean de La Fontaine famously wrote that “a person often meets his destiny on the road he took to avoid it.” We find echoes of this phenomenon throughout global literature, whether it is Oedipus in the Greek myths, Rostam and Sohrab from Iran, or the story of Kamsa and Krishna in the Hindu tradition. Elements of the self-fulfilling prophecy are now appearing in the world of predictive modeling. Consider the use of AI and machine learning models to predict the risk of mortality in an ICU setting. Some of these models have extremely high accuracy and precision; they do in milliseconds what it would take a team of clinicians hours to synthesize. The predictive power of such models needs to be contextualized, however. A mortality prediction model is trained on historical data, i.e., on what happened to patients who looked like this, had these labs, and were managed in this way. But historical data does not merely record biology; it also records medicine as it was practiced, with all its established patterns, its habits, its inequities, and its mistakes.

Consider a well-known finding that is often used as a cautionary tale: in a certain historical ICU dataset, patients with a diagnosis of asthma had lower predicted mortality than otherwise similar patients without it. This seems absurd; asthma is a serious respiratory condition. When researchers looked closely, they realized that the finding was not about asthma’s biology but about care. Asthma patients were more likely to have their respiratory distress recognized early. They arrived with better documentation, better advocates, and better access to specialists who knew them. The asthma diagnosis was not a protective biological factor. It was a marker of a particular kind of patient: one who had navigated the healthcare system in a way that produced better documentation, faster escalation, and more attentive management.
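To make the mechanism concrete, here is a minimal sketch in Python using entirely synthetic data. The variable names, effect sizes, and care-pattern probabilities are all invented for illustration, not taken from the study: the point is only that a condition which raises true severity can still acquire a “protective” coefficient when it also triggers earlier escalation of care and the model never sees the escalation.

```python
# Synthetic illustration of a confounded "protective" diagnosis.
# All numbers here are invented for the sketch, not drawn from any real dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

asthma = rng.binomial(1, 0.15, n)                    # 15% of patients carry the diagnosis
severity = rng.normal(0.0, 1.0, n) + 0.5 * asthma    # asthma raises true severity

# The care pattern from the anecdote: asthma patients are escalated earlier.
escalated = rng.binomial(1, np.where(asthma == 1, 0.9, 0.4))

# True outcome: severity raises mortality, early escalation lowers it.
logit = -2.0 + 1.0 * severity - 1.5 * escalated
died = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# A model that sees diagnosis and severity but not the care pattern
# absorbs the benefit of escalation into the asthma coefficient.
X = np.column_stack([asthma, severity])
model = LogisticRegression().fit(X, died)
print("asthma coefficient:", round(model.coef_[0, 0], 2))  # negative: looks protective
```

The coefficient comes out negative even though asthma strictly increases severity in this simulation; the “protection” lives entirely in the unobserved escalation of care.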

When a mortality prediction model learns from this data, it learns the pattern correctly: asthma is, statistically, associated with better outcomes. But if we deploy that model, it will assign lower mortality risk to asthma patients. The danger is that this may make clinicians less vigilant about them, which would over time close the gap the model detected, and possibly reverse it. This is not an isolated quirk. Researchers have formally characterized a class of prediction models that are harmful self-fulfilling prophecies: their deployment harms a group of patients, yet the worse outcomes of those patients do not diminish the measured accuracy of the model. The model remains “accurate” in the narrow sense of predicting what will happen, because it is now partly causing what will happen, even as it causes harm.
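Continuing the hypothetical sketch above, we can close the loop. Suppose vigilance now follows the model, so patients it flags as low risk are escalated less often; the specific probabilities below are again invented for illustration. The qualitative result is the point: the asthma group’s former advantage erodes and reverses, while the model’s ranking of patients still looks far better than chance.

```python
# Deploy the model on a new cohort and let care follow its risk scores.
from sklearn.metrics import roc_auc_score

asthma2 = rng.binomial(1, 0.15, n)
severity2 = rng.normal(0.0, 1.0, n) + 0.5 * asthma2
pred_risk = model.predict_proba(np.column_stack([asthma2, severity2]))[:, 1]

# Feedback: patients flagged as low risk are now escalated less often.
low_risk = pred_risk < np.median(pred_risk)
escalated2 = rng.binomial(1, np.where(low_risk, 0.3, 0.8))

logit2 = -2.0 + 1.0 * severity2 - 1.5 * escalated2
died2 = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit2)))

# The gap the model learned has reversed, yet discrimination stays well above chance.
print("asthma mortality:    ", round(died2[asthma2 == 1].mean(), 3))
print("non-asthma mortality:", round(died2[asthma2 == 0].mean(), 3))
print("post-deployment AUROC:", round(roc_auc_score(died2, pred_risk), 3))
```

In this toy world the asthma patients now die more often than comparable patients, yet the model’s scores still discriminate well on the very outcomes they helped shape, which is exactly the trap: the harm is invisible to the accuracy metric.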

There is a second problem that we need to address: mortality prediction models do not predict mortality directly. They predict mortality as it was recorded in the data they were trained on. That is, they predict the outcomes that accrued to particular kinds of patients, treated in particular ways, in particular institutions, at the historical moment when the data was collected. When the training data reflects a healthcare system that did not treat all patients equally, the model learns those inequalities as facts about the patients rather than facts about the system.
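A hedged extension of the same toy sketch makes this sharper. Suppose a patient attribute, called `group` here, has no effect at all in the true outcome equation, but historically one group received less aggressive care; the names and numbers are invented for illustration.

```python
# A group label with zero biological effect, but unequal historical care.
group = rng.binomial(1, 0.5, n)                          # purely social attribute
sev = rng.normal(0.0, 1.0, n)                            # same severity in both groups
care = rng.binomial(1, np.where(group == 1, 0.3, 0.7))   # group 1 got less care

lg = -2.0 + 1.0 * sev - 1.5 * care                       # 'group' appears nowhere here
death = rng.binomial(1, 1.0 / (1.0 + np.exp(-lg)))

m = LogisticRegression().fit(np.column_stack([group, sev]), death)
print("group coefficient:", round(m.coef_[0, 0], 2))     # positive: inequity read as risk
```

Nothing about the patients differs between the two groups in this simulation; the positive coefficient is a recording of the system’s behavior, presented as if it were a property of the patient.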

Monday, October 28, 2024

What Would An AI Treaty Between Countries Look Like?

by Ashutosh Jogalekar

A stamp commemorating the Atoms for Peace program inaugurated by President Dwight Eisenhower. An AI For Peace program awaits (Image credit: International Peace Institute)

The visionary physicist and statesman Niels Bohr once succinctly distilled the essence of science as “the gradual removal of prejudices.” Among these prejudices, few are more prominent than the belief that nation-states can strengthen their security by keeping critical, futuristic technology secret. This belief was quickly dispelled in the Cold War, as nine states with competent scientists and engineers and adequate resources acquired nuclear weapons, leading to the proliferation that Bohr, Robert Oppenheimer, Leo Szilard, and other far-seeing scientists had warned political leaders would ensue if the United States and other countries insisted on security through secrecy. Secrecy, instead of keeping destructive nuclear technology confined, led to mutual distrust and an arms race that, octopus-like, enveloped the globe in a suicide belt of bombs which at its peak numbered almost sixty thousand.

But if not secrecy, then how would countries achieve the security they craved? The answer, as it counterintuitively turned out, was by making the world a more open place: by allowing inspections and crafting treaties that reduced the threat of nuclear war. Through hard-won wisdom and sustained action, politicians, military personnel, ordinary citizens, and activists realized that the way to safety and security was through mutual conversation and cooperation. That international cooperation, most notably between the United States and the Soviet Union, achieved the extraordinary reduction of the global nuclear stockpile from tens of thousands of weapons to about twelve thousand, of which the United States and Russia still account for more than ninety percent.

A similar future of promise on one hand and destruction on the other awaits us with the recent development of another groundbreaking technology: artificial intelligence. Since 2022, AI has shown striking progress, especially through large language models (LLMs), which have demonstrated the ability to distill large volumes of knowledge, to reason over it, and to interact in natural language. Fueled by mountains of computing power, these and other AI models are raising serious questions about the disruption of entire industries, from scientific research to the creative arts. More troubling is the breathless interest from governments across the world in harnessing AI for military applications, from smarter drone targeting to improved surveillance to better supply-chain optimization for military hardware.

Commentators fear that massive interest in AI from the Chinese and American governments in particular, shored up by unprecedented defense budgets and geopolitical gamesmanship, could lead to a new AI arms race akin to the nuclear arms race. Like the nuclear arms race, it would involve the steady escalation of each country’s AI capabilities for offense and defense until the world reaches an unstable quasi-equilibrium, in which each country could erode or take out critical parts of its adversary’s infrastructure while risking its own.