Should AI Speak for the Dying?

by Muhammad Aurangzeb Ahmad

Everyone grieves in their own way. For me, it meant sifting through the tangible remnants of my father’s life—everything he had written or signed. I endeavored to collect every fragment of his writing, no matter how profound or mundane—be it verses from the Quran or a simple grocery list. I wanted each text to be a reminder that I could revisit in the future. Among this cache was the last document he ever signed: a do-not-resuscitate directive. I have often wondered how his wishes might have evolved over the course of his life—especially since he had a heart attack when I was only six years old. Had the decision rested upon us, his children, what path would we have chosen? I do not have definitive answers, but pondering this dilemma has raised questions that I now revisit, years later, in my work on improving ethical decision-making in end-of-life scenarios.

To illustrate, consider Alice, a fifty-year-old woman who has had an accident and is incapacitated. Her physicians need to decide whether or not to resuscitate her. Ideally there is an advance directive: a legal document that outlines her preferences for medical care in situations where she is unable to communicate her decisions due to incapacity. Alternatively, there may be a proxy directive, which usually designates another person, called a surrogate, to make medical decisions on the patient’s behalf.

Given the severity of these questions, would it not be helpful if there were a way to inform or augment such decisions with dispassionate agents that could weigh competing pieces of information without emotion getting in the way? Artificial Intelligence may help, or at least provide feedback that could serve as a moral crutch. The idea also has practical implications, as only 20–30 percent of the general American population has some sort of advance directive.

The idea behind AI surrogates is that, given sufficiently detailed data about a person, an AI can act as a surrogate if that person is incapacitated, making the decisions the person would have made were they not incapacitated. However, even setting aside the question of what data may be needed, data is not always a perfect reflection of reality. Ideally this data captures a person’s affordances, preferences, and values, with the assumption that they are implicit in the data. This may not always be true, as people evolve, change their preferences, and update their worldviews. Consider a scenario where an individual provided an advance directive in 2015, yet later became a Jehovah’s Witness—a faith that disavows medical procedures involving blood transfusions. Despite this profound shift in beliefs, the existing directive would still reflect past preferences rather than current convictions. The same dilemma extends to trained AI models, where it is often referred to as the problem of stale data. If conversational data from a patient is used to train an AI model, yet the patient’s beliefs evolve over time, data drift means that the AI’s knowledge becomes outdated, failing to reflect the individual’s current values and convictions.

Many of the challenges inherent in AI, such as bias, transparency, and explainability, are equally relevant in the development of AI surrogates. Read more »