Allison Parshall in Quanta:
Machine learning models are incredibly powerful tools. They extract deeply hidden patterns in large data sets that our limited human brains can’t parse. These complex algorithms, then, need to be incomprehensible “black boxes,” because a model that we could crack open and understand would be useless. Right?
That’s all wrong, at least according to Cynthia Rudin, who studies interpretable machine learning at Duke University. She’s spent much of her career pushing for transparent but still accurate models to replace the black boxes favored by her field.
The stakes are high. These opaque models are becoming more common in situations where their decisions have real consequences, like whether to biopsy a potential tumor, grant bail, or approve a loan application. Today, at least 581 AI models involved in medical decisions have received authorization from the Food and Drug Administration. Nearly 400 of them are aimed at helping radiologists detect abnormalities in medical imaging, like malignant tumors or signs of a stroke.
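As a rough illustration of the contrast Rudin draws, here is a minimal sketch (not from the article) that fits both a shallow decision tree, whose rules can be read off directly, and a boosted tree ensemble on scikit-learn's built-in breast cancer dataset, then compares their test accuracy. The dataset and model choices are illustrative assumptions, not anything Parshall or Rudin cite; on many tabular problems like this one, the accuracy gap between the two tends to be small.

```python
# Minimal sketch: an interpretable model vs. a black-box ensemble.
# Dataset and models are illustrative choices, not from the Quanta article.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Interpretable model: a shallow tree whose full decision logic is readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Black-box model: a boosted ensemble of many trees, accurate but opaque.
ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("shallow tree accuracy:   ", tree.score(X_test, y_test))
print("boosted ensemble accuracy:", ensemble.score(X_test, y_test))

# The interpretable model's entire rule set fits on one screen.
print(export_text(tree, feature_names=list(X.columns)))
```

The point of the sketch is only that the transparent model's reasoning can be printed and audited line by line, which is exactly what the ensemble does not allow.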
More here.