Breaking into the black box of artificial intelligence

Neil Savage in Nature:

In February 2020, with COVID-19 spreading rapidly around the globe and antigen tests hard to come by, some physicians turned to artificial intelligence (AI) to try to diagnose cases. Some researchers tasked deep neural networks — complex systems that are adept at finding subtle patterns in images — with looking at X-rays and chest computed tomography (CT) scans to quickly distinguish between people with COVID-based pneumonia and those without. “Early in the COVID-19 pandemic, there was a race to build tools, especially AI tools, to help out,” says Alex DeGrave, a computer engineer at the University of Washington in Seattle. But in that rush, researchers did not notice that many of the AI models had decided to take a few shortcuts.

The AI systems honed their skills by analysing X-rays that had been labelled as either COVID-positive or COVID-negative. They would then use the differences they had spotted between the images to make inferences about new, unlabelled X-rays. But there was a problem. “There wasn’t a lot of data available at the time,” says DeGrave.
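The failure mode described here is often called shortcut learning: with scarce data, a model can latch onto a spurious cue that happens to correlate with the labels in training but has nothing to do with the disease. The toy sketch below (purely illustrative; the feature names and numbers are invented, not from the article) shows a trivial one-rule classifier picking a spurious "marker" feature over the true but noisy "pathology" signal, then failing when the correlation breaks on new data:

```python
import random

random.seed(0)

def make_scan(label, marker_follows_label):
    # 'pathology' is the true (noisy) disease signal; 'marker' stands in for
    # a spurious cue, e.g. a hospital-specific artifact in the training X-rays.
    pathology = label if random.random() < 0.9 else 1 - label
    marker = label if marker_follows_label else random.randint(0, 1)
    return {"pathology": pathology, "marker": marker}, label

# In training, the spurious marker perfectly tracks the label;
# at deployment, that correlation is gone.
train = [make_scan(random.randint(0, 1), True) for _ in range(1000)]
test = [make_scan(random.randint(0, 1), False) for _ in range(1000)]

def accuracy(feature, data):
    return sum(x[feature] == y for x, y in data) / len(data)

# A one-rule "model": keep whichever single feature best fits the training set.
best = max(["pathology", "marker"], key=lambda f: accuracy(f, train))

print(best)                   # the model picks the shortcut, not the pathology
print(accuracy(best, train))  # looks flawless on training data
print(accuracy(best, test))   # collapses to chance once the shortcut breaks
```

The point of the sketch: training accuracy alone cannot distinguish a model that has learned the disease from one that has learned the dataset, which is exactly why the shortcuts went unnoticed in the rush.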

More here.