The Bias in the Machine

Sidney Perkowitz in Nautilus:

In January, Robert Williams, an African-American man, was wrongfully arrested due to an inaccurate facial recognition algorithm, a computerized approach that analyzes human faces and identifies them by comparison to database images of known people. He was handcuffed and arrested in front of his family by Detroit police without being told why, then jailed overnight after the police took mugshots, fingerprints, and a DNA sample. The next day, detectives showed Williams a surveillance video image of an African-American man standing in a store that sells watches. It immediately became clear that the man was not Williams. Detailing his arrest in the Washington Post, Williams wrote, “The cops looked at each other. I heard one say that ‘the computer must have gotten it wrong.’” Williams learned that in investigating a theft from the store, a facial recognition system had tagged his driver’s license photo as matching the surveillance image. But the next steps, in which investigators first confirm the match and then seek more evidence for an arrest, were poorly done, and Williams was brought in. He had to spend 30 hours in jail and post a $1,000 bond before he was freed.

What makes the Williams arrest unique is that it received public attention, reports the American Civil Liberties Union. With over 4,000 police departments using facial recognition, it is virtually certain that other people have been wrongly implicated in crimes. In 2019, Amara Majeed, a Brown University student, was falsely identified by facial recognition as a suspect in a terrorist bombing in Sri Lanka. Sri Lankan police retracted the mistake, but not before Majeed received death threats. Even if a person goes free, his or her personal data remains listed among criminal records unless special steps are taken to expunge it.

More here.