by Tim Sommers
My wife Stacey is irritated with the way Netflix’s machine learning algorithm makes recommendations. “I hate it,” she says. “Everything it recommends, I want to watch.”
On the other hand, I am quite happy with Spotify’s AI. Not only does it do pretty well at introducing me to bands I like, but the longer I stay with it, the more obscure the bands it recommends become. So, for example, recently it took me from Charly Bliss (76k followers), to Lisa Prank (985 followers), to Shari Elf (33 followers). I believe that I have a greater appreciation for bands that are more obscure because I am so cool. Others speculate that I follow more obscure bands because I think it makes me cool while, in fact, it shows I am actually uncool. Whatever it is, Spotify tracks it. The important bit is that it doesn’t just take me to more and more obscure bands. That would be too easy. It takes me to more and more obscure bands that I like. Hence, it successfully tracks my coolness/uncoolness.
The proliferation of AI “recommenders” seems relatively innocuous to me – although not to everyone. Some people worry about losing track of the line between simply liking what the AI recommends to them and adapting themselves to like what the AI says they should like. But that just means the AI is part of their circle of friends now, right? It’s the proliferation of AIs into more fraught kinds of decision-making that I worry about.
AIs are used to decide who gets a job interview, who gets granted parole, who gets most heavily policed, and who gets a home loan. Yet there’s evidence that these AIs are systematically biased. For example, there is evidence that a widely-used system designed to predict whether offenders are likely to reoffend or commit future acts of violence – and, hence, to set bail, determine sentences, and grant parole – exhibits racial bias. So, too, do several AIs designed to predict crime ahead of time to guide policing (a pretty Philip K. Dickian idea already). Amazon discovered, for themselves, that their hiring algorithm was sexist. Sexist, racist, anti-LGBTQA+, anti-Semitic, and anti-Muslim language is endemic among large language models.