danah boyd on Medium:
Consider, for example, what’s happening with policing practices, especially as computational systems let precincts distribute their officers “fairly.” In many jurisdictions, more officers are placed into areas deemed “high risk,” and at a societal level that allocation seems appropriate. Yet few people think about the incentive structures of policing, especially in communities where officers are expected to clear a certain number of warrants and make a certain number of arrests per month. When officers are stationed in algorithmically determined “high risk” communities, they make their arrests in those communities, thereby reinforcing the algorithms’ assumptions.
Addressing modern-day equivalents of redlining isn’t enough, because profiling is no longer only geographic. Statistically speaking, if your family members are engaged in criminal activity, there’s a higher probability that you will be too. Is it fair to profile and target individuals based on their networks if doing so makes law enforcement more efficient?
Increasingly, tech folks are participating in the instantiation of fairness in our society. Not only do they produce the algorithms that score people and unevenly distribute scarce resources, but through the fetishization of “personalization” and the increasingly common practice of “curation,” they act, in effect, as arbiters of fairness.
The most important thing for all of us to recognize is that how fairness is instantiated significantly shapes the very architecture of our society. I regularly come back to a quote from Alistair Croll:
Our social safety net is woven on uncertainty. We have welfare, insurance, and other institutions precisely because we can’t tell what’s going to happen — so we amortize that risk across shared resources. The better we are at predicting the future, the less we’ll be willing to share our fates with others. And the more those predictions look like facts, the more justice looks like thoughtcrime.
Read the rest here.
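The policing example describes a runaway feedback loop, and a tiny simulation makes the mechanism concrete. The sketch below is my own illustration, not from boyd’s essay: two districts have identical true crime rates, a hypothetical “risk” model sends most patrols wherever past arrest data points, and arrests can only be recorded where patrols actually go. Every number here is made up.

```python
import random

random.seed(0)  # reproducible run

# Two districts with the SAME underlying crime rate; district 0 merely
# starts with one extra recorded arrest. All numbers are illustrative.
TRUE_RATE = [0.10, 0.10]  # chance that any single patrol yields an arrest
arrests = [11, 10]        # historical arrest counts fed to the "risk" model

for month in range(24):
    # The "risk" model sends patrols where past data says the crime is:
    # whichever district has more recorded arrests gets 90 of 100 officers.
    hot = 0 if arrests[0] >= arrests[1] else 1
    patrols = [10, 10]
    patrols[hot] = 90

    # Arrests can only be observed where officers patrol, so the data
    # ends up recording police presence, not the underlying crime rate.
    for d in (0, 1):
        arrests[d] += sum(random.random() < TRUE_RATE[d] for _ in range(patrols[d]))

# District 0's recorded "risk" grows roughly nine times faster than
# district 1's, even though the true rates never differed.
print(arrests)
```

Each month the “hot” district generates about nine times the arrest data of the other, so the model’s next allocation looks even more justified than the last. That is the reinforcement boyd is pointing at: the data confirms the deployment, not the crime.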