We often make bad choices. We eat sugary foods too often, we don’t save enough for retirement, and we don’t get enough exercise. Helpfully, the modern world presents us with a plethora of ways to overcome these weaknesses of the will. We can use calorie-tracking applications to monitor our sugar intake, we can have funds automatically transferred from our accounts into retirement schemes, and we can use our phones and smartwatches to make us feel bad if we haven’t exercised in a while. All of this might seem innocuous and relatively unproblematic: what is wrong with using technology to try to be a better, healthier version of yourself?
Well, let’s first take a step back. In all of these cases, what are we trying to achieve? Intuitively, the story might go something like this: we want to be better and healthier, and we know we often struggle to do so. We are weak when faced with the Snickers bar, and we can’t be bothered to exercise when we could be bingeing The Office for the third time this month. What seems to be happening is that our desire to do what, all things considered, we think is best is rendered moot by the temptation in front of us. So we try to introduce changes to our behaviour that might help us overcome these temptations. We might always eat before going shopping, reducing the chances that we are tempted by chocolate, or we could exercise first thing in the morning, before our brains have time to process what a godawful idea that might be. These solutions are based on the idea that we sometimes, predictably, act against our own self-interest. That is to say, we are sometimes irrational, and these “solutions” are ways of getting our present selves to do what we determine is in the best interests of our future selves. Key to this, though, is that we as individuals get to determine the scope and content of these interventions. What happens when third parties, such as governments and corporations, try to do something similar?
Attempts at this kind of intervention are often collected under the label “nudging”, a term used to pick out a particular kind of behavioural modification program. The term was popularized by the now-famous book Nudge, in which Thaler and Sunstein argue in favour of “libertarian paternalism”.
We produce data all the time. This is not something new. Whenever a human being performs an action in the presence of another, there is a sense in which some new data is created. We learn more about people as we spend more time with them. We observe them and form models in our minds about why they do what they do, and the possible reasons they might have for doing so. From this data we might even gather new information about that person. Information, simply put, is processed data, fit for use. With this information we might even start to predict their behaviour. On an interpersonal level this is hardly problematic. I might learn over time that my roommate really enjoys tea in the afternoon. Based on this data, I can predict that at three o’clock he will want tea, and I can make it for him. This satisfies his preferences and lets me off the hook for not doing the dishes.
The fact that we produce data, and can use it for our own purposes, is therefore not a novel or especially controversial claim. Digital technologies (such as Facebook, Google, etc.), however, complicate the simplistic model outlined above. These technologies are capable of tracking and storing our behaviour (to varying degrees of precision, though they are getting much better) and of using this data to influence our decisions. “Big Data” refers to this constellation of properties: it is the process of taking massive amounts of data and using computational power to extract meaningful patterns. Significantly, what differentiates Big Data from traditional data analysis is that the patterns extracted would have remained opaque without the resources provided by computational systems. Big Data could therefore present a serious challenge to human decision-making. If the patterns extracted from the data we produce are used in malicious ways, the result could be a diminished capacity to exercise our individual autonomy. But how might such data be used to influence our behaviour at all? To get a handle on this, we first need to understand the common cognitive biases and heuristics that we as humans display in a variety of informational contexts.
In Islamic theology it is stated that for each human being God has appointed two angels (Kiraman Katibin) who record the good and the bad deeds that a person commits over the course of a lifetime. Regardless of one’s belief or disbelief in this theology, a world where our deeds are recorded is in our near future. Instead of angels, algorithms will process our deeds, and it will not be God doing the judging but corporations and governments. Welcome to the strange world of scoring citizens. This phenomenon is not something out of a science-fiction dystopia; some governments have already laid the groundwork to make it a reality, the most ambitious among them being China. The Chinese government has already instituted a plan whereby data from a person’s credit history, publicly available information and, most importantly, their online activities will be aggregated to form the basis of a social scoring system.
Credit scoring systems like FICO, VantageScore and CE Score have been around for a while. Such systems were initially meant as just another aid to help companies make financial decisions about their customers. However, these credit scores have evolved into definitive authorities on the creditworthiness of a person, to the extent that human involvement in decision-making has become minimal. The same fate may befall social scoring systems, with the difference that anything you post on online social networks like Facebook or microblogging websites like Twitter, and your search and browsing behaviour on Google (or their Chinese equivalents RenRen, Sina Weibo and Baidu, respectively), is being recorded and can potentially be fed into a social scoring model. As an example of how things can go wrong, consider the case of the world’s most populous country: China. There the government has mandated that the social scoring system will become mandatory by 2020. The Chinese government has also blocked access to non-Chinese social networks, which leaves just two companies, Alibaba and Tencent, running virtually all the social networks in the country. This makes it all the more intriguing that the Social Credit Scoring system in China is being built with the help of these two companies. To this end, the Chinese government has given the green light to eight companies to run their own pilots of citizen scoring systems.