From Nudge to Hypernudge: Big Data and Human Autonomy

by Fabio Tollon

We produce data all the time. This is not something new. Whenever a human being performs an action in the presence of another, there is a sense in which some new data is created. We learn more about people as we spend more time with them. We can observe them, and form models in our minds about why they do what they do, and the possible reasons they might have for doing so. With this data we might even gather new information about that person. Information, simply put, is processed data, fit for use. With this information we might even start to predict their behaviour. On an interpersonal level this is hardly problematic. I might learn over time that my roommate really enjoys tea in the afternoon. Based on this data, I can predict that at three o’clock he will want tea, and I can make it for him. This satisfies his preferences and lets me off the hook for not doing the dishes.

The fact that we produce data, and can use it for our own purposes, is therefore not a novel or necessarily controversial claim. Digital technologies (such as Facebook, Google, etc.), however, complicate the simple model outlined above. These technologies are capable of tracking and storing our behaviour (with varying degrees of precision, though they are getting much better) and of using this data to influence our decisions. “Big Data” refers to this constellation of properties: it is the process of taking massive amounts of data and using computational power to extract meaningful patterns from it. Significantly, what differentiates Big Data from traditional data analysis is that the patterns extracted would have remained opaque without the resources provided by electronically powered systems. Big Data could therefore present a serious challenge to human decision-making. If the patterns extracted from the data we produce are used in malicious ways, this could result in a decreased capacity for us to exercise our individual autonomy. But how might such data be used to influence our behaviour at all? To get a handle on this, we first need to understand the common cognitive biases and heuristics that we as humans display in a variety of informational contexts.



Monday, April 13, 2015

Do we really value thinking for oneself?

by Emrys Westacott

Why do we choose to do what we think is right even when it goes against our inclinations or interests? This is one of the oldest and toughest questions in moral psychology. Knowing the good clearly does not entail that we will do the good. So what carries us from the former to the latter?

One philosopher who wrestled with this question long and hard was Immanuel Kant (1724-1804). He considered it profoundly mysterious that we often choose to override our interests or desires and do our duty purely because we consider ourselves duty-bound. (Nietzsche expresses a similar sense of wonder when he asks, “How did nature manage to breed an animal with the right to make promises?”) Kant's explanation is that we are moved by what he calls moral feeling.[1] And he identifies two main kinds of moral feeling: respect for morality, and disgust for what is contrary to morality. Discussing these in his lectures on ethics, he says that you cannot make yourself or anyone else have these feelings. But you can inculcate them, or something that will serve the same purpose, in a child through proper training. The following passage is especially noteworthy:

We should instill an immediate abhorrence for an action from early youth onwards . . . we must represent an action, not as forbidden or harmful, but as inwardly abhorrent in itself. For example, a child who tells lies must not be punished, but shamed; we must cultivate an abhorrence, a contempt for this act, and by frequent repetition we can arouse in him such an abhorrence of the vice as becomes a habitus with him.[2]

I imagine this bit of moral pedagogy will strike many readers as morally suspect. But why?
