Permeable: The Monetization of Human Fragility

by Muhammad Aurangzeb Ahmad

“Amongst Men” by Haroon Gunn-Salie

There is a particular moment of weakness that most of us recognize without needing to name it. You are lying in bed late at night. The light from the phone is brighter than anything else in the room. The day has thinned your patience as well as your judgment. You click on posts and stories that you would otherwise have ignored. You may even be tempted to buy what you do not need. In other words, your guard is down. You read for a few seconds and then doom-scroll to the next story, and then the next. You feel like going to sleep, but then you lie to yourself: one more scroll. Welcome to the world of the attention economy, a system in which human focus is treated as a scarce resource to be captured, measured, and monetized by digital platforms. The attention economy began with a relatively simple goal: capture and hold the gaze. Over time, that ambition evolved into focusing not just on what you like but also on when you are least able to resist. Timing, not just taste, has become the new frontier for these algorithms. While it is true that these platforms were informed by decades of research on human psychology, they do not need to understand us in any deep, human sense. What they do need is to predict when you are tired, lonely, bored, anxious, or depleted. It is precisely in these states of vulnerability that it is easiest to influence people and steal their attention.

Behavioral science has long recognized that cognition does not operate in isolation from what else is going on in one’s life. Decision fatigue is real. Cognitive depletion makes us more suggestible. Our skepticism varies with sleep, stress, and emotional load. What has changed is that our devices now emit a steady exhaust of behavioral signals that correlate with these states. The phone does not know that you are lonely. It does not need to. It only needs to register that your scrolling pattern has slowed, that you are lingering on certain kinds of content, that the hour is late, that your interactions look different from your baseline. Correlation at scale does not require understanding. The appeal to human vulnerability here is not a rhetorical device; it is anchored in behavioral research. Read more »

Monday, September 28, 2020

From Nudge to Hypernudge: Big Data and Human Autonomy

by Fabio Tollon

We produce data all the time. This is not something new. Whenever a human being performs an action in the presence of another, there is a sense in which some new data is created. We learn more about people as we spend more time with them. We can observe them and form models in our minds about why they do what they do, and the possible reasons they might have for doing so. With this data we might even gather new information about that person. Information, simply, is processed data, fit for use. With this information we might even start to predict their behaviour. On an inter-personal level this is hardly problematic. I might learn over time that my roommate really enjoys tea in the afternoon. Based on this data, I can predict that at three o’clock he will want tea, and I can make it for him. This satisfies his preferences and lets me off the hook for not doing the dishes.

The fact that we produce data, and can use it for our own purposes, is therefore not a novel or necessarily controversial claim. Digital technologies (such as Facebook, Google, etc.), however, complicate the simplistic model outlined above. These technologies are capable of tracking and storing our behaviour (to varying degrees of precision, but they are getting much better) and using this data to influence our decisions. “Big Data” refers to this constellation of properties: it is the process of taking massive amounts of data and using computational power to extract meaningful patterns. Significantly, what differentiates Big Data from traditional data analysis is that the patterns extracted would have remained opaque without the resources provided by electronically powered systems. Big Data could therefore present a serious challenge to human decision-making. If the patterns extracted from the data we are producing are used in malicious ways, this could result in a decreased capacity for us to exercise our individual autonomy. But how might such data be used to influence our behaviour at all? To get a handle on this, we first need to understand the common cognitive biases and heuristics that we as humans display in a variety of informational contexts. Read more »

Monday, April 13, 2015

Do we really value thinking for oneself?

by Emrys Westacott

Why do we choose to do what we think is right even when it goes against our inclinations or interests? This is one of the oldest and toughest questions in moral psychology. Knowing the good clearly does not entail that we will do the good. So what carries us from the former to the latter?

One philosopher who wrestled with this question long and hard was Immanuel Kant (1724-1804). He considered it profoundly mysterious that we often choose to override our interests or desires and do our duty purely because we consider ourselves duty-bound. (Nietzsche expresses a similar sense of wonder when he asks, “How did nature manage to breed an animal with the right to make promises?”) Kant’s explanation is that we are moved by what he calls moral feeling.[1] And he identifies two main kinds of moral feeling: respect for morality, and disgust for what is contrary to morality. Discussing these in his lectures on ethics, he says that you cannot make yourself or anyone else have these feelings. But you can inculcate them, or something that will serve the same purpose, in a child through proper training. The following passage is especially noteworthy:

We should instill an immediate abhorrence for an action from early youth onwards . . . we must represent an action, not as forbidden or harmful, but as inwardly abhorrent in itself. For example, a child who tells lies must not be punished, but shamed; we must cultivate an abhorrence, a contempt for this act, and by frequent repetition we can arouse in him such an abhorrence of the vice as becomes a habitus with him.[2]

I imagine this bit of moral pedagogy will strike many readers as morally suspect. But why?

Read more »