by Michael Klenk
While some people voluntarily act out their private lives on the public stage, the vast majority try to maintain some privacy by drawing a firm distinction between their private and public lives. At the same time, almost all of us use connected technologies: mobile phones, wearable devices, social media, search engines, web shops. By using them, we leave a conspicuous trail of data, and to the trained eye, innocuous traces can tell exciting stories – no less (and perhaps more) revealing than the party pictures some voluntarily share on Facebook.
For example, the New York Times recently used readily available phone-usage data to trace, among other things, someone who frequently visited roadside motels at night for an hour at a time. The Times could easily have revealed that person’s name and determined what exactly was going on at the motels. This illustrates that data scientists – modern-day trappers, if you will – need less and less effort to read intimate stories off seemingly innocuous traces. Almost everyone is thus opening up about their private lives in the public domain.
In consequence, privacy may well be dead – killed by the ubiquity and necessity of connected technologies – at least on an old conception of privacy that depends on a distinction between the public and the private sphere. That distinction may no longer be borne out by the world, and consequently, we might be looking for privacy in vain. And yet, most people do not conclude that privacy is dead – instead, they offer new interpretations of the concept of privacy in response to the new realities created by connected technologies.
Inside and outside academia, people are trying to re-think normative concepts like privacy, unconsciously driven by technology and explicitly aiming to make sense of it. They propose changes to existing concepts (e.g. a concept of privacy that does not depend on a sharp private/public distinction) or invent new ones to make sense of the new realities created by connected technologies (e.g. the concept of a ‘filter bubble’). Technology may thus be disruptive in a sense that often goes unnoticed: it disrupts our conceptual apparatus.
Technology’s disruptive effect on privacy is but one example. The disruption also extends to normative concepts such as autonomy and manipulation. To illustrate: philosophers often think of autonomy as, very roughly, the absence of external influence, yet we may be hard-pressed to find anyone autonomous in that sense, given the nearly continuous and often subliminal force exerted on us by connected technologies. Similarly, philosophers conceptualise manipulation as the intentional influencing of people; that conceptualisation comes under pressure when we try to make sense of the intuition that machines can manipulate us, too, despite lacking intentions. Connected technologies therefore put pressure on normative concepts like privacy, autonomy, and manipulation: they change the world so that our old concepts no longer apply, and they push us to come up with new or revised concepts, creating conceptual confusion.
If our conceptual apparatus is out of whack, we face severe consequences. Changes in meaning confuse people, and discussions fail when the same term means different things to different discussants – which happens when concepts do not change uniformly and simultaneously. Then there is fear, nourished by confusion: people fear things they do not understand, and they might resist new technologies whose nature and effects are difficult or impossible to grasp. New conceptual inventions can also have political and social clout, as they shape research agendas and technological development; choosing concepts thus becomes a social and political act, too. And even absent direct adverse effects of conceptual reform, using a different concept might have reaped benefits that we are now forgoing. For example, conceptualising product endorsements on social media as harmful manipulation may make people abstain from using social media that way, which may increase digital well-being. Therefore, we need to be able to manage and adjust our normative concepts to avoid the pitfalls and reap the benefits of good concepts.
However, we are currently unable to deal with conceptual disruption because of philosophical uncertainty about how to legitimately change our normative concepts. We cannot just throw arbitrary terms around, hoping that they express some valuable concept and catch on in society. Instead, what we should be doing, and what most people seem to be trying, is to propose valuable concepts. But that task is obstructed by the abstract value problem: we have only a rudimentary idea of the value of concepts. Concepts are representational devices and therefore serve epistemic goals. So far, so good. But that answer is at once too vague and too narrow. There is a plenitude of epistemic goals – from truth to understanding – that concepts may serve, and, as we have seen, they can serve non-epistemic purposes, too (e.g. pursuing a social aim). There is a deep philosophical puzzle here, one that requires us to say what we can reasonably want from our concepts.
Besides, there are open empirical questions. For one, there is the concrete evaluation problem: we can know what a concept’s value is in the abstract (e.g. that it is determined by factors x, y, and z) without knowing the concrete value of a particular concept like privacy. Solving the concrete evaluation problem probably requires empirical work, because we need to measure both the epistemic and the non-epistemic effects of concepts.
Finally, there is the mechanism problem: we understand very little about how to manage concept change, and some philosophers even think that managing concept change is downright impossible. We have no robust methodology for solving the problem of conceptual disruption or for revising our normative concepts.
An analogy with tools helps to illustrate the problem. When we want better tools, we often have a pretty good idea of what ‘better’ means for that tool, at least in the context of a particular use-case: a better jackhammer is lighter and sturdier; a better car needs less fuel. Starting from that insight into the nature of value for tools, we can determine the concrete value of particular things – this hammer, that car – and we know, at least in rough outline, the mechanical and organisational process for making better ones. With concepts, in contrast, we lack almost all of this.
So, we must learn a few things to deal with the technological disruption of our normative concepts. We should begin by exploring the abstract nature of the value of concepts: just as with tools, we should ask what makes a concept good or valuable. An answer to that question can then guide our approach to the two empirical questions – about the concrete value of some of our current concepts and about our possibilities for changing them.
Solving the philosophical problem would be a step toward addressing several of the societal issues caused by conceptual disruption. We would be able to identify the problematic effects of concepts and change them to remedy those effects – for example, by proposing concepts that avoid conceptual confusion while serving the goals of a good concept. The work required to sharpen our tools for conceptual engineering, however, is substantial, and it begins with deep questions aimed right at the heart of philosophical methodology.
Ultimately, we may conclude that privacy ain’t dead yet – but any confident judgement on these matters depends on a deeper understanding of conceptual engineering in the first place.