Is online manipulation always hidden?

by Michael Klenk

Manipulation often seems to involve a hidden influence. Manipulators push the emotional buttons of their unsuspecting victims, exploiting their subconscious habits and leading them astray. That view of manipulation explains much of the current moral outrage about digital technology and the companies behind it. Digital technologies provide unprecedented potential for hidden influence and therefore pose a manipulation threat, or so the argument goes.

Image by Hasan Cengiz @ thewallpaper.co

But the hidden influence view of manipulation is false. Manipulation neither requires hidden influence, nor is hidden influence sufficient for manipulation. For example, guilt-tripping is manipulative and also often clear as day. When your partner, who wants to go hiking and knows that you don't, has already packed the car and buckled in the beaming kids, it becomes hard to say no. You are being manipulated, but it is obvious to everyone what is going on. Conversely, hidden influence does not automatically make an interaction manipulative. For example, you may simply fail to attend to the actions of a nurse assisting in an operation on you. The nurse's influence on you remains hidden, but that does not make it manipulative. Thus, the hidden influence view is inadequate to characterise and understand interpersonal manipulation. Manipulation may sometimes be hidden, but often it is not.

Therefore, we need a better understanding of manipulation. We cannot rely on the fact that some interpersonal influence is hidden to determine whether it is a case of manipulation. Why care? Because understanding manipulation is crucial to the current critical debate about digital technologies in moral philosophy and related disciplines.

That debate has had, and (hopefully) continues to have, a significant impact on policy-making about the development and use of digital technologies. For instance, academics involved in the debate sit on influential panels like the European Union's High-Level Expert Group on AI. If we carry an insufficient understanding of manipulation into these debates, the conclusions drawn from considerations about manipulation and digital technology risk being unwarranted. We may take a strong stance against manipulation, perhaps by severely curbing access to technologies so classified. We may also decide that manipulation is (sometimes) legitimate and thus accept, say, manipulative social media algorithms. In both cases, our grasp of the concept of manipulation will significantly influence the debate. We should give ourselves a good start in our efforts to deal with digitalisation by understanding manipulation correctly.

Now, it is true that we are interacting to an unprecedented degree with intelligent machines, online and offline (see also this earlier post on 3 Quarks Daily). The quantity of our interactions has increased tremendously. Every time your social media app of choice generates a personalised news feed for you, you are interacting with an intelligent machine. The machine, figuratively speaking, decides to offer you a selection of things to look at (this video of a cat, rather than that news item about COVID-19). These machines (an algorithm, in the social media example) are minimally 'intelligent' in the sense that they can react to your behaviour with flexibility and adjust their output in line with a set goal. Humans engaged in very few such interactions with machines twenty years ago – now they are countless.

The quality of our interactions with intelligent machines has increased tremendously, too. People interact with intelligent machines when they watch movies online, use smart replies in an email, order stuff through Amazon's Alexa, cuddle robot pets, seek comfort with care-robots, and sleep with sex-robots. So, machines influence us in more and more sensitive areas of our lives, and they have also long left the virtual world, meeting us face-to-face in the offline world. These machines have access to vast amounts of data that they can use to draw rather precise conclusions about the user they are interacting with. That may be why many people enjoy interacting with them, as the impressive user statistics of social media suggest. As a descriptive account of the current situation, we can agree with the proponents of the hidden influence view up to this point.
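To make the 'minimally intelligent' idea concrete, here is a minimal sketch in Python of a feed ranker of the kind just described: it reacts to a user's clicks and flexibly adjusts what it shows in line with a set goal (maximising engagement). The class name, content categories, and numbers are hypothetical illustrations for this post, not any real platform's code.

```python
import random
from collections import defaultdict

# Hypothetical content categories, for illustration only.
CATEGORIES = ["cat_videos", "covid_news", "recipes"]


class FeedRanker:
    """Epsilon-greedy ranker: mostly shows what has worked, sometimes explores."""

    def __init__(self, epsilon: float = 0.1) -> None:
        self.epsilon = epsilon
        self.shows = defaultdict(int)   # times each category was shown
        self.clicks = defaultdict(int)  # times each category was clicked

    def pick(self) -> str:
        # Occasionally explore a random category...
        if random.random() < self.epsilon:
            return random.choice(CATEGORIES)
        # ...otherwise exploit the category with the best observed click rate
        # (unseen categories score infinity, so they get tried first).
        return max(
            CATEGORIES,
            key=lambda c: self.clicks[c] / self.shows[c]
            if self.shows[c]
            else float("inf"),
        )

    def observe(self, category: str, clicked: bool) -> None:
        # The 'flexible reaction': user feedback shifts future recommendations.
        self.shows[category] += 1
        if clicked:
            self.clicks[category] += 1


if __name__ == "__main__":
    ranker = FeedRanker()
    # Simulate a user who clicks cat videos often and everything else rarely.
    appetite = {"cat_videos": 0.8, "covid_news": 0.2, "recipes": 0.1}
    for _ in range(1000):
        shown = ranker.pick()
        ranker.observe(shown, clicked=random.random() < appetite[shown])
    print("most shown:", max(CATEGORIES, key=lambda c: ranker.shows[c]))
```

After a few hundred rounds of feedback, the sketch reliably drifts toward whichever category the simulated user rewards with clicks – a toy version of the goal-directed flexibility described above.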

It is also plausible that this picture should worry us if we presuppose a strong link between manipulation and hidden influence. If lots of hidden influence suggests lots of manipulation, then we have a problem now. Proponents of the hidden influence view often presuppose that manipulation creates a problem for personal autonomy. So, by noting how digital technologies have vastly increased in quality and quantity, how that raises the potential for hidden influence, how that raises the potential for manipulation, and how that threatens our autonomy, we arrive at a pessimistic view of digital technology.

But that story is flawed. There is independent reason to doubt that there is a straightforward conceptual link between manipulation and autonomy (more on that here). More relevantly for now, we have seen above that the very connection between hidden influence and manipulation fails.

We can thus tread along with the hidden influence view up to a point when it comes to the descriptive assessment of digital technologies. But we should now part ways. Since the hidden influence view mischaracterises the nature of manipulation, we cannot rely on that view to explain what is wrong, or potentially problematic, with digital technologies insofar as they are manipulative. The disagreement concerns our choice of conceptual tools (figuratively speaking) for classifying their influence.

Unfortunately, we cannot simply reject the hidden influence view and leave it at that. That is because the hidden influence view gives us a very straightforward way to distinguish manipulation from other forms of interpersonal influence, like persuasion and coercion. Without it, we still need some way to recognise the subtle distinctions amongst different types of interpersonal interaction.

Consider the consequences of rejecting the hidden influence view without an alternative. Hidden influence no longer counts as a defining characteristic of manipulation. But then how does manipulation differ from, say, coercion? Coercion is the kind of interpersonal influence that proceeds by force, and it mostly operates out in the open. On most occasions, if you are being coerced, you will know. Is manipulation the kind of interpersonal influence that proceeds without force, so that we can keep it distinct from coercion? Partly, perhaps, but then we risk conflating manipulation with persuasion, another kind of interpersonal influence that does not proceed by force. Much more can be said here, but the general idea is that manipulation risks disappearing from view between the poles of persuasion and coercion.

There is a way to make sure that we do not confuse manipulation with other forms of interpersonal influence. We can lean on a key intuition behind the hidden influence view: somehow, a manipulator does not allow his victim to fully understand what's going on. Being manipulated feels like being a puppet on a string, like being played. But that's not because the manipulator's influence is necessarily hidden. It's just that the manipulator does not care whether his victim fully understands what's going on. The manipulator just 'wants to make the puppet move' and is otherwise careless about the chosen means of influence. An efficient manipulator is one who can select those means of influence that most reliably get his victim to act in line with his wishes. Sometimes, appealing to the victim's rational capacities is the most efficient way to get what the manipulator wants – think of trying to manipulate the ultra-rational Commander Spock from Star Trek. Still, such influence is manipulative because it is careless.

The view of manipulation as careless influence also saves the intuition that it is terrible not to be 'in the know' about influences that concern you. In essence, what matters is that the manipulator does not care to keep you in the know, rather than the fact that you are not in the know. By taking that lack of care as the defining feature of manipulation, we can distinguish manipulation, as careless influence, from persuasion, where one tries to make one's interlocutor see what's going on, and from coercion, where one actively reduces the victim's ability to act.

So, (online) manipulation is not always hidden. It sometimes is – but it is always careless. In the critical debate about online technologies, we should, therefore, be on the lookout for careless interactions. And perhaps that means that we have to make sure that machines can ‘care’ about our reasons in the relevant sense to curb online manipulation wherever possible.