Surveillance Capitalism: How Big Tech Went Rogue

by Martin Butler

Shoshana Zuboff’s “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power” gives an impressive and comprehensive account of how the big tech companies gained their economic dominance, and why this is a problem.[1] We hear much about how these companies fail to pay their fair share of taxes, how they have become monopolies, and how they enable online abuse.[2] Zuboff’s concerns, however, are with a less obvious but perhaps more insidious problem.

The main companies she has in mind are Google and Facebook, although the methods they have discovered are, she claims, spreading throughout the capitalist economy and giving rise to a new form of capitalism altogether, which she identifies as surveillance capitalism. This, she argues, is something so different from anything we have experienced before that we are completely unprepared to deal with it, and it has arisen so quickly and come to dominate so much of our lives that we are still like rabbits caught in the headlights. Zuboff reminds us of a pivotal truth, easily forgotten: despite the disingenuously user-friendly message they assiduously promote, these two companies are in essence platforms for advertising, and this more than anything else determines what they do.

During the 1990s, when Sergey Brin and Larry Page came up with the powerful computer algorithms which allowed Google to become the most effective search engine on the internet, they really had no idea how to produce the financial rewards their investors expected. A small part of the organisation called ‘AdWords’ did place adverts linked to ‘keywords’ specified by the advertisers, but this was a very minor part of the project. At the time, Brin and Page adopted a dismissive attitude towards advertising as a source of income. Zuboff quotes from their influential 1998 paper:

“We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers. This type of bias is very difficult to detect but could still have a significant effect on the market… we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent in the academic realm.”[3]

What they had failed to grasp at this time was that the advertisers were the consumers; those using the search engine were simply a resource to be mined. It was only when the financial pressure really began to mount in the early 2000s, and with the discovery of what Zuboff calls ‘behavioural surplus’, that the game changed radically. She points out that each Google search, in addition to the key words used, produces a raft of collateral data “such as the number and pattern of search terms, how the query is phrased, spelling, punctuation, dwell times, click patterns and location.”[4] Initially, all the data collected by Google was used to enhance the experience of the user. She makes the point that at this stage “people were treated as ends-in-themselves, the subjects of a nonmarket, self-contained cycle that was perfectly aligned with Google’s stated mission ‘to organise the world’s information, making it universally accessible and useful’.”[5]

It became clear, however, that in order to produce the ‘exponential’ profits required by impatient investors, the focus of the enterprise would have to be radically reoriented to deliver for paying advertisers. If it also delivered for users, that would be a happy side-effect. The ethical stance adopted in the 1998 paper was abandoned completely. Google could now compile individual user profile information (UPI), which might include browsing history, the effectiveness of previous adverts, the likelihood that a user would ‘click through’ an advert or make a purchase, and so on. For our purposes it is unnecessary to understand all the technical mechanisms involved, but Zuboff identifies the power that Google had acquired when she says, “With Google’s unique access to behavioural data it would now be possible to know what a particular individual in a particular time and place was thinking, feeling and doing.”[6] This might sound like overstatement, and probably would have been in the early noughties, but with the development of the Android operating system, Google Maps, voice recognition, and the wrap-around online environment Google has built, it is becoming very much a reality for a large proportion of Google users. And the purpose of all this? To attain the “holy grail of advertising,… to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behaviour…”[7] Initially the aim was simply predicting behaviour, but as the sophistication of the methods improved and ambitions grew, it moved towards behaviour modification. And so surveillance capitalism was born, extending now beyond advertising and into other areas such as insurance and gaming.[8]
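For readers who want the mechanism made concrete, here is a minimal sketch in Python of the kind of record described above: a query plus its collateral data, combined with a user profile to produce a crude click-through estimate. Every field name, weight and threshold here is invented for illustration; this is an assumption-laden toy, not a description of Google’s actual systems.

```python
# A purely illustrative sketch of the kind of record Zuboff describes:
# the query itself plus the 'behavioural surplus' that surrounds it.
# All field names, weights and thresholds below are invented.
from dataclasses import dataclass, field

@dataclass
class SearchEvent:
    query: str                  # the keywords the user typed
    dwell_time_secs: float      # how long they lingered on the results
    location: str = ""          # coarse location at the time of the search
    misspellings: int = 0       # spelling/punctuation quirks in the query
    clicks: list = field(default_factory=list)  # links clicked, in order

@dataclass
class UserProfile:
    past_ad_clicks: int = 0     # adverts previously clicked through
    past_purchases: int = 0     # purchases attributed to adverts
    browsing_history: list = field(default_factory=list)

def click_through_score(profile: UserProfile, event: SearchEvent) -> float:
    """Toy estimate of the probability this user clicks a targeted advert.
    A real system would use a trained model over thousands of features;
    this hand-weighted formula only illustrates the principle."""
    score = 0.05                                    # baseline click rate
    score += 0.02 * min(profile.past_ad_clicks, 10) # receptive to adverts?
    score += 0.03 * min(profile.past_purchases, 5)  # has bought before?
    if event.dwell_time_secs > 30:                  # engaged, not skimming
        score += 0.05
    return min(score, 1.0)

profile = UserProfile(past_ad_clicks=4, past_purchases=2)
event = SearchEvent(query="best hiking boots", dwell_time_secs=45.0)
print(f"{click_through_score(profile, event):.0%}")  # prints 24%
```

The point of the sketch is only that each element of the ‘surplus’ raises or lowers a prediction about one individual at one moment, which is precisely what makes it valuable to an advertiser.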

I want to take a slightly different approach from Zuboff in identifying the key ethical problem with surveillance capitalism. We need to be clear, first of all, that no one is being forced to do anything they don’t want to do. This is consistent with many well-established definitions of free will. J.S. Mill, for example, tells us that liberty “consists in doing what one desires”. If adverts appear at the right time, aimed at the right person when they are in the right mood and presented in a way customised to ‘fit’ that individual, and they then end up buying the product or service, surely they have simply done what they desire? One criticism levelled at this approach to free will is that it takes no account of psychological problems such as addiction. Addicts desire their fix very strongly, but we would hardly call them free, since they are ensnared by their desire for the drug. However, this is not at issue here. It would be difficult to argue that we are addicted to the adverts thrust upon us by the big tech companies.

To push the point further, we might want to claim that all surveillance capitalism does is develop further what advertisers already do; that there is nothing new here, and that Google and others are simply more effective at doing it. If I, a 66-year-old man, walked into a car showroom wearing a tweed jacket, the salesperson would treat me quite differently than if I were a 25-year-old woman wearing smart professional clothes. We belong to different ‘demographics’, so the sales pitch would be targeted accordingly. Google can just be far more fine-grained than a demographic category and can zoom right in to the individual level, but this is just a matter of degree. To counter this we might argue that, unlike the demographic groups beloved of the marketing industry, the surveillance capitalists can access concrete knowledge (the UPI mentioned earlier), and it is this that is so valuable to the advertisers. But this is knowledge gained surreptitiously, and it is the surreptitious nature of this data collection which is the problem. If Mr Google, clipboard in hand, came round at the end of each day and asked us a set of detailed questions about our browsing history, where we had been, the items we had bought online, the key words we had used within earshot of our Android phones, and so on, we would be outraged; but because all this data is collected impersonally, even if we know intellectually that it is being collected, it doesn’t seem quite the same. Perhaps a key difference is that it seems less invasive if it remains in the form of mere data. If converted to knowledge in a human mind, however, it takes on a more personal form.

This brings us to the key issue of privacy, but again, it’s not obvious that others knowing a lot of detail about our preferences, personality or behaviour is something new. In the past, when most people lived in small, closely knit communities, it was the norm that everyone knew everyone else’s business. This, I’m sure, was often more extensive than anything the surveillance capitalist could achieve, and in such communities no forms were signed allowing access to this knowledge. So it is perhaps not the knowledge itself but the use to which it is put that is the problem – especially if those who control it are powerful organisations who do not necessarily have our best interests at heart.

To understand the problems arising from how this knowledge is used in relation to advertising, we need to distinguish between being convinced without manipulation and being manipulated into doing or believing something. There is no sharp line between the two, but the distinction is very real and important. In an idealised form, the process of being convinced without manipulation involves being exposed to a message with which you consciously engage through reason and emotion, knowing the source and context of the message. If the message is effective in producing within you a particular belief, or in initiating an action, it will not be because of a threat or through lies or misleading information; the process will be reasonably clear and open. Being convinced without manipulation should occur exclusively within what philosophers call ‘the space of reasons’. This is where, in good faith, we identify a reason, or set of reasons, which justify holding the belief or performing the action. We should note that the interaction between emotion and reason here is complicated. Emotion can be intelligent, but ideally we like to think that reason is the final arbiter when good decisions are made. The animal charity that advertises itself by showing pictures of starving puppies is to a degree manipulating our emotions, but it does so in plain sight. Ultimately, we know we ought to make a reasoned judgement about whether to give to the charity based on many factors. Intention also plays a key role here. To try to convince someone without manipulation is to take a certain attitude towards them. It is to communicate with them as a human being rather than to treat them as an object to be changed. Being manipulated, on the other hand, is something that is done to us. In its extreme form it involves using any means available to cause us to believe or do something. Nothing is off limits, whether it be deceptive psychological techniques or lies and misinformation.

This distinction has a long history. Plato would have regarded the rhetorical techniques used by the Sophists as too close to manipulation, because they were not interested in presenting the truth. Advertising has always trodden a fine line. The subliminal adverts of the 1950s were made illegal because they played on something outside our conscious awareness and so could be described as manipulation. Most adverts try to get us to buy through positive associations and other psychological techniques, but as long as they don’t tell lies, and it is reasonable to expect a good level of awareness of what is going on, they are on the right side of the distinction. Manipulation can also come in a more benign form. It is difficult to deny that even the most caring parent will on occasion use methods which can only be described as manipulation on their young child, although the aim of socialisation and education is to move the child into a position where rational communication becomes the norm.

The problem with the likes of Google and Facebook, then, is that we are not fully aware of the data they have collected about us, or of how they may use it in targeted advertising, even if, unlike the subliminal ones, the adverts themselves are straightforward enough. Imagine the salesperson of my earlier example surreptitiously conducting detailed research into my life and preferences, previous cars I have owned and so on, before I enter the car showroom. Their sales pitch, unbeknownst to me, is then based on this research, so I would not have an accurate grasp of its source and context. This is analogous to how Google and Facebook operate, and could certainly be described as manipulation. (Zuboff uses the term ‘instrumentarianism’ to describe the process of detailed data collection whose sole aim is to predict and manipulate our behaviour.) So it should be clear that unacceptable manipulation can occur without addiction, and while still meeting the definition of free will given earlier.

For the most part, the psychological perspectives developed in the twentieth century made no room for the distinction between being manipulated and being convinced without manipulation. This is because being convinced without manipulation is normative: convincing reasons are good reasons to believe or act. Manipulation, on the other hand, like the psychology of the period, deals only in the causes which produce behaviour or belief, and there can be no good causes or bad causes, just effective causes. Zuboff draws parallels between the behaviourist psychologists of the 1950s and 60s and the methods used by surveillance capitalism. B.F. Skinner, in his classic book Beyond Freedom and Dignity, argues that since human behaviour is determined by environmental ‘contingencies’, both positive and aversive, we might as well control this environment to produce the ‘right’ behaviour for humanity.[9] Of course, as far as surveillance capitalism is concerned, the ‘right’ behaviour is to respond positively to advertising. The irony of Skinner’s book is that rather than promoting his vision through positive reinforcement, he tries to convince us through reason and evidence. In other words, he communicates with his readers as rational beings who need to be convinced with good reasons, but his vision for society is one where the populace is manipulated through environmental contingencies. This exposes a key feature of all programmes of manipulation – a divide between those who control and those they attempt to control. The behaviourists, like the select few at the top of the big tech companies, use reason and evidence amongst themselves when discussing strategy. When it comes to everyone else, it’s all about manipulation. There is the ruling elite and the rest.

There is a fundamental principle, implicit in the early paper written by Brin and Page, which is breached here: that in communication we ought to treat each other as rational subjects whom we may by all means attempt to convince, but not to manipulate. This is in essence Kant’s second formulation of the categorical imperative.[10]

What responses might there be to the accusation of manipulation? The go-to answer of the libertarian is simply that we don’t have to use Google or Facebook, but unless you want a full-blown wild-west version of capitalism, this is a very weak point. Google in particular is very difficult to avoid; it’s almost as if Google is the internet. In the 1990s, when the internet was beginning to get established, the role it would come to play in our lives was unclear. I remember reading articles claiming that it was just a passing fashion and wouldn’t catch on. But the internet has become a utility, like water or electricity, and like any utility it needs to be under democratic control. Despite attempts by governments to rein in the big tech companies, however, they have become more and more powerful. Regulation is proving slow and very difficult, but it is imperative. Zuboff documents in detail the methods these companies use to pretend that they have changed their ways.

Another response might be to claim that, provided we know that these companies collect such data, we can make a calculated decision to accept this in exchange for the benefits of the services on offer. Privacy, however, is not a private matter, as Zuboff emphasises. Because the effects on society are so far-reaching, it is not something that can be left up to individuals making their own decisions, however ethical or informed. In any case, as we have seen, such decisions can never be fully informed, since we will not know the detail of the data that is collected about us.

Zuboff also discusses what she calls ‘inevitabilism’: the tendency to believe that the rise of the big tech companies, and the power and influence they have acquired, is unstoppable. She convincingly describes how the internet was developed in the context of a particular set of contingent circumstances that led to the rise of these largely unregulated companies. Rather than being a benefit to the whole of society under democratic control, it has been captured by a small group of companies who have in effect manipulated it to serve their own narrow interests. However, there is nothing inevitable about this. And perhaps the mood is beginning to change.

Finally, it’s worth asking a question that Zuboff does not ask: to what extent is the targeted advertising from which the internet companies derive their billions actually effective? After all, it’s in their interests for us – and for the advertisers – to believe that it is. Adam Curtis uses the analogy of people visiting a pizza restaurant and being handed an advertising leaflet as they enter.[11] Everyone who enters will have a leaflet, so if you didn’t know where it came from, it might look like a case of effective advertising. In fact, those with the leaflet were coming into the restaurant anyway – as the small simulation below illustrates. Whether effective or not, people dislike the feeling that others are trying to manipulate them, so ad blockers and other strategies to push back against surveillance capitalism will no doubt become more common. Ultimately, however, it’s a problem that can never be overcome by individual action. Societies as a whole need to take back control of the internet.
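Curtis’s analogy is, at bottom, a point about causal attribution, and a toy simulation makes it vivid. In this sketch (all numbers invented), a leaflet is handed to everyone already walking in, so the measured ‘conversion rate’ among leaflet holders is 100% even though the leaflet changed no one’s behaviour:

```python
# A minimal simulation of the pizza-leaflet analogy. Everyone who was going
# to the restaurant anyway is handed a leaflet at the door, so the leaflet
# looks perfectly 'effective' despite playing no causal role.
import random

random.seed(0)
population = 100_000
base_visit_rate = 0.02  # fraction who were going to visit regardless

# Visits are decided before any advert is seen.
visitors = sum(random.random() < base_visit_rate for _ in range(population))
leaflet_holders = visitors           # a leaflet is handed to every entrant
leaflet_holders_who_ate = visitors   # ...and all of them then eat there

# Naive attribution: every leaflet holder 'converted'.
print(f"Conversion among leaflet holders: "
      f"{leaflet_holders_who_ate / leaflet_holders:.0%}")      # 100%
# The honest test is a controlled comparison of visit rates, which
# are identical with or without the leaflet.
print(f"Visit rate with leaflets:    {visitors / population:.1%}")
print(f"Visit rate without leaflets: {base_visit_rate:.1%}")
```

The same attribution problem arises when a targeted advert is shown to people already predisposed to buy: the advertiser sees impressive conversion figures either way.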

[1] Zuboff, S., 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.

[2] An excellent, more general critique is to be found in Rana Foroohar’s Don’t Be Evil: The Case Against Big Tech.

[3] Brin & Page, 1998, cited in Zuboff, 2019, p71

[4] Zuboff, 2019, p.67.

[5] Ibid., p.70.

[6] Ibid., p.78.

[7] Ibid., p.77.

[8] Zuboff includes an interesting discussion of how Pokémon Go was monetised; ibid., p.308.

[9] Skinner, B.F., 1971. Beyond Freedom and Dignity. Harmondsworth: Penguin.

[10] “Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end.”

[11] Curtis, A., 2021. Talking Politics: Adam Curtis [Podcast]. [Accessed 19/4/21]. Available from: https://www.bbc.co.uk/iplayer/episodes/p093wp6h/cant-get-you-out-of-my-head