by Filippo Santoni de Sio, along with Marianna Capasso, Rockwell F. Clancy, Matthew Dennis, Juan Manuel Durán, Georgy Ishmaev, Olya Kudina, Jonne Maas, Lavinia Marin, Giorgia Pozzi, Martin Sand, Jeroen van den Hoven, Herman Veluwenkamp
Abstract: This article, written by the Digital Philosophy Group of TU Delft, is inspired by the Netflix documentary The Social Dilemma. It is not a review of the show; rather, it uses it as a lead-in to a wide-ranging philosophical piece on the ethics of digital technologies. The underlying idea is that the documentary fails to convey how deep the ethical and social problems of our digital societies in the 21st century actually are, and it does not do sufficient justice to the existing approaches to rethinking digital technologies. The article is written, we hope, in an accessible and captivating style. In the first part (“the problems”), we explain some major issues with digital technologies: why massive data collection is a problem not only for privacy but also for democracy (“nothing to hide, a lot to lose”); what kind of knowledge AI produces (“what does the Big Brother really know”) and whether it is okay to use this knowledge in sensitive social domains (“the risks of artificial judgement”); why we cannot cultivate digital well-being individually (“with a little help from my friends”); and how digital tech may make persons less responsible and create a “digital Nuremberg”. In the second part (“the way forward”) we outline some of the existing philosophical approaches to rethinking digital technologies: design for values, comprehensive engineering, meaningful human control, new engineering education, and a global digital culture.
The Social Dilemma: the bigger picture
As citizens of the world in the 21st century, we should all be grateful to the outspoken digital critics who speak up in Netflix’s The Social Dilemma. Tristan Harris, Shoshana Zuboff and the others have again put the spotlight on the incredible power of the US social network companies. However, the one-and-a-half-hour documentary and the experts it features tell a small piece of a bigger story. They focus on the power of Google/Facebook/Twitter/Instagram and the problems of addiction and manipulation, and only briefly touch upon the broader ethical, political and economic dimensions of digital technologies. So, if you found the documentary depressing or shocking, it is important to realise that things are not as bad as they look – in many ways they are much worse. But there is also good news. Whereas the last ten minutes of the documentary give a vague impression that something can be done about it, it falls short of doing justice to some existing constructive proposals that have been put forward to rethink digital systems. It also fails to convey how deep the ethical and social problems of our digital societies in the 21st century actually are. Here is a modest attempt at filling these two gaps: giving a richer sense of the problems we are facing and proposing some general ways to address them.
Nothing to hide, a lot to lose
Let’s start with the bad news. One big merit of The Social Dilemma is showing once more that the big issue with Google, Facebook & co is not simply privacy, it’s power. The talk of privacy seems to point to a problem of users’ control over private data: companies or governments should not check what I am doing on my computer. This may not seem a big problem: many feel they have “nothing to hide”, and may even have a lot to gain by trading their boring personal data for way more exciting services. So let people be free to share their data, if they want. But the collection of personal data generates a different kind of harm when it takes the form of big data collected about large groups. You as an individual are not so important, but companies, governments or criminals may have bigger plans. For instance, they may want to influence elections, as some political actors may have done in 2016 with the Brexit referendum and the US election, by using the massive amount of user data collected via an apparently innocuous Facebook app and then sold via the infamous consultancy firm Cambridge Analytica. Harvesting data at scale makes us collectively vulnerable in ways that go beyond breaches of individual privacy. This is a new and nasty problem: even when individual rights are formally respected (e.g., via privacy consent forms), the consequences for society may be very harmful. It is a form of ‘accumulative harm’, not unlike what we see with environmental pollution. A couple of plastic bags in the sea is not a problem, but a steady stream of them, day in day out over decades, may destroy the ecosystem. You may think you have “nothing to hide”, but we all have something to lose by allowing big data collections.
The new masters
The Social Dilemma clearly shows how manipulative and addictive social media platforms are, and highlights other big societal risks created by them: political polarisation which makes democratic debate difficult, fake news, and harm to truth. These are all symptoms of a deeper problem. The problem is not what big tech companies actually do, but the very fact that they are in the position to do whatever they want without being subject to any control: economic, legal, political, social. Facebook has remained among the ten most valuable companies despite the Facebook–Cambridge Analytica scandal, Twitter can decide to silence the President of the United States, Google may decide to leave an entire country without its services. This unaccountable, dominating position of Big Tech is a present-day exemplification of the master–slave dynamic often referred to by so-called neo-republican political philosophers: even if a master does not harm his slave and even contributes to his wellbeing, he could choose to do otherwise at any moment. The serious moral problem is being at the mercy of someone who could wield power over you arbitrarily.
In the pre-digital world, few people would have found it acceptable that a few private multinational companies controlled, say, most roads, shops, restaurants, public buildings, TVs, telephones, hospitals etc.; that they would not even care to disclose how they ran their business or what the next space they would buy or occupy would be; and that they would even exploit their position to run experiments on people, possibly with the goal of manipulating their behaviour. But this is what is happening now with Big Tech in the less visible digital infrastructures. Not just social media platforms, but for instance all kinds of e-commerce platforms (Uber, Amazon etc.) leverage their privileged, sometimes quasi-monopolistic, intermediary positions to perpetuate information asymmetries, keeping their business practices opaque and collecting vast amounts of data from their users and customers. This in turn enables systematic manipulation through various personalisation and hyper-personalisation techniques exploiting the best available psychological knowledge. Such practices may be less effective on an individual level than shown in The Social Dilemma, but their sheer scale and systematic character is the real reason for concern. We do not want to be so crucially dependent on systems over which we have no power and which see us solely as objects for nudging, manipulation, and exploitation. This is not only an undesirable scenario, it’s a moral and political dystopia.
How much does the “Big Brother” know?
Speaking of dystopias, it has in fact often been argued that we live in a “surveillance society”: tech companies are the ultimate Big Brother and have finally achieved full control over the population via the creation of a “Panopticon”, the perfect prison in which a central tower can see every inmate at all times, without the inmates being able to see back inside the tower. At the same time, others fear that we are moving instead towards a Terminator scenario, in which technology itself, and in particular recent forms of Artificial Intelligence, may go out of human control. This is because the automated processes of data analysis and elaboration are becoming so complex and opaque that even experts and programmers may have a hard time understanding and explaining their behaviour and the decisions that depend on them. This has been called the “black box” problem of AI. So, which dystopia is becoming reality — do the new masters know more and more, or less and less, about us? The answer is not simple: they certainly have more and more data about us, and these data are constantly turned into information, predictions, and responses from their technical systems. But it is very debatable whether this should be considered real knowledge, as the algorithmic processing is often not accessible and understandable from a human perspective. This is a fascinating, deep philosophical question: should AI change our centuries-long understanding of what counts as knowledge? But there are also more urgent practical questions.
The risks of artificial judgement
Knowledge generated by AI on the basis of “big data” appears to be more reliable and more solid since it derives from automated algorithmic processes understood as neutral mathematical procedures. But this is not the case. In the UK, approximately 39% of A-level results were downgraded by exam regulator Ofqual’s algorithm. Disadvantaged students were the worst affected, as the algorithm copied the inequalities that exist in the UK’s education system. These “algorithmic biases” are cropping up everywhere in healthcare, HR, and criminal justice systems, as we are starting to experiment with AI applications in these sectors. If this is concerning in relation to social networks, the problem becomes even more urgent in other domains. As a matter of fact, AI-based systems play an increasingly significant role in many highly sensitive decision-making contexts: judges may use them to assess the likelihood of a defendant’s recidivism in a criminal procedure, doctors may use them in their medical diagnoses and treatment recommendations, politicians and military commanders may rely on them for strategic decisions and operations in warfare.
Should we be ready to trust the predictions of systems whose internal logic remains undisclosed to the persons using them and those affected by them? And even if these tools turned out to be statistically effective, and thus good enough to make a good business out of, would it be fair and morally, legally, politically acceptable to base our decisions on these sorts of predictions, in law, education, medicine, warfare?
With a little help from my friends
The Social Dilemma leaves us with an unanswered question: how can we live a good life with today’s online technologies? You may be tempted, as with privacy, to dismiss the problem as one of individual wisdom: use technology responsibly, keep focusing on what really matters. But can you? Interestingly, some of the popular advice on how to be indistractable and mindfully regulate your relationship with digital platforms comes from the same people who helped the companies get users hooked in the first place, designing “sticky” apps to promote user engagement. The same science that we are supposed to occasionally consult to regulate our individual lives is systematically used by multibillion-dollar companies and their armies of experts to make this self-regulation very difficult. Few of us can win this game individually. Moreover, one’s own digital well-being is intimately connected to that of others. In a digital environment, one individual can have a disproportionate effect on everyone’s well-being. The actions of a “troll” can poison the entire digital well-being ecosystem. This means that keeping the online ecosystem healthy is paramount. Individual digital well-being can only emerge through collective digital well-being. We can flourish on digital platforms only if they are designed to allow and promote caring and meaningful human relations. Digital relations may be very important. The lives of many of us, not only those of adolescents, would be dramatically impoverished without them. But healthy digital relations need a healthy digital environment.
A digital Nuremberg?
The protagonists of The Social Dilemma are some of the creators of Google/Facebook/Twitter’s most successful and problematic technologies. They are portrayed as ordinary, clean-faced, empathetic, well-intentioned people caught in a bigger game. It wouldn’t be surprising if, after watching the documentary, you feel that you should not be too harsh in your judgement of them, and that they may even be excused for the evil they have helped to cause. “Who could have known?” is their recurring regretful comment. Be that as it may (well, someone probably could have, more on this below), if the developers are not responsible, then who is? For it is even more implausible to hold the users, or the algorithms themselves, responsible. As for company managers, they are just ambitious, often greedy entrepreneurs like many others, who respond to economic and market incentives. Thus, one might be left with a sense of a world dominated by impersonal forces with no space for human action. But this is not the case. Technology is certainly the result of complex social processes, but these processes are not governed by mysterious, inscrutable forces. So-called machine-learning systems may be “autonomous” and complex and part of a big economic game, but they serve pretty well the all too human values and interests of the big corporations. Denial of human responsibility within complex processes and organizations is an old and well-known phenomenon: think of the infamous “I just obeyed orders” of the Nazis at the Nuremberg trials, and Sartre’s diagnosis of the ineliminable tendency of humans to deflect questions about responsibility and deny that they had a choice. It is also prevalent in quite common talk about systems and robots with which we ‘collaborate’ as ‘team-mates’, who ‘share’ a workspace with us, who ‘reply’ to our questions. This leads in a natural way to thinking that we can also off-load some of our responsibility onto machines.
But that is all wrong: ethics is about Dr Frankenstein, not his monster.
The way forward
Ethics and technology
And now for some good news. First, there is a lot of academic knowledge available about the ethics of digital technology, and the field is constantly growing. When one of the protagonists of Netflix’s The Social Dilemma presented himself as a “design ethicist” campaigning for humane technology, this may have sounded mind-blowing to many viewers. As a matter of fact, ethics of technology became known to the general public only some five years ago with the fictional “social dilemmas” of self-driving car crashes (this may be why the title of the documentary sounded familiar). Many influential engineers and computer scientists have discovered the ethics of (digital) technology only in the past few years, and recently even tech giants themselves have made – often dubious – attempts to create ethical boards and ethics research groups.
However, the journal Metaphilosophy ran a special issue on computer ethics as early as 1985. The journal Ethics and Information Technology was established in 1999. For what it’s worth, our own department of ethics and philosophy of technology at TU Delft in The Netherlands was also established in the early 1990s, and it counts more than 30 people doing research in ethics of technology. The history of philosophy and sociology of technology more broadly conceived is even older. It goes back at least to the 1950s, when philosophers and sociologists like Martin Heidegger, Jacques Ellul, Herbert Marcuse, and Hans Jonas, among many others, from very different cultural backgrounds and with different languages and intentions, reflected on the way in which scientific and technological research performed under the push of industrial and market drives would turn age-old human technical activities into a totally different game, one with a deep, long-term impact on humanity. We have failed to pick up these weak signals for decades, as we were too busy wallowing in the short-term blessings of neo-liberal market ideologies and the morally blind, Silicon Valley-style entrepreneurship of moving fast, breaking things first, and apologizing later. It’s time to capitalise on the accumulated wisdom of the ethics of technology.
Design for Values
Forty years ago, in 1980, another early philosopher and sociologist of technology, Langdon Winner, explained how even the dumbest technical artefact, a road overpass, may be racist, if designed with the intention, or the effect, of preventing poor ethnic minorities from reaching a public beach by public transport. He famously declared that artefacts have politics. It should not come as a surprise, then, that today we might rightfully criticise voice assistants like Alexa or Siri – female voices that can never say no to you and are always available – for being male chauvinist. Scholars in the field of Science and Technology Studies have drawn attention to the fact that even non-digital, apparently stupid “things” have never been only tools passively executing our goals; they also always shape our perception of the world, our capacity (or incapacity) to act, and, eventually, our values and goals. Taking stock of this tradition, and in a more pragmatic and design-oriented fashion, the methods of “value sensitive design” or “design for values” explicitly support reflection on ethical considerations and moral values at early stages in the development of technology, especially in terms of design and research. They aim at ensuring that ethical, societal, and legal values and principles – fairness, privacy, responsibility etc. – are systematically embedded in the design of systems through a “democratic” interaction between engineers, users, stakeholders, and experts in law, social science, and philosophy. After all, many people today seem to have no problem accepting this perspective in relation to, for instance, the safety of engineering infrastructures or medicines, or, at least in theory, in relation to the environmental sustainability of products. We need to do the same in relation to many other ethical and societal dimensions of (digital) systems. The European Union has already endorsed the Responsible Innovation approach to technology.
One big challenge is making this model of technological development a reality, and making it matter on the global scale too, where other models seem dominant at the moment: the market-driven model of the US and the state-controlled model of China.
Technological systems do not exist in separation from the human and social context in which they will operate. So, if we want to have a chance of having an impact, it is this complex interaction – the so-called socio-technical systems – that we need to re-design. We need to move towards “comprehensive engineering”: a holistic view of the ‘systems of systems’ comprising moral reasons, institutions, incentive structures and market orderings, procedures, and individual persons with their own mental states who act in these contexts. By pursuing this approach, we can escape the false perception of the inevitability of centralised, monopolistic tech platforms, whether they are social media, market platforms, or media distribution platforms. Comprehensive engineering rejects top-down platform designs that prescribe certain behaviour to their participants, and can propose solutions aiming to achieve a fair distribution of risks, costs, and benefits in these large-scale systems. In combination with design for values approaches, comprehensive engineering invites us to think outside the current techno-economic envelope and to contemplate a post-platform digital society built on open, democratic, and fair technical infrastructures.
Meaningful human control
The concept of human control should also be rethought. What it means to be “in control” of a technology that potentially learns, changes, and adapts to different users’ behaviour and circumstances is yet another big philosophical question, with big practical implications. Many still think of control in terms of devices seamlessly responding to the behaviour of some users or controllers. But what if designated users are not able to properly interact with their devices: people getting addicted to social networks, overtrusting self-driving cars – or autonomous weapon systems? What if those who have power over the technology are not willing or able to steer it in more desirable directions? “Meaningful human control” requires that complex, evolving human-machine systems remain aligned with the reasons and capacities of all the relevant persons in those systems. How to achieve this via better technology, but also better politics, organization, and legislation, is yet another big practical challenge for the coming decades.
New engineering education
It is striking to see the protagonists of The Social Dilemma, former insiders of the tech industry, flabbergasted by social media’s recent developments. Who knew? We had such a fantastic plan, and look at it now… That such a naive view of technology was at the core of Silicon Valley in the crucial years of the development of today’s dominating technologies is even more concerning than surprising. It means that key players in this historical turn simply ignored the basics of engineering ethics education: technologies are ambivalent. This is a painful alarm bell. Students of engineering must be aware that the technologies they design and create will foster some values while potentially undermining or hampering others. In the Netherlands where we work, for instance, the four leading technical universities (Delft, Eindhoven, Twente, Wageningen) have taken this to heart in their educational vision, and they strive to make ethics education for engineers a mandatory part of their curriculum. Starting from the premise of ambivalence, this educational vision aims at fostering students’ moral imagination to resolve value conflicts and at raising their moral sensitivity to acknowledge the potential hazards of technology and ways to mitigate or control them, but also to exploit the full potential of technology as a socially beneficial enterprise. If engineering students are taught these lessons before they take on enormous social responsibilities, we can at least protect them from falling for the naive view that simplifies the creation of complex socio-technical systems and sees their success as the mere result of good will and technical competence.
The global perspective
Finally, even the good intentions of us, the authors of this piece, are probably not enough. Concerns regarding digital technologies become complicated when considering their global dimensions: digital platforms operate across different cultures and countries. Facebook, Instagram, and the like originated in the US but are now used globally. Criticisms of social media have come primarily from people like us: WEIRD (Western, Educated, Industrialized, Rich, and Democratic), outliers on a variety of behavioral and psychological dimensions. Are the ethical issues raised here representative? Do they reflect the ways the global public thinks about social media? The high levels of cooperation and rule of law characteristic of, and somewhat unique to, the WEIRD world are the result of technologies that have evolved and been refined over thousands of years. Since individuals from Asia and the global south comprise an ever-larger percentage of internet users, it is very important to consider not only the ethical but also the cultural dimensions of technology ethics. One helpful direction has been to use non-Western philosophies to consider ethical questions: for example, how Confucian conceptions of selfhood inform views about reproductive technologies, or how notions of collective responsibility can be applied to artificial intelligence. Another direction is developing more global ethics education, for instance by incorporating findings from cultural and moral psychology, helping practitioners to better understand and respond to challenges they are likely to encounter in the increasingly cross-cultural, international environments of contemporary technology. In all cases, this means venturing beyond the conceptual confines and assumptions of WEIRD philosophy, psychology, and culture alone.
All authors are members of the Digital Philosophy Group at the Section Ethics and Philosophy of Technology, Delft University of Technology: https://www.tudelft.nl/en/tpm/about-the-faculty/departments/values-technology-and-innovation/sections/ethicsphilosophy-of-technology