by Michael Klenk
When academics and journalists criticise technology today, they often assume the perspective of a bitter and desperate lover: intimately acquainted with the failings of technology, and vocal in pointing them out, but also too invested and unable to perceive the world without it.
That critical perspective on technology is important and increasingly mainstream, but it myopically focuses on the wrong question. It presupposes technology, and merely bolts ethics on as a constraint. An adequate, non-myopic ethics of technology must start with the question of why we need technology in the first place. A very brief sketch of the history of the ethics of technology in two stages, and a case study of digital contact tracing, help us see why.
First came technology: Our hominin forebears used stone tools to butcher dead animals long before the first Homo sapiens walked the earth. Since then, technology has empowered humanity and propelled us to become the dominant species on this planet. From this perspective, technology was often useful, frequently inevitable, and mostly seen as beyond the purview of ethical consideration: an eminently helpful and value-neutral tool. But then came ethics: A critical perspective on technology is almost as old as our use of technology.
Socrates warned against the perils of writing (!), and there have been comparable ‘techlashes’ about, for example, the printing press, radio, television, and recently the Internet and social media. Though it is easy to dismiss these warnings as naïve forms of conservatism, they are part of a valuable critical perspective on technology that became increasingly influential in the 20th century. The immense impact of technology on our lives and the planet made it eerily clear that such a perspective is needed. At the very latest, the detonation of the first atomic bomb and technology-caused environmental disasters brought technology back under the ethical lens in the broader debate about the desirability and the nature of technological progress. That critical perspective on technology is now the mainstream view.
So, now, journalists, academics, and consumers routinely ask critical questions about the permissibility of technology. Entire sub-fields in the humanities and social sciences are devoted to the study of the ethical repercussions of technology. For example, as digital technologies are revolutionising our lives at increasing speed, the field of digital ethics has developed alongside, documenting a host of ethical concerns with these new technologies. Privacy concerns, questions about data ownership and informed consent, biases in algorithms, and trust erosion are just a few of the crucial topics. These perspectives are meaningful, and the fact that many now recognise ethical constraints on the use of technology is progress.
Nevertheless, today’s critical perspective on technology suffers from a myopic focus that crowds out the most fundamental questions. The current mainstream view uncritically assumes that there should be technology to solve our problems, whose excesses must then be reined in by ethics. In doing so, it fashions itself narrowly, as an ethics of technology that starts with technology and then puts some ‘ethics’ on top to constrain it. In consequence, it fixates so intently on identifying ethical problems with a given technological solution that it skips over a more fundamental question: should we have the technology in the first place?
Asking the wrong question about the ethics of technology is costly in many ways. Often, it is inefficient. When you want to add 17 and 25, you don’t assume that technology is the solution, and so you do not go about programming a calculator. You use your head, and you solve the problem without technology. Why? Because the problem that needs solving does not require technology as a solution (performing very many such calculations might require technology – but that would be a different problem). Using technology for a simple task like this would be a failure of efficiency.
Also, asking the wrong question is not very smart. When you face a problem, you want to avoid constraining yourself too much, both in how you define the problem and in the range of possible solutions. But the narrow perspective tends to do exactly that. It forecloses, from the start, alternative ways to frame the problem and possible solutions. It does not pause to ask: Is that the problem we should solve? To illustrate, when a co-worker suggests you reach out to a former client during a COVID-19 lockdown, you can a) decline, because it is an unpromising venture, or b) figure out ways to do it. But you should not jump straight to Google to search for the optimal teleconferencing solution without considering alternatives, which may include not reaching out to your former client in the first place.
Relatedly, the critical perspective also does not ask: if this is the problem we should solve, should we use technology for it? There are countless problems that we could and should address. But only some require technology. For example, societies may have reason to prolong people’s lives. Should we aim for high-tech interventions to slow the ageing process for a select few (as some tech-ethicists suggest)? Often, non-technological alternatives may be preferable. Life expectancy and quality of life, for example, can be increased by distributing education, health-care, and wealth more equally.
Relatedly, the critical perspective fails to recognise the relations between the question of what problems we should address and the question of whether we should use technology to address them. If we have a problem we should solve, and find that technology is required, then identifying ethical problems with the technology (which is what the current perspective excels at) may give us reason to change our mind about the first two questions (a more detailed elaboration of this point by the philosopher Luciano Floridi is here).
Finally, asking the wrong question can be a moral failing. You don’t ask about permissible ways to murder someone or justified methods of torture. That’s because you should commit neither murder nor torture, and doing right means doing the right thing (not doing something right). The myopic perspective on the ethics of technology sometimes fails because it tries to evaluate in an ethical light something that, ethically speaking, should not be there in the first place. It ignores the possibility that some problems should not be tackled at all (at least not right now). Making faster cars requires technology, but we should not address that problem right now. Asking how to make faster cars in an ethically permissible way represents a myopic focus.
Therefore, an appropriate, non-myopic ethics of technology must not uncritically accept technology as a given solution to a particular problem. Whether a particular problem requires the use of existing technology or the invention of new technology is an open question. That question should be the beating heart of a broad, appropriate, non-myopic ethics of technology (as Martin Sand and I argue in much more detail in a forthcoming essay). The ethics of technology, properly conceived, is an ethics of problem-solving: What are the problems that we ought to solve? Can technology aid the cause in permissible ways? Taking this view alerts us to the possibility that some issues are best resolved by non-technological means. So, the ethics of technology must not begin with the question of ‘What technology?’ Instead, it must ask ‘Why technology?’ first. That question requires answering ‘What problem should we solve (now, with our limited resources, given the very many other problems out there)?’ and ‘Is technology required to solve that problem?’
Of course, in many cases, the technology is already there, and the best we can practically do may be to limit its negative effects. But it must be part of any critical engagement with the respective technology to consider whether the world may not be better without it. Also, there are many cases where we can still have a say in framing the problem that needs solving, and whether or not a technology ought to be used to solve it. The debate about digital contact tracing to combat the COVID-19 pandemic illustrates this well.
First came a problem: The COVID-19 pandemic created painful choices between ‘saving lives’ (i.e. protecting people from the virus) and ‘saving livelihoods’ (i.e. protecting people from the economic, social, and psychological effects of combating the virus). Most governments had to resort to some form of population-wide quarantine to curb the spread of the virus, but they are now looking for ways to ease the lockdown safely. Because SARS-CoV-2 spreads too quickly, traditional means of mitigating its spread, such as contact tracing, would not work. The biggest problem is that individuals are infectious before the onset of symptoms (as shown, for example, by Ferretti et al. 2020).
Then came technology: Digital contact tracing may be a solution to the problem of saving lives while saving livelihoods. It would register people’s contacts and allow for much faster identification and quarantining of people who came into contact with infectious individuals. Instead of a human contact tracer trying to identify and contact all contacts of an infectious person (from up to seven days back) in painstaking and slow detective work, a digital tracing app could achieve all this in an instant, at the touch of a button. Let’s assume that enough people would use the app and that it works reliably in registering contacts. Then the app may indeed save lives (because it would help to stop the spread of the virus) while saving livelihoods (because it would free many more people from quarantine than a population-wide lockdown, as shown, e.g., by Hinch et al. 2020).
Then technology was constrained by ethical considerations: Of course, the legitimacy of digital contact tracing depends on both its effectiveness and its ethical acceptability. Many academics and journalists were quick to point out that we should be very sceptical about both. There are severe concerns about privacy (see e.g. this article by Casey Newton) and about mission creep (see e.g. this blog by Ross Anderson). These concerns were important and apt, but they assume that we will eventually have the technology and that we can – at best – somehow constrain it from an ethical perspective.
However, the focus on how to implement digital contact tracing safely reflects the myopic focus that starts with technology rather than evaluating a different framing of the problem. The right ethical question is whether we need such a technology in the first place, which depends on the kind of problem that needs solving. Once we take a step back and consider the fundamental question ‘Why technology?’, we are forced to ask what problem digital tracing is supposed to solve. Once we do that, we can see that the problem digital tracing is supposed to tackle is ill-conceived. It is not merely that we must save lives while saving livelihoods. Instead, the question we should address is whether we can save lives while fairly saving livelihoods. To illustrate, it would be a bad solution to the COVID-19 pandemic to free the wealthiest 1% from the lockdown, even if that reduced the number of people in quarantine. It would also be a bad solution to have some groups benefit from mitigation measures that are paid for by others, especially if that increases already existing inequalities (Hein Duijf, Christian Engels, and I discuss this in much more detail in this recent research paper; a brief blog by Hein illustrates the problem here).
There are thus important questions about how to allocate the costs of fighting the pandemic fairly. By buying into a technological solution too quickly, we often also buy into an increase in existing inequalities. That should be avoided. The case of digital contact tracing is compelling because we had a say in framing the problem. Asking the right question about the ethics of technology may have helped us avoid a premature hope for a ‘technological fix’. That means that asking the right question would have helped us live up better to our moral responsibilities, and therefore to solve our problems better.
For tech ethics, this means that its scope can easily, and with morally problematic consequences, be misunderstood as accepting or presupposing that technological solutions are inevitable. That would be a suboptimal outcome and a wasted opportunity for an informed perspective on the matter.
We should invoke this wide perspective on the ethics of technology in practice, teaching, and research. In practice, individuals and groups should ask what problems they ought to solve and consider technology as a possible but by no means presupposed solution. In teaching, we should not merely train our students to fit some ethical corset on a technology that fascinates them, but inspire them to ask whether that technology serves a worthy purpose. In research and writing, we should ask whether technology has earned its keep as a solution to a worthy problem and whether it can still play its role when ethical constraints are met. We should, in short, keep an open mind to ask ‘Why technology?’ and be bold enough to accept it when the answer is ‘No technology.’