by Fabio Tollon
In the media it is relatively easy to find examples of new technologies that are going to “revolutionize” this or that industry. Self-driving cars will change the way we travel and mitigate climate change, genetic engineering will allow for designer babies and prevent disease, superintelligent AI will turn the earth into an intergalactic human zoo. As a reader, you might be forgiven for being in a constant state of bewilderment as to why we do not currently live in a communist utopia (or why we are not already in cages). We are incessantly badgered with lists of innovative technologies that are going to uproot the way we live, and the narrative behind these innovations is overwhelmingly positive (call this a “pro-innovation bias”). What is often missing in such “debates”, however, is a critical voice. There is a sense in which we treat “innovation” as a good in itself, but it is important that we innovate responsibly. Or so I will argue.
What exactly is innovation? For the purposes of this article, I will define the concept quite loosely, and highlight a few important aspects of innovation: firstly, it has a distinctly creative dimension, in which a new idea is brought forward as a solution to a given issue. Secondly, and relatedly, this idea needs to be commercialized, and made available for consumers to purchase (this differentiates innovations from inventions). Thirdly, innovation can have both technological and social dimensions, as whether a given technology will meet with commercial success often depends on various sociological factors. Moreover, there is also a sense in which innovations may be social innovations. In this article, however, I will focus on the technological aspects of innovation, but it should be kept in mind that this choice is pragmatic and not principled. Technological innovation is not the only type of innovation. Now that we have a handle on innovation, what is responsible innovation?
It is becoming increasingly clear to policymakers (at least in the EU) that unfettered innovation is not only undesirable but can even be detrimental to social and economic progress. This is expressed in detailed reports which specify the ways in which innovation ought to be aligned with socially desirable outcomes. For example, there has been a push in the EU more generally to make the electricity grid more efficient. In the Netherlands, an attempt to reach this sustainability goal came in the form of a proposal to install smart electricity meters in all Dutch households. However, opposition to this technology grew steadily over the years, eventually resulting in the proposal being rejected in the upper house due to privacy concerns. The concerns stemmed from the fact that the device was viewed as an infringement on the privacy of individual citizens, as it could take “snapshots” of electricity consumption every seven seconds, store information in a database operated by electricity companies, and provide a glorious amount of information about the inner functioning of Dutch households.
It is clear that the installation of this innovative technology would have resulted in a more efficient Dutch electricity grid, leading to cost reductions and the achievement of sustainability goals. However, it seems privacy concerns were not sufficiently addressed in the design phase of the proposed intervention, resulting in the emergence of a value conflict. The right to privacy, in this case, seems to have trumped sustainability concerns.
Outcomes such as this are not inevitable, and there were ways in which this particular case could have ended differently. For example, if engineers had, from the beginning, explicitly incorporated and considered privacy in their design of the system, the problems outlined above might never have materialized. In other words, the innovation might have been successful had it been sensitive to and driven by the appropriate values right from the start. This would have involved treating privacy as a non-functional requirement of the system, rather than as an optional add-on to be addressed only after the system has been deployed, or as a cost to be offset by convincing consumers that the invasion of their privacy would be compensated by an increase in some other value.
Too often, ethical reflection on technological innovation occurs only as a response to harms that have already materialized. Instead, truly responsible innovation requires us to anticipate and preempt potential moral concerns that may arise from a given technology, and to guard against these harms by ensuring that product design is value driven from the earliest stage of research and development.
René von Schomberg, who is a policy leader in the field of responsible innovation, defines it as follows:
“Responsible Research and Innovation is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view on the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society).”
This kind of approach is useful because it picks out key values that ought to inform the process of innovation (such as social desirability, sustainability, etc.). In terms of value identification, then, this definition can act as a principled guide. A definition of this kind aims to put forward a set of roughly objective values that are decided in advance and then used to evaluate various project proposals and to anticipate their moral impact. Because the values are set out in advance, engineers can embed them into the architecture of their systems, avoiding the reactive tendencies alluded to earlier.
This is not to say that the approach is without drawbacks. For example, what exactly is the nature of a “value”? Identifying values in the way that von Schomberg does invites us to think of them as stable over time, as having shared meanings across cultures, and therefore as easy to cash out in cases of value disputes. In practice, however, things are often far messier. Values are a product of our interaction with the world and are therefore necessarily co-shaped by our (material or social) environment. As Boenink and Kudina note:
“the core meaning of values is constantly being (re-)experienced and worked out within these practices. These activities of finding and/or giving meaning, however, are overlooked when thinking and talking about values as readymade entities.”
On this (pragmatic) account, values are living, interactive, and dynamic. As such, we should not expect to find values out there in the world, nor should we expect them to be readily understandable. Rather, considerable hermeneutical work is required to figure out the specific meaning of a value in a given context.
Values are living in the sense that they are experienced as valuable by individual agents, and as such they are importantly related to action. Our values allow us to orientate ourselves so that our actions (at least some of the time) correspond to what we value.
Values are interactive in that what we come to value is not done in isolation: our socio-technical environment is a significant factor in our valuation practices. Technology can come to change the ways we see ourselves and the world around us, changing what we value. This leads to the final element of this pragmatic proposal.
Lastly then, values are dynamic. As noted above, while it is of course the case that we are the ones creating technology, we should not exclude the fact that our technologies also come to change and shape us. New technologies make possible new means of valuation (just think of the effects of various forms of contraception, and how these have changed and subverted traditional sexual hierarchies over the last 50 years). To put it simply: technology and morality are no longer (or perhaps they never were) entirely distinct. These two aspects of our lives evolve together, shaping one another in an interactive dance. In this dance, it is often the case that “our morals co-evolve with the technologies they are supposed to guide”. The interactive nature of our relationship to new technologies gives rise to this dynamism.
In sum, then, responsible innovation is not a given, nor is it easy. Our tendency toward the pro-innovation bias means we are predisposed to interpret innovations in favourable terms. What I have shown above is that, in order to guard against this, we need to take seriously the role of values in technological and social development. This involves not only identifying which values we are aiming for as a society, but also explicitly embedding these values (by design) into the artifacts themselves. Moreover, we must be sensitive to the fact that our values should be understood as practices, and as such present a moving target for research. We should not despair at this result, for it is in this very fact that the possibility of real social progress emerges.