Technology: Instrumental, Determining, or Mediating?

by Fabio Tollon

DALL·E generated image with the prompt "Impressionist oil painting disruptive technology"

We take words quite seriously. We also take actions quite seriously. We don’t take things as seriously, but this is changing.

We live in a society where the value of a 'thing' is often linked to, or determined by, what it can do or what it can be used for. Underlying this is an assumption about the value of things: their only value consists in what they can do. Call this instrumentalism. Instrumentalism about technology more generally is an especially intuitive idea. Technological artifacts ('things') have no agency of their own and would not exist without humans; they are therefore simply tools that are there to be used by us. Their value lies in how we decide to use them, which opens up the possibility of radical improvement to our lives. On this view, technology is a neutral means with which we can achieve human goals, whether these be good or evil.

In contrast to this instrumentalist view there is another view of technology, which claims that technology is not neutral at all, but instead has a controlling or alienating influence on society. Call this view technological determinism. Such determinism about technology is often justified by, well, looking around. The determinist thinks that technological systems take us further away from an 'authentic' reality, or that those with power develop and deploy technologies in ways that increase their ability to control others.

So, the instrumentalist view sees some promise in technology, and the determinist not so much. However, there is in fact a third way to think about this issue: mediation theory. Dutch philosopher Peter-Paul Verbeek, drawing on the postphenomenological work of Don Ihde, has proposed a "thingy turn" in our thinking about the philosophy of technology. We can call this the mediation account of technology, and it takes us away from both technological determinism and instrumentalism. Here's how. Read more »

How Can We Be Responsible for the Future of AI?

by Fabio Tollon 

Are we responsible for the future? In some very basic sense of responsibility we are: what we do now will have a causal effect on things that happen later. However, such causal responsibility is not always enough to establish whether or not we have certain obligations towards the future. Be that as it may, there are still instances where we do have such obligations. For example, our failure to adequately address the causes of climate change (us) will ultimately lead to future generations having to suffer. An important question to consider is whether we ought to bear some moral responsibility for future states of affairs (known as forward-looking, or prospective, responsibility). In the case of climate change, it does seem as though we have a moral obligation to do something, and that should we fail, we are on the hook. One significant reason for this is that we can foresee that our actions (or inactions) now will lead to certain desirable or undesirable consequences. When we try to apply this way of thinking about prospective responsibility to AI, however, we might run into some trouble.

AI-driven systems are often by their very nature unpredictable, meaning that engineers and designers cannot reliably foresee what might occur once the system is deployed. Consider the case of machine learning systems that discover novel correlations in data. In such cases, the programmers cannot predict what results the system will spit out. The entire purpose of using the system is that it can uncover correlations that are in some cases impossible to see with human cognitive powers alone. Thus, the threat seems to come from the fact that we lack a reliable way to anticipate the consequences of AI, which perhaps makes being responsible for it, in a forward-looking sense, impossible.
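To make this concrete, here is a minimal sketch in Python (using only numpy and a toy dataset invented for illustration): the programmer writes the search procedure, but which correlation the system actually surfaces depends on the data it is given, not on anything the programmer specified in advance.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Toy dataset: 500 samples of 50 noisy features, with one relationship
# buried in it. In a real deployment the data, and hence whatever the
# system discovers, come from the world rather than from the programmer.
n_samples, n_features = 500, 50
data = rng.normal(size=(n_samples, n_features))
data[:, 37] = 0.8 * data[:, 4] + 0.2 * rng.normal(size=n_samples)

# The "search procedure" the programmer actually writes: scan all
# feature pairs and report the strongest correlation found.
corr = np.corrcoef(data, rowvar=False)   # 50 x 50 correlation matrix
np.fill_diagonal(corr, 0.0)              # ignore trivial self-correlation
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"Strongest discovered link: features {i} and {j} (r = {corr[i, j]:.2f})")
```

Nothing in the procedure names features 4 and 37; that answer emerges from the data, which is the sense in which even this trivial system's output outruns its designer's foresight.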

Essentially, the innovative and experimental nature of AI research and development may undermine the relevant control required for reasonable ascriptions of forward-looking responsibility. However, as I hope to show, when we reflect on technological assessment more generally, we may come to see that just because we cannot predict future consequences does not necessarily mean there is a "gap" in forward-looking obligation. Read more »