by Fabio Tollon
You’ve heard about AI. Either it’s coming for your job (automation), your partner (sex robots), or your internet history (targeted adverts). Each of the aforementioned represents a very serious threat (or improvement, depending on your predilections) to the economy, romantic relationships, and the nature of privacy and consent, respectively. But what exactly is Artificial Intelligence? The “artificial” part seems self-explanatory: entities are artificial in the sense that they are manufactured by humans out of pre-existing materials. But what about the so-called “intelligence” of these systems?
Our own intelligence has been an object of inquiry for thousands of years, as we have been attempting to figure out how a collection of bits of matter in motion (ourselves) can perceive, predict, manipulate, and understand the world around us. Artificial Intelligence, as eloquently summarized by Keith Frankish and William Ramsey, is
“a cross-disciplinary approach to understanding, modeling, and replicating intelligence and cognitive processes by invoking various computational, mathematical, logical, mechanical, and even biological principles and devices.”
AI attempts not just to understand intelligent systems, but also to build and design them. Interestingly enough, it may be that we could design and build an “intelligent” system without understanding the nature of intelligence at all. It is important to keep in mind here the distinction between narrow (or domain-specific) and broad (or general) AI. Narrow AI systems are optimized for performing one specific task, whereas broad AI systems aim to replicate (and surpass) many (if not all) of the capacities associated with human intelligence.
An easy example is a calculator: it is programmed in a certain way, and once we have provided the input and hit the equals sign, an output is provided. This output, if generated by a human being, would constitute a reason to view that person as intelligent. Does this mean the calculator is intelligent? Perhaps in this narrow sense the calculator could be viewed as intelligent, but certainly not in the broad sense we associate with human intelligence. The example does suggest, however, that narrowly intelligent behavior in artifacts is possible, irrespective of whether we understand what general intelligence is.
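To make that narrowness concrete, here is a minimal, purely illustrative sketch in Python (the `narrow_calculator` function is hypothetical, not drawn from any real system): a fixed mapping from inputs to outputs that looks competent within its single domain and has no grasp of anything beyond it.

```python
# A toy illustration of "narrow intelligence": one task, done well,
# with no understanding of anything outside that task.

def narrow_calculator(a: float, op: str, b: float) -> float:
    """Evaluate a single arithmetic operation and nothing more."""
    operations = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
        "*": lambda x, y: x * y,
        "/": lambda x, y: x / y,
    }
    if op not in operations:
        raise ValueError(f"Unsupported operation: {op}")
    return operations[op](a, b)

# Within its domain, the output looks "intelligent" enough:
print(narrow_calculator(12, "*", 7))  # 84
# Outside it, there is nothing there: no concept of number,
# no concept of the world, just a lookup over four rules.
```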
But this cuts both ways: artificial systems designed and proclaimed to be “intelligent” may not be so at all. Consider the historical bias that was uncovered in Amazon’s automatic recruitment screening system. The system was found to discriminate against women, which indicates that the historical data fed to the system had this bias to begin with (the system was discontinued in early 2017). What emerges here is that while we may create narrowly “intelligent” systems without fully understanding “general” intelligence, we can also create “idiotic” systems that recreate the injustices of the past, and we have a pretty bad track record as a species in this regard. There is a danger, therefore, in over-ascribing responsibility to systems that we do not fully understand, and this danger ranges from the very trivial (your Roomba driving over your foot) to the existential (a superintelligent AI eliminating our species; a bit more on that later).
But perhaps there is a way to draw the line between the intelligence of machines and that of humans: one could argue that the capacity to be held morally responsible is reserved for entities that have an embodied form of intelligence. Human beings are embodied in the sense that we do not encounter our bodies out there in the world. Rather, our bodies are the very mechanism through which we come to experience the world. Such a distinct form of embodiment currently eludes AI, as these systems (at present) do not exhibit meaningful, experientially valenced interactions with the world. Perhaps our lived experience or corporeality is an essential part of any undertaking that seeks to examine our cognitive capacities. Moreover, when we evaluate moral situations, we tend to think in terms of giving moral stakeholders their due: giving them what they deserve based either on how they have behaved or on whether they have been harmed. Does it make sense to give an unfeeling predator drone “what it deserves”? Surely it cannot be “deserving” of anything, as it is not embodied, and so cannot justifiably be of any moral relevance (in the specific sense of being held morally responsible; it is, of course, inevitable that such drones will be causally efficacious in the production of moral harms).
Worryingly, however, recent trends in AI research have been geared toward the creation of artifacts that act in ways that are increasingly autonomous, adaptive, and interactive, which may eventually lead to these entities performing actions entirely independently of human beings. Indeed, the explicit goal of advanced drone weapons systems is that they would be able to select and attack targets without intervention by a human operator. Common sense perhaps dictates that we should consider the failings of even the most autonomous artificial system not as a failure of the system itself, but as a failing of those who created, designed, funded, and made use of it.
Considerable work therefore needs to be done on outlining and guarding against the widespread hubris on the part of humanity with respect to our artificial companions. There is a growing sense of unease among scholars working in AI-related fields about the potential emergence of a “superintelligence”. The worry is that such an intelligence would very quickly surpass human levels of intelligence and may come to pose an existential risk to the future of humanity, in that it might not necessarily have anthropomorphic goals. Just consider the fact that the most efficient neurons in our body communicate with each other at a languid 430 km/h. Contrast this with the potential of computers, which can engage in internal communication at close to the speed of light (that’s 300,000 km per second). Moreover, there is a substantial constraint on how big human brains can get: for the moment, they are stuck within our craniums (see here, where I argue that our minds could in fact be extended in this sense). For a computer, size is not an issue, as it is theoretically plausible to turn the entire West Coast of America into a supercomputer (although doing so would probably disrupt tech innovation substantially, or perhaps one could argue the West Coast is already such a supercomputer).
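To put those two figures side by side, here is a rough back-of-the-envelope comparison (a sketch only, using the numbers quoted above) that gives a sense of the gap:

```python
# Rough comparison of signal speeds, using the figures quoted above.

neuron_speed_kmh = 430                        # fastest neural conduction, ~430 km/h
neuron_speed_kms = neuron_speed_kmh / 3600    # ≈ 0.12 km/s

light_speed_kms = 300_000                     # speed of light, ~300,000 km/s

ratio = light_speed_kms / neuron_speed_kms
print(f"Electronic signalling is roughly {ratio:,.0f} times faster")  # ≈ 2,500,000
```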
Embedded in this line of reasoning is the fact that current and future technologies will be able to outperform human beings at certain (narrowly specified, for the moment) tasks, and thus could reach levels of perfection not even imaginable by flesh-and-blood moral agents. A perhaps counter-intuitive implication of this view is that human beings might become archaic vestiges of what the ideal moral agent may have been, unable to keep pace with the moral development of AI. In light of this, we might actually be better off creating a species of robots unencumbered by our evolutionary baggage, one that would constitute a massive moral improvement on us. We should, however, be sensitive to the fact that benevolence and intelligence need not go hand in hand: that an AI is superintelligent does not imply that it will adopt a view of morality congruous with human survival.
To bring this back to the opening remarks on intelligence, recall that in order to make a narrowly intelligent system one does not necessarily have to understand what general intelligence is. Given this, and the potential dangers associated with the development of AI, we need to tread carefully. There is a chance that we will create a system that is intelligent in ways we do not understand, and so we may find ourselves closing the stable door after the horse has already bolted.