“I would like to understand things better,
but I don't want to understand them perfectly.”
~ Douglas Hofstadter, Metamagical Themas
A few weeks ago I went to an evening of presentations by startups working in the artificial intelligence field. By far the most interesting was a group that for several years had been quietly working on using AI to create a new compression algorithm for video. While this may seem a niche application, their work in fact responds to a pressing need. As demand for video streaming, first in high definition and increasingly in formats such as 4K, hopelessly outruns the buildout of new infrastructure, there is a commensurate need for ever-greater video compression ratios. It is the only viable way to keep up with the requirements of video streaming, and companies such as Netflix are willing to pay boatloads of cash for the best technologies. But the presentation also crystallized some interesting and important aspects of AI that reach well beyond niche applications, and even beyond the alarmist predictions of people like Stephen Hawking, Elon Musk and Bill Gates. What are we really creating here?
This startup, bankrolled by a former currency trader who, as founder and CEO, was the one giving the talk, has engaged in a three-step development program. The first step involved feeding their AI – charmingly named Rita – every single video compression algorithm already in use, and having it (her?) cherry-pick the best aspects of each. The ensuing Franken-algorithm has been tested and confirmed to provide lossless compression at a rate of 75%, which is best in its class. The second step in their program, currently in development, charges Rita with taking everything learned in the first step and creating its own algorithm. The expectation is that this will reach up to 90% compression, which is really rather extraordinary.
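(A brief aside for the technically curious: nothing of Rita's internals was shown, so what follows is only a toy sketch of the two ideas in play – a compression ratio as the fraction of bytes saved by a lossless codec, and ‘fitness selection' as simply keeping whichever candidate saves the most. The zlib levels below are a hypothetical stand-in for a vastly richer space of strategies, not anything the startup described.)

```python
import zlib

def compression_ratio(data: bytes, level: int) -> float:
    """Fraction of the original size saved by lossless (zlib) compression."""
    compressed = zlib.compress(data, level)
    return 1 - len(compressed) / len(data)

def select_fittest(data: bytes, candidates=range(1, 10)):
    """Toy 'fitness selection': score every candidate setting, keep the best."""
    scored = [(compression_ratio(data, lvl), lvl) for lvl in candidates]
    return max(scored)  # fitness here is simply the ratio of bytes saved

if __name__ == "__main__":
    sample = b"highly repetitive, frame-like video-ish data " * 2000
    ratio, level = select_fittest(sample)
    print(f"best candidate: level {level}, {ratio:.0%} of bytes saved")
```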
So far, so good. The final step of the program – one expected to yield a mind-boggling 99% compression ratio – is where things get really interesting. For Rita's creators are now ‘entrusting her' (I know, the more you talk about AI, the more hopeless it is to attempt avoiding anthropomorphization) with the task of creating her own programming language, one dedicated solely to video compression. There was an appreciative gasp in the room when the CEO outlined this brave next step, and during the Q&A I pressed him to explain what this meant.
The exchange went something like this:
Me: Ok, so I understand the first two steps. People have been using techniques of fitness selection to evolve algorithms in ways that humans could not design or even anticipate. Also, there is no reason why an AI couldn't evolve its own algorithm, given a well-defined outcome and enough inputs. But this last step – the creation of an entirely new language, built for one purpose only – is this a language that will then be available to human programmers via some sort of interface?
CEO: No. It will be a black box. We won't know how Rita came to design what she did, or how it works. Just that it does what it needs to do.
Me, stammering: But…but…how do you feel about that?
Random guy in the audience: How does he feel about it? He feels pretty good! After all, he's a shareholder.
At which point the entire room erupted in laughter.
It quickly became apparent, however, that the intent of my question had been misconstrued, since the discussion then turned to what always seems to be the elephant in the room when it comes to AI research: What are the moral implications of surrendering our agency, of which this seemed to be a prime example? The usual suspects were trotted out – Skynet, the Matrix, HAL 9000, blah blah blah. (They could have also included Colossus: The Forbin Project, a 1970 sci-fi thriller along the same lines, whose stills I include here.) But my point wasn't about whether or how we ought best welcome our new robotic overlords. Rather, it was about legibility. What happens when we create things that then go ahead and create other things that we don't understand, or even have access to? More to the point, what is lost?
*
Arguably, this signifies an inversion of what is understood as ‘progress', at least in an epistemological sense. For example, plant and animal breeders have refined and elaborated breeds to bring out desirable traits (drought resistance, hunting skills, cuteness) for hundreds, if not thousands, of years, without knowing the underlying genetic principles. The identification of DNA as the enabling epistemological substrate of this enterprise has rapidly accelerated these activities, but it has only added to the general illumination of processes that were already known. Genetically modified organisms fall into this category, even if their eventual consequences do not. What AIs such as Rita are empowered to effect, on the other hand, is a deliberately sponsored obfuscation of these processes of knowing. The implied trajectory is that we are willing to create tools that will help us do more things in the world, but that in the process we strike a somewhat Faustian bargain, pleased to arrive at our destination but forfeiting the knowledge of how we got there.
Now, I want to be clear that I am not at all interested in making a moral argument. Whatever Hollywood would have us believe, there seems little point in arguing whether AIs will turn out to be good or evil. Such anxieties are more redolent of our narcissistic desire to feel threatened by apocalypses of our own manufacture (e.g., nuclear war) than of a genuine willingness to think through what it might mean for a machine intelligence to be authentically evil, or good, or – which is much more likely – something in between. And the above exchange with the startup's CEO illustrates the blithe manner in which capital will always perform an end-run around these considerations. “Being a shareholder” is sufficient justification for the illegibility of the final outcome, with the further implication that we should all be so lucky as to be shareholders in such enterprises.
Rather, any moral argument should be understood as a proxy for how alien a given technology seems to us. Perhaps our tendency to assign it a moral status is more indicative of how unsure we are about the role it may play in society. The operational inscrutability of an AI (and not, I should emphasize, its ‘motivations') makes the possible consequences so unpredictable that we may seek to legislate its right to exist, and the easiest means of enabling a legislative act is to locate it on a moral continuum.
The use of the word ‘legislate' is appropriate here, since what we are attempting to do is to, quite literally, make the technology and its action in society legible to us. Linguistically, both words share the same Latin root, legere. And if we cannot make the phenomenon of AI legible, then we may at least quarantine its actions and sphere of influence. In William Gibson's novel Neuromancer, this was the remit of the Turing Registry, which enforced an uneasy peace between AIs, the corporations that run them, and the world at large:
The Turing Registry, named after the father of modern computing, operates out of Geneva. Turing is technically not a megacorp, but instead a registry, and the closest thing to a body of government as far as artificial intelligences are concerned. The Turing Registry exists to keep corporations who use AIs and the AIs themselves in check. Every AI in existence, whether directly connected to the matrix or not, must be registered with Turing to enjoy the full rights of an AI. AIs registered with Turing enjoy Swiss citizenship, though the hardware itself that contains the 'soul' is connected to enough explosives to incapacitate the being. Any AI suspected of attempting to remove this device, escape Turing control, or enhance itself without proper Turing approval is controlled immediately.
Aside from the delicious detail that AIs are Swiss citizens (hey, it's not just corporations that can be people), what Gibson indicates to us is that the battle for legibility, in an epistemological sense, is already lost. Pre-emptively quarantining and, failing that, blowing up miscreant AIs is the best that the inhabitants of Neuromancer can hope for. Of course, the narrative arc of the novel concerns precisely this: the protean manner in which an AI attempts to transcend this restricted state. And Gibson implies that humanity, with its toxic mix of curiosity, greed and anthropomorphizing tendencies, is all too willing to be enlisted in this task.
*
And yet, to a large extent AI as the container par excellence for these anxieties is just a red herring, for this kind of illegibility is already rampant. Superficially, we seem to require a locus – a concrete something to which we can point and say “That's an AI” – that then becomes the appointed site for these anxieties. In this sense we are content to believe that, when we watched Watson clobbering his fellow contestants on Jeopardy!, the AI was ‘located' behind a lectern, with his hapless human competitors standing side-by-side behind their own lecterns: a level playing field if there ever was one. Our imagination does not accede to the notion that Watson is a large bank of computers located off-stage – in a different state, even – and ministered to by a team of highly trained scientists and engineers.
In fact, AI is not needed at all to realize the anxieties of illegibility. It certainly ‘embodies' those anxieties successfully, despite its own distinct lack of embodiment, by playing on the idea that an AI is something that is kind of like us, but isn't us, but perhaps wants to become more like us, until in the end it becomes something decidedly not like us at all, at which point it will already be too late (see: Hollywood). But the traces of illegibility are already ubiquitous, in the form of algorithms that may not fall under the rubric of AI but certainly instigate cascades of events that correspond to what we would identify as AI-like consequences.
Consider this 2011 talk by developer and designer Kevin Slavin (you can get the Cliffs Notes version in his TED Talk): the fact that, at the time, about 70% of all stock trading was driven by algorithms buying and selling shares to other algorithms. Sure, computer scientists would tweak things here and there, but the cumulative effect of unassisted trading has led to some extraordinary outcomes. Most dramatically, the Flash Crash of 2010, which saw the Dow Jones Industrial Average plunge about 9% in a matter of minutes and on no news at all, was likely precipitated by a few rogue algorithms. In the absence of substantive regulation, the markets have learned to live with daily flash crashes.
The financial markets do not hold a monopoly on unintended consequences, however. Slavin gives further examples of Algorithms Gone Wild, including a funny anecdote concerning a biology textbook that was initially listed on Amazon at $1.7 million, only to have the price rise, within a few hours, to $23.6 million – which was odd because the book is out of print, and therefore no one was either selling or buying it. To Slavin, these are “algorithms locked in loops with each other”, engaging in a form of silent combat. Critical to this point is that, while these developments occur at lightning speed, the disambiguation, if humans even choose to pursue it, takes much longer. In the case of the Flash Crash, it took the SEC five months to issue its report, which was heavily criticized. To this day, there is no consensus on what actually happened in the markets that day. As for the biology textbook, it lives on merely as an anecdote for TED audiences.
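(Slavin's ‘locked in loops' image is mechanical enough to sketch. Assume – purely hypothetically, since the actual repricing rules were not part of the talk – that one bot prices its copy at a markup over its rival's while the rival shadows it just below; because the two multipliers compound to more than one, the listed price ratchets upward on every cycle with no human anywhere in the loop.)

```python
# Toy sketch of two repricing bots locked in a feedback loop.
# The multipliers and starting price are illustrative, not the actual
# values from the Amazon incident Slavin describes.
def repricing_loop(price_a: float, price_b: float, cycles: int = 20) -> None:
    for cycle in range(1, cycles + 1):
        price_a = 1.27 * price_b    # bot A: price at a markup over the rival
        price_b = 0.998 * price_a   # bot B: shadow the rival just below
        print(f"cycle {cycle:2d}: A=${price_a:,.2f}  B=${price_b:,.2f}")

if __name__ == "__main__":
    repricing_loop(price_a=35.00, price_b=35.00)  # prices compound upward
```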
So the consequences of an AI-like world are, in fact, here already. To invite AIs into the party is more or less beside the point. Our world has become so deeply driven by software that our capacity to ‘read' what we have created is already substantially, and in all likelihood permanently, eroded. That this has happened only gradually and in subtle, nearly invisible ways has made it that much more difficult to realize. In this sense, AI, or at least a certain way of thinking about AI, may provide an interesting counterpoint.
If one goes back to its roots, AI research sought to understand intelligence as it already existed in the world, and to take that learning and bring it in silico. That this has so far failed – despite substantial progress in the brain sciences – is uncontroversial and well understood. In parallel, the precipitous decline in the costs of computing, bandwidth and storage has enabled the rise of probabilistic approaches to intelligence, rather than behavioral ones, hence the primacy of the algorithm. As Ali Minai, professor at the University of Cincinnati, writes:
AI, invented by computer scientists, lived long with the conceit that the mind was ‘just computation' – and failed miserably. This was not because the idea was fundamentally erroneous, but because ‘computation' was defined too narrowly. Brilliant people spent lifetimes attempting to write programs and encode rules underlying aspects of intelligence, believing that it was the algorithm that mattered rather than the physics that instantiated it. This turned out to be a mistake. Yes, intelligence is computation, but only in the broad sense that all informative physical interactions are computation – the kind of ‘computation' performed by muscles in the body, cells in the bloodstream, people in societies and bees in a hive.
Minai goes on to equate intelligence with ‘embodied behavior in a specific environment'. What I find promising about this line of inquiry is its modesty, but also its ambition. If we begin from the premise that life has done a pretty fine job of not just evolving behavioral intelligence, but of doing so sustainably, this is a paradigm that leads us to a certain way of looking at not just the kind of work machine intelligence can do, but also the place it ought to occupy in relation to all the things that are already in the world. This is simply because this kind of intelligence can only exist on the basis of embodiment. In contrast, the bare algorithms running around in financial markets or anywhere else are much more akin to viruses.
I do not know if it is possible to actually create a machine intelligence based on these principles – after all, this is something that has eluded computer and cognitive scientists for decades and continues to do so. But I do believe that such an intelligence will be more legible to us, even if its internal workings remain inscrutable, because our relationship to it will be based on behavior. If Minai's school of thought has merit, this may well be a saving grace. On the other hand, if there is any substantial danger posed by AI, it comes from an utter lack of constraint or connection to the physical world. The issue is whether we as a society will offer ourselves any choice in the matter.