Welcome To Alphaville

by Misha Lepetic

“The secret of my influence has always been
that it remained secret.”
~ Salvador Dalí

Last month I looked at the short and ignominious career of @TayandYou, Microsoft's attempt to introduce an artificial intelligence agent to the spider's parlor otherwise known as Twitter. Hovering over this event is the larger question of how best to think about human-computer interaction. Drawing on a suggestion by the computer scientist and entrepreneur Stephen Wolfram, I put forward the concept of 'purpose' as such a framework. So what was Tay's purpose? Ostensibly, it was to 'learn from humans'. But releasing an AI into the wild leads to unexpected consequences. In Tay's case, interacting with humans proved so debilitating that it achieved neither its stated purpose nor its real, unstated goal, which was to build a massive database of the marketing preferences of the 18-24 demographic. (As a brief update, Microsoft relaunched Tay, and it promptly went into a tailspin of spamming everyone, replying to itself, and other spasmodic behaviors more appropriate to a less interesting version of Max Headroom.)

People have been releasing programs into the digital wild for decades now. The most famous example from the earlier, pre-World Wide Web internet was the so-called Morris worm. In 1988, Robert Tappan Morris, then a graduate student at Cornell University, was trying to estimate the size of the Internet (it's more likely that he was bored). Morris's program would copy itself onto a target computer by exploiting known vulnerabilities in common Unix services. It didn't do anything malicious, but it did take up valuable memory and processing power. Morris's code also included instructions governing replication: the worm asked each target whether a copy was already running, but one time in seven it installed itself anyway – a safeguard against machines faking an infection that, in practice, let copies pile up on the same machine until it slowed to a crawl. More importantly, there was no command-and-control system in place. Once launched, the worm was completely autonomous, with no way to change its behavior. Within hours, the fledgling network of about 100,000 machines had nearly ground to a halt, and it took several days of work for the affected institutions – mostly universities and research institutes – to figure out how to expunge the worm and undo the damage.
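That one-in-seven rule is worth pausing on, because it is the whole story of the slowdown in miniature. Here is a minimal sketch of the decision, written in Python purely for illustration (the actual worm was written in C, and none of these names come from its source), showing how a check meant to limit replication still lets copies accumulate:

```python
import random

REINFECTION_ODDS = 7  # the worm's hard-coded one-in-seven chance


def should_install(already_infected: bool) -> bool:
    """Decide whether to install a new copy on a target machine.

    The worm asked whether a copy was already running, but to defeat hosts
    that faked an "already infected" reply it installed itself anyway one
    time in seven. This toy function mirrors only that decision.
    """
    if not already_infected:
        return True
    return random.randrange(REINFECTION_ODDS) == 0


if __name__ == "__main__":
    # Rough simulation: repeated infection attempts against one machine show
    # how copies pile up even though the machine reports itself as infected.
    copies = 0
    for _ in range(1000):
        if should_install(already_infected=copies > 0):
            copies += 1
    print("copies running after 1000 attempts:", copies)  # roughly 1000/7
```

Run long enough, a loop like this leaves a machine hosting a new copy for roughly one attempt in seven – which, at the rate the worm probed its neighbors, was more than enough to exhaust memory and processing power.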

This is a good example of how the frictionless nature of information technology serves to amplify both purpose and consequence. And the consequences of Morris's worm went far beyond slowing down the Internet for a few days. As Timothy Lee noted in the Washington Post on the occasion of the worm's 25th anniversary:

Before Morris unleashed his worm, the Internet was like a small town where people thought little of leaving their doors unlocked. Internet security was seen as a mostly theoretical problem, and software vendors treated security flaws as a low priority. The Morris worm destroyed that complacency.

This narrative of innocence lost has remained relevant to our experience with technology. Granted, the Internet was small and chummy back in 1988 – after all, the invention of the web browser was still about five years away – but the fact that 99 lines of code could launch an entire industry in computer security is worth contemplating. That is, until you realize that if it hadn't been Morris's 99 lines, it would have been someone else's. The Internet is now many orders of magnitude larger and more essential to our society, but I contend that the same dynamic of purpose and consequence remains at work. There is a clear lineage to be drawn from Morris to Microsoft's Tay: we expect one thing to happen, and while that thing may indeed come to pass, a whole lot of other things also come into play.

*

This brings me to another recent development in AI that's somewhat more serious than Tay, namely the emergence of AlphaGo, an artificial intelligence schooled in the ancient Chinese strategy game Go. As has been widely reported, AlphaGo beat Lee Se-dol, one of the world's strongest players, by a decisive margin of four games to one in South Korea. AlphaGo accomplished this through an extensive training regimen that included playing another version of itself several million times (The Verge covered the series in depth here).
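Self-play is the part worth dwelling on. AlphaGo's real pipeline pairs deep neural networks with Monte Carlo tree search, which is well beyond a blog aside, but the underlying idea – a policy that improves by playing copies of itself and learning from the outcomes – can be sketched in a few dozen lines. The toy below does this for the much simpler game of Nim; it illustrates only the principle, and every name in it is invented for the example rather than drawn from DeepMind's code.

```python
import random
from collections import defaultdict

# Toy self-play for Nim: players alternate taking 1-3 sticks from a pile,
# and whoever takes the last stick wins. Both sides share one value table,
# so every game is literally the policy playing a copy of itself.

ACTIONS = (1, 2, 3)
Q = defaultdict(float)       # Q[(sticks_remaining, action)] -> learned value
EPSILON, ALPHA = 0.1, 0.5    # exploration rate, learning rate


def choose(sticks: int) -> int:
    """Pick a move: usually the best-known one, occasionally a random one."""
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])


def play_one_game() -> None:
    """Play one self-play game and update the shared table from the result."""
    sticks = random.randint(5, 15)
    history = []                          # (state, action) pairs, in order
    while sticks > 0:
        action = choose(sticks)
        history.append((sticks, action))
        sticks -= action
    # The side that made the last move won. Walk backwards through the game,
    # crediting the winner's moves with +1 and the loser's with -1.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward


for _ in range(50_000):
    play_one_game()

# Optimal play leaves the opponent a multiple of four sticks; from 9 sticks
# that means taking 1. The learned policy should (usually) agree.
best = max(ACTIONS, key=lambda a: Q[(9, a)])
print("From 9 sticks the learned policy takes", best)
```

The point of the sketch is the loop, not the game: neither side is ever told what good play looks like, yet after enough games against itself the shared table converges on the winning strategy.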

In the case of AlphaGo, the purpose seems clear: win at Go – which it did, and handily. But we don't get the deeper context, or, in the parlance of clickbait titles, the “You won't believe what happens next”. This is partly the fault of the way the mainstream media constructs its reporting today: another opportunity to crow about how machines will soon overtake us, and then on to the next shiny object that commands the news cycle's attention. In fact, AlphaGo is but a step in a long, iterative process begun decades ago by DeepMind's founder and CEO, Demis Hassabis, who lays it all out quite clearly in this lecture at the British Museum.

The larger purpose of this process, of which AlphaGo is merely a symptom, is, in Hassabis's own words, “to solve intelligence, and then use that to solve everything else”. Obviously we could spend quite a bit of time unpacking what he means by any of the key terms in that mission statement: What is intelligence? How do you know when you've solved it? What is everything else, and who gets to decide that? Seen within this larger context, an AI winning at Go goes from being one of the field's holy grails to a digital cairn, marking an event on the way to something much greater, and more ambiguous.

As an example, consider Watson, IBM's Jeopardy-winning juggernaut. Perhaps because Jeopardy is a game that seems intrinsically more human, its victory made a more substantial impression on the popular consciousness than AlphaGo's feat. But what is Watson doing today? Is it, to borrow a classic dig, “currently residing in the ‘where are they now' file”? Not at all. Watson is an active revenue stream for IBM, although exactly how much it earns is unknown, since the actual numbers are, for the time being, rolled up into the company's larger Cognitive Solutions division. Watson's engagements are remarkably eclectic, including “helping doctors improve cancer treatment at Memorial Sloan Kettering and employers analyze workplace injury reports.” Watson is also looking forward to providing insight into case law. And this is all in addition to applying its talents to the kitchen.

What else is Watson up to? Going back to Stephen Wolfram's discussion of AI that I referenced last month, I was struck by his vague indifference toward certain applications. For example, he says:

I was thinking the number one application was going to be customer service. While that's a great application, in terms of my favorite way to spend my life, that isn't particularly high up on the list. Customer service is precisely one of these places where you're trying to interface, to have a conversational thing happen. What has been difficult for me to understand is when you achieve a Turing test AI-type thing, there isn't the right motivation. As a toy, one could make a little chat bot that people could chat with.

This is, in fact, exactly one of the businesses that Watson is in. Any sufficiently open-minded entrepreneur could rattle off a dozen opportunities where he or she could really use a conversant machine intelligence. And the larger the scale, the greater the opportunity. Just as Tay could talk to millions of millennials, Watson can talk to millions of customers. Meet IBM Watson Engagement Advisor, which is replacing entire call centers as we speak.

Moreover, Watson is not just a disembodied voice on the other end of a phone line. One of the great lines of technological convergence we have already begun to witness is the unification of AI with robotics, which carries AI over into embodiment – another ball game entirely. Witness this exchange between a bank customer and a Pepper robot plugged into Watson. (Obviously, this is a promotional video, but I am slightly disoriented by the fact that IBM is hip enough to use words like ‘bummer' when describing the risks of an adjustable-rate mortgage.) It is not difficult to imagine thousands of these robots, with their aw-shucks attitude, all connected to a central AI that is constantly learning and refining itself based on inputs provided by humans. In fact, this is not some Alpha 60-style speculation; it is already happening.

These examples illustrate the big takeaway about how Watson is being deployed: Watson is no sacred cow. IBM views it as a utility that other parts of its business can and should leverage – hence its use not only in the Cognitive Solutions division, but also in the much larger Global Business Services division. The general application of AI is exactly that: general, and the more general the better. IBM's managers and executives would much rather have a tool, or suite of tools, that they can apply promiscuously to any market opportunity that presents itself.

*

There is no reason to think that DeepMind, the Google-owned company behind AlphaGo, will approach its further development any differently. This is especially true if we are to take Hassabis's words seriously: “to solve intelligence, and then use that to solve everything else”. But as the ongoing integration of Watson into a business context shows us, ‘everything else' is really a proxy phrase for ‘everything where the money is'. I hasten to add that there is nothing inherently objectionable about this, but there is no guaranteed nobility in the future of these technologies, either. They will be used to chase profits wherever they may be found. This is the dilution, the ambiguation of purpose. In a very definite sense, we approach what Foucault was trying to teach us about power: its diffuse nature, its functioning at a remove.

Finally, an argument has been made in some quarters that all this AI stuff is going to be fine, since what we are really after is not artificial intelligence per se, but augmented intelligence. On the surface, the difference is promising, since it perpetuates the idea that machines will continue to be our servants, helping us see the world in new and different ways, enriching our experience of the things that motivate us in the first place. But the question I have for these optimists is simple: Who gets to be the person whose intelligence is augmented?

For example, Garry Kasparov, the chess champion whose 1997 defeat at the hands of IBM's Deep Blue heralded the beginning of the current era of man versus machine, went on to make play against chess engines an essential part of his training regimen. This additional training was one factor in his ability to maintain his dominance of the chess world for years afterward.

Likewise, Fan Hui, the European Go champion who was defeated by AlphaGo in the run-up to the matches against Lee Se-dol, joined the AlphaGo team as an advisor, once again lending resonance to the old saw “if you can't beat 'em, join 'em”. As a recent Wired article noted:

As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn't see before. And that makes him happy. “So beautiful,” he says. “So beautiful.”

Kasparov and Fan are rare birds, however, whose expertise and fame gave them the opportunity to attach themselves, lamprey-like, to the fast-swimming phenomenon that machine intelligence is becoming. But what about ordinary people – perhaps someone who recently lost their job to automation instigated by the same AI? Will they really have the opportunity to engage it in a didactic or even pleasurable capacity? Or will they be too busy job hunting to care? To quote Godard's all-powerful computer in 'Alphaville', “All is linked, all is consequence”.