Philip Ball in Prospect Magazine:
Most AI systems used today—whether for language translation, playing chess, driving cars, face recognition or medical diagnosis—deploy a technique called machine learning. So-called “convolutional neural networks,” silicon-chip versions of the highly interconnected web of neurons in our brains, are trained to spot patterns in data. During training, the strengths of the interconnections between the nodes in the neural network are adjusted until the system can reliably make the right classifications. It might learn, for example, to spot cats in a digital image, or to generate passable translations from Chinese to English. Although the ideas behind neural networks and machine learning go back decades, this type of AI really took off in the 2010s with the introduction of “deep learning”: in essence, adding more layers of nodes between the input and output. That’s why DeepMind’s programme AlphaGo is able to defeat expert human players in the very complex board game Go, and Google Translate is now so much better than in its comically clumsy youth (although it’s still not perfect, for reasons I’ll come back to).
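To make that description concrete, here is a minimal sketch of a training loop in which connection strengths (weights) are adjusted until the network classifies its inputs correctly, with "depth" supplied by stacking extra layers between input and output. It assumes PyTorch; the toy data, layer sizes and two-class setup are invented purely for illustration and are not from the review.

```python
# Hypothetical sketch: a small "deep" network trained on random toy data,
# showing how connection strengths are adjusted during training.
import torch
import torch.nn as nn

# Stacking more hidden layers between input and output is, in essence,
# what "deep" learning refers to.
model = nn.Sequential(
    nn.Flatten(),                     # e.g. a 28x28 grayscale image -> 784 values
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),                 # two classes, e.g. "cat" vs "not cat"
)

# Stand-in data: 256 random "images" with random labels.
images = torch.randn(256, 1, 28, 28)
labels = torch.randint(0, 2, (256,))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the current weights?
    loss.backward()                        # compute how to nudge each weight
    optimizer.step()                       # adjust the connection strengths
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```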
In Artificial Intelligence, Melanie Mitchell delivers an authoritative stroll through the development and state of play of this field. A computer scientist who began her career by persuading cognitive-science guru Douglas Hofstadter to be her doctoral supervisor, she explains how the breathless expectations of the late 1950s were left unfulfilled until deep learning came along. She also explains why AI’s impressive feats to date are now hitting the buffers because of the gap between narrow specialisation and human-like general intelligence. The problem is that deep learning has no way of checking its deductions against “common sense,” and so can make ridiculous errors. It is, say Gary Marcus and Ernest Davis, “a kind of idiot savant, with miraculous perceptual abilities, but very little overall comprehension.” In image classification, not only can this shortcoming lead to absurd results, but the system can also be fooled by carefully constructed “adversarial” examples. Pixels can be rejigged in ways that, to us, are indistinguishable from the original but which AI confidently garbles, so that a van or a puppy is declared an ostrich. By the same token, images can be constructed from what looks to the human eye like random pixels but which AI will identify as an armadillo or a peacock.
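The review does not spell out how such adversarial images are built. One standard construction (the "fast gradient sign method", not necessarily the one behind the ostrich examples above) nudges every pixel a tiny amount in the direction that most increases the classifier's error. The sketch below assumes PyTorch and torchvision; the placeholder model, image and class index are made up for illustration.

```python
# Hypothetical sketch of the fast gradient sign method (FGSM), one common way
# of constructing adversarial images. The model here is a placeholder, not one
# of the systems discussed in the review.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=None)   # placeholder classifier (untrained)
model.eval()

image = torch.rand(1, 3, 224, 224)      # stand-in for "a van or a puppy"
true_label = torch.tensor([555])        # an arbitrary class index
epsilon = 0.01                          # per-pixel nudge too small to notice

# Ask how the loss changes with respect to each pixel of the input image.
image.requires_grad_(True)
loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()

# Shift every pixel slightly in the direction that increases the error:
# visually indistinguishable to us, yet chosen to push the model off course.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```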
More here.