Understanding (Artificial) Intelligence

Ali Minai in Barbarikon:

This piece in the Atlantic from a few months ago is a wonderful profile of Douglas Hofstadter and a timely exposition of an issue at the core of the artificial intelligence enterprise today.

I read Doug Hofstadter's great book, Gödel, Escher, Bach (or GEB, as everyone calls it) in 1988 as a graduate student working in artificial intelligence – and, as with most people who read that book, it was a transformative experience. Without doubt, Hofstadter is one of the most profound thinkers of our time, even if he chooses to express himself in unconventional ways. This piece captures both the depth and tragedy of his work. It is the tragedy of the epicurean in a fast food world, of a philosopher among philistines. At a time when most people working in artificial intelligence have moved on to the “practical and possible” (i.e., where the money is), Hofstadter doggedly sticks with the “practically impossible”, in the belief that his ideas and his approach will eventually recalibrate the calculus of possibility. The reference to Einstein at the end of the piece is truly telling.

My main concern, however, is the deeper point made in the Atlantic article: The degree to which the field of artificial intelligence (AI) has abandoned its original mission of replicating human intelligence and swerved towards more “practical” applications based on “Big Data”. This point was raised vociferously by Fredrik deBoer in a recent piece, and much of this post is a response to his critique of the current state of AI.

deBoer begins with a simplistic dichotomy between what he terms the “cognitive” and the “probabilistic” models of intelligence. The former, studied by neuroscientists and psychologists – grouped together under the term “cognitive scientists” – was the original concern of AI, which sought to first understand and then replicate human intelligence. What dominates today, instead, is the latter approach, which seeks to achieve practical capabilities such as machine translation, text analysis, recommendation, etc., through the application of statistics to large amounts of data, without any attempt to “understand” the underlying processes in cognitive terms. deBoer sees this as a retreat by AI from its original lofty goals to mere praxis, driven, in his opinion, by the utter failure of cognitive science to elucidate how real intelligence works.

More here.