Alan Turing, moralist

Turing

Scott Aaronson over at Shtetl-Optimized:

Strong AI. The Turing Test. The Chinese room. As I’m sure you’ll agree, not nearly enough has been written about these topics. So when an anonymous commenter told me there’s a new polemic arguing that computers will never think — and that this polemic, by one Mark Halpern, is “being blogged about in a positive way (getting reviews like ‘thoughtful’ and ‘fascinating’)” — of course I had to read it immediately.

Halpern’s thesis, to oversimplify a bit, is that artificial intelligence research is a pile of shit. Like the fabled restaurant patron who complains that the food is terrible and the portions are too small, Halpern both denigrates a half-century of academic computer science for not producing a machine that can pass the Turing Test, and argues that, even if a machine did pass the Test, it wouldn’t really be “thinking.” After all, it’s just a machine!

(For readers with social lives: the Turing Test, introduced by Alan Turing in one of the most famous philosophy papers ever written, is a game where you type back and forth with an unknown entity in another room, and then have to decide whether you’re talking to a human or a machine. The details are less important than most people make them out to be. Turing says that the question “Can machines think?” is too meaningless to deserve discussion, and proposes that we instead ask whether a machine can be built that can’t be distinguished from a human via a test such as his.)

More here.