Computers don’t give a damn: The improbability of genuine thinking machines

Photo: Refik Anadol's "Archive Dreaming" installation at SALT Galata, Istanbul, May 6, 2017, which uses machine learning to visualize nearly 2 million late-Ottoman-era documents and photographs from the SALT Research Archive. (Chris McGrath/Getty Images)

Tim Crane in the TLS:

The achievements of actual AI – that is, the kind of technology that makes your smartphone work – are incredible. These achievements have been made possible partly by developments in hardware (in particular the increased speed and miniaturization of microprocessors) and partly because of the access to vast amounts of data on the internet – both factors that neither Simon nor Dreyfus could have predicted. But it means that enthusiastic predictions for AI are still popular. Many believe that AI can produce not just the “smart” devices that already dominate our lives, but genuine thinking machines. No one says that such machines already exist, of course, but many philosophers and scientists claim that they are on the horizon.

To get there requires creating what researchers call “Artificial General Intelligence” (AGI). As opposed to a special-purpose capacity – like Deep Blue’s capacity to play chess – AGI is the general capacity to apply intelligence to an unlimited range of problems in the real world: something like the kind of intelligence we have. The philosopher David Chalmers has confidently claimed that “artificial general intelligence is possible … There are a lot of mountains we need to climb before we get to human-level AGI. That said, I think it’s going to be possible eventually, say in the 40-to-100-year time frame”. The philosophers John Basl and Eric Schwitzgebel are even more optimistic, claiming it is “likely that we will soon have AI approximately as cognitively sophisticated as mice or dogs”.

More here.