Palpable Knowledge Of Things: A Meditation

by Mark R. DeLong

Human beings thought with their hands. It was their hands that were the answer of curiosity, that felt and pinched and turned and lifted and hefted. There were animals that had brains of respectable size, but they had no hands and that made all the difference. (Isaac Asimov, Foundation’s Edge)

Eugene Russell, a piano tuner interviewed by Studs Terkel in Working, said with satisfaction that the computer wouldn't be replacing him anytime soon, even though he mentioned electronic devices—"an assist," he said—that help tuners. Eugene's wife Natalie felt otherwise, saying at one point in their conversation, "It's an electronic thing now. Anyone in the world can tune a piano with it. You can actually have a tin ear like a night club boss."

Eugene mixed elements of beauty and delight with the technical complexity of piano tuning, recalling how he would “hear great big fat augmented chords that you don’t hear in music today” and that he would come home and say, “I just heard a diminished chord today!” Once he was tuning a piano in a hotel ballroom during “a symposium of computer manufacturers. One of these men came up and tapped me on the shoulder. ‘Someday we’re going to get your job.’ I laughed. By the time you isolate an infinite number of harmonics, you’re going to use up a couple billion dollars worth of equipment to get down to the basic fundamental that I work with my ear.”

The piano tuner feels and practices the tune, which is hardly reducible to formulae, perhaps because it is one of those things in life that's approximated, but not unambiguously achieved. At best, tuning a piano is a compromise: "The nature of equal temperament makes it impossible to really put a piano in tune," Eugene explained. "The system is out of tune with itself. But it's so close to in tune that it's compatible."



Monday, August 31, 2015

Fearing Artificial Intelligence

by Ali Minai

Artificial Intelligence is on everyone's mind. The message from a whole panel of luminaries – Stephen Hawking, Elon Musk, Bill Gates, Apple co-founder Steve Wozniak, Lord Martin Rees, Astronomer Royal of Britain and former President of the Royal Society, and many others – is clear: Be afraid! Be very afraid! To a public already immersed in the culture of Star Wars, Terminator, the Matrix and the Marvel universe, this message might sound less like an expression of possible scientific concern and more like a warning of looming apocalypse. It plays into every stereotype of the mad scientist, the evil corporation, the surveillance state, drone armies, robot overlords and world-controlling computers à la Skynet. Who knows what "they" have been cooking up in their labs? Asimov's three laws of robotics are being discussed in the august pages of Nature, which has also recently published a multi-piece report on machine intelligence. In the same issue, four eminent experts discuss the ethics of AI. Some of this is clearly being driven by reports such as the latest one from Google's DeepMind, claiming that their DQN system has achieved "human-level intelligence", or that a chatbot called Eugene had "passed the Turing Test". Another legitimate source of anxiety is the imminent possibility of lethal autonomous weapon systems (LAWS) that will make life-and-death decisions without human intervention. This has led recently to the circulation of an open letter expressing concern about such weapons, signed by hundreds of scientists, engineers and innovators, including Musk, Hawking and Gates. Why is this happening now? What are the factors driving this rather sudden outbreak of anxiety?

Looking at the critics' own pronouncements, there seem to be two distinct levels of concern. The first arises from rapid recent progress in the automation of intelligent tasks, including many involving life-or-death decisions. This issue can be divided further into two sub-problems: the socioeconomic concern that computers will take away the jobs that humans do, including those that require intelligence; and the moral dilemma posed by intelligent machines making life-or-death decisions without human involvement or accountability. These are concerns that must be faced in the relatively near term – over the next decade or two.

The second level of concern that features prominently in the pronouncements of Hawking, Musk, Wozniak, Rees and others is the existential risk that truly intelligent machines will take over the world and destroy or enslave humanity. This threat, for all its dark fascination, is still a distant one, though perhaps not as distant as we might like.

In this article, I will consider these two cases separately.
