Huw Price in the New York Times:
In Copenhagen the summer before last, I shared a taxi with a man who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer. No surprise if he’d been the driver, perhaps (never tell a taxi driver that you’re a philosopher!), but this was a man who has spent his career with computers.
Indeed, he’s so talented in that field that he is one of the team who made this century so, well, 21st – who got us talking to one another on video screens, the way we knew we’d be doing in the 21st century, back when I was a boy, half a century ago. For this was Jaan Tallinn, one of the team who gave us Skype. (Since then, taking him to dinner in Trinity College here in Cambridge, I’ve had colleagues queuing up to shake his hand, thanking him for keeping them in touch with distant grandchildren.)
I knew of the suggestion that A.I. might be dangerous, of course. I had heard of the “singularity,” or “intelligence explosion” – roughly, the idea, originally due to the statistician I. J. Good (a Cambridge-trained former colleague of Alan Turing’s), that once machine intelligence reaches a certain point, it could take over its own process of improvement, perhaps exponentially, so that we humans would soon be left far behind. But I’d never met anyone who regarded it as such a pressing cause for concern – let alone anyone with their feet so firmly on the ground in the software business.
I was intrigued, and also impressed, by Tallinn’s commitment to doing something about it.
More here.