Greg Ross interviews Douglas Hofstadter in American Scientist:
There’s currently a popular idea that technology may be converging on some kind of culmination—some people refer to it as a singularity. It’s not clear what form it might take, but some have suggested an explosion of artificial intelligence. Do you have any thoughts about that?
Oh, yeah, I’ve organized several symposia about it; I’ve written a long article about it; I’ve participated in a couple of events with Ray Kurzweil, Hans Moravec and many of these singularitarians, as they refer to themselves. I have wallowed in this mud a great deal. But if you’re asking for a clear judgment, I have to say the whole picture is very murky.
The reason I have injected myself into that world, unsavory though I find it in many ways, is that I think that it’s a very confusing thing that they’re suggesting. If you read Ray Kurzweil’s books and Hans Moravec’s, what I find is that it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two, because these are smart people; they’re not stupid.
Ray Kurzweil says 2029 is the year that a computer will pass the Turing test [converse well enough to pass as human], and he has a big bet on it for $1,000 with [Lotus Software founder Mitch Kapor], who says it won’t pass. Kurzweil is committed to this viewpoint, but that’s only the beginning. He says that within 10 or 15 years after that, a thousand dollars will buy you computational power equivalent to that of all of humanity. What does it mean to talk about $1,000 when humanity has been superseded and the whole idea of humans is already down the drain?
More here.