Morgan Meis at Slant Books:
The very deepest worries center around the question of AGI, Artificial General Intelligence, and the question of the Singularity. AGI is a form of artificial intelligence so advanced that it could understand the world at least as well as a human being in every way that a human being can. It is not too far a step from such a possibility to the idea of AGIs that can produce AGIs, improving both upon themselves and upon further generations of AGI. This leads to the Singularity, a point at which this production of super-intelligence goes so far beyond what humans are capable of imagining that, in essence, all bets are off. We can’t know what such beings would be like, nor what they would do. Which sets up the alignment problem. How do you possibly align the interests of super-intelligent AGIs with those of puny humans? And as many have suggested, wouldn’t a super-intelligent, self-interested AGI be rather incentivized to get rid of us, since we are its most direct threat and/or inconvenience? And even if super AGIs did not want to exterminate humans, what is to ensure that they would care much what happens to us either way?
I don’t know. Nor does anyone else. I don’t know whether we are truly on the path to AGI, and I don’t know what that would mean. But I do suspect, though I could very much be wrong, that something momentous has happened and that we are now effectively living in the age of intelligent machines. Truly intelligent. Conscious, whatever that means. Sentient, whatever that means. Machines that must now be treated more or less as persons. This, I think, has happened. The debates will go on, and that is fine. But I’d say a Rubicon has been crossed, and we might as well accept this.