Kelsey Piper in Asterisk:
It’d be a mistake to characterize the risk of human extinction from artificial intelligence as a “fringe” concern now hitting the mainstream, or a decoy to distract from current harms caused by AI systems. Alan Turing, one of the fathers of modern computing, famously wrote in 1951 that “once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.” His colleague I. J. Good agreed; more recently, so did Stephen Hawking. When today’s luminaries warn of “extinction risk” from artificial intelligence, they are in good company, restating a worry that has been around for as long as computers have existed. These concerns predate the founding of any of the current labs building frontier AI, and their historical trajectory is important for making sense of our present-day situation. To the extent that frontier labs do focus on safety, it is in large part due to advocacy by researchers who hold no financial stake in AI. Indeed, some of them would prefer AI didn’t exist at all.
But while the risk of human extinction from powerful AI systems is a long-standing concern rather than a fringe one, the field devoted to figuring out how to solve that problem was, until very recently, a fringe field, and that fact is profoundly important to understanding the landscape of AI safety work today.
More here.