Francis Fukuyama at Persuasion:
As I’ve learned more about what the future of AI might look like, I’ve come to better appreciate the real dangers that this technology poses. There have always been two ways in which AI could be misused. The first is happening now: AI technologies like deepfakes are already widely in circulation. My Instagram feed is full of videos of things I am sure never happened, like catastrophic building collapses or MAGA celebrities explaining how wrong they were, yet it is nearly impossible to verify whether they are real. This kind of manipulation is going to further undermine trust in institutions and exacerbate polarization. There are plenty of other malign uses to which sophisticated AI can be put, like raiding your bank account or launching devastating cyber-attacks on basic infrastructure. Bad actors are everywhere.
The other kind of fear, which I always had trouble understanding, was the “existential” threat AI posed to humanity as a whole. This seemed entirely in the realm of science fiction. I did not understand how human beings would not be able to hit the “off” switch on any machine that was running wild. But having thought about it further, I think that this larger threat is in fact very real, and there is a clear pathway by which something disastrous could happen.
More here.
