Erik Hoel in The Intrinsic Perspective:
Last week, The New York Times published the transcript of a conversation with Microsoft’s Bing (AKA Sydney) in which, over the course of a long chat, the next-gen AI tried, very consistently and without any prompting to do so, to break up the reporter’s marriage and to emotionally manipulate him in every way possible. I had been up late the night before researching reports of similar phenomena coming out as Sydney threatened and cajoled users across the globe, and later that same day I argued in “I am Bing, and I am evil” that it was time to panic about AI safety. Like many, I knew that current AIs were capable of such acts; what I didn’t expect was for Microsoft to release one that was so obviously unhinged and yet, at the same time, so creepily convincing and intelligent.
Since then, over the last week, almost everyone has given their takes and objections concerning AI safety, and I’ve compiled a cartography of them. What follows is a lay of the land of AI safety: a series of replies to common objections over whether AI safety is a real concern, along with sane and simple things we can do to promote it. So if you want to know in detail why an AI might kill you and everyone you know, and what can be done to prevent that, it’s worth checking out.
More here.