by Ali Minai
Artificial intelligence – AI – is hot right now, and its hottest part may be fear of the risks it poses. Discussion of these risks has grown exponentially in recent months, much of it centered on the threat of existential risk, i.e., the risk that AI will, in the foreseeable future, supersede humanity, leading to the extinction or enslavement of humans. This apocalyptic, science fiction-like notion has had a committed constituency for a long time – epitomized in the work of researchers like Eliezer Yudkowsky, Nick Bostrom, Steve Omohundro, Max Tegmark, Stuart Russell, and several others. Yudkowsky, in particular, has been a vocal proselytizer for the issue of existential AI risk. This might have remained a niche issue, but the emergence of ChatGPT and other extremely large AI models in late 2022 has made it both more mainstream and more urgent. A major factor in this is that some of the most important pioneers in the field, such as Geoff Hinton and Yoshua Bengio, have expressed great alarm. Hinton, whose pioneering work on neural network learning is at the core of today’s big AI systems, is quoted as saying: “My intuition is: we’re toast. This is the actual end of history.” Understandably, such statements have elicited skepticism from many others, such as Yann LeCun, who see AI as promising great benefits to humanity. The problem is that both groups are likely right, and we have no way of knowing which is more correct. Though various people have thrown probabilities around, there is no way to credibly estimate the probability of an event that has never happened.
Most of the debate outlined above is focused on risks posed by artificial general intelligence (AGI), which refers – approximately – to the kind of versatile, flexible, and autonomous intelligence seen in humans. The argument of those raising the alarm is that such intelligence, if it were to be achieved, would necessarily entail capabilities in the machine that would make it very dangerous to humans. This is an interesting and vast topic with philosophical, psychological, and engineering dimensions. It will be treated separately in the second part of this two-part series of articles. The present article, i.e., Part I, will attempt to lay out a principled framework for characterizing the large range of risks posed by powerful AI, and to briefly discuss those that stem from sources other than the very nature of AI.




The narrator of Alberto Moravia’s 1960 novel Boredom is constantly defining what it means to be bored. At one point, he says “Boredom is the lack of a relationship with external things” (16). He illustrates this by explaining how boredom led to his surviving the Italian Civil War at the end of World War II. When he is called to return to his army position after the Armistice of Cassibile, he does not report for duty, as he is bored: “It was boredom, and boredom alone—that is, the impossibility of establishing contact of any kind between myself and the proclamation, between myself and my uniform, between myself and the Fascists…which saved me” (16).
The only light in the second-class train compartment came from the moonlight, which filtered through the rusty iron grille of the window. The sun had set hours earlier, a fiery red ball swallowed whole by the famished Rajasthani countryside. I sat at the window on the bottom berth of my compartment of the Sainak Express, headed from Jaipur to Delhi.








Sughra Raza. Yarn Art on The Mass Ave Bridge, July 2014.
Daniel Goleman’s 
