While a sci-fi-style AI apocalypse is not impossible, more immediate risks to both security and democracy must be addressed

From Project Syndicate:

We all know the trope: a machine grows so intelligent that its apparent consciousness becomes indistinguishable from our own, and then it surpasses us – and possibly even turns against us. As investment pours into efforts to make such technology – so-called artificial general intelligence (AGI) – a reality, how scared of such scenarios should we be?

According to MIT’s Daron Acemoglu, the focus on “catastrophic risks due to AGI” is excessive and misguided, because it “(unhelpfully) anthropomorphizes AI” and “leads us to focus on the wrong targets.” A more productive discussion would focus on the factors that will determine whether AI is used for good or bad: “who controls [the technology], what their objectives are, and what kind of regulations they are subjected to.”

Joseph S. Nye, Jr. argues that, whatever might happen with AGI in the future, the “growing risks from today’s narrow AI,” such as autonomous weapons and new forms of biological warfare, “already demand greater attention.” China, he points out, is already betting big on an “AI arms race,” seeking to benefit from “structural advantages” such as the relative lack of “legal or privacy limits on access to data” for training models.

More here.