Stuart Russell, Anthony Aguirre, Ariel Conn, and Max Tegmark in IEEE Spectrum:
Paul Scharre’s recent article “Why You Shouldn’t Fear ‘Slaughterbots’” dismisses a video produced by the Future of Life Institute, with which we are affiliated, as a “piece of propaganda.” Scharre is an expert in military affairs and an important contributor to discussions on autonomous weapons. In this case, however, we respectfully disagree with his opinions.
We have been working on the autonomous weapons issue for several years. We have presented at the United Nations in Geneva and at the World Economic Forum, and we have written an open letter signed by over 3,700 AI and robotics researchers and over 20,000 others and covered in over 2,000 media articles. One of us (Russell) drafted a letter from 40 of the world’s leading AI researchers to President Obama and led a delegation to the White House in 2016 to discuss the issue with officials from the Departments of State and Defense and members of the National Security Council. We have presented to multiple branches of the armed forces in the United States and to the intelligence community, and we have debated the issue in numerous panels and academic fora all over the world.
Our primary message has been consistent: Because they do not require individual human supervision, autonomous weapons are potentially scalable weapons of mass destruction (WMDs); essentially unlimited numbers can be launched by a small number of people. This is an inescapable logical consequence of autonomy. As a result, we expect that autonomous weapons will reduce human security at the individual, local, national, and international levels.
More here.