Alex Amadori, Gabriel Alfour, Andrea Miotti, and Eva Behrens at AI Scenarios:
We model national strategies and geopolitical outcomes under differing assumptions about AI development. We put particular focus on scenarios with rapid progress that enables highly automated AI R&D and provides substantial military capabilities.
Under non-cooperative assumptions—concretely, if international coordination mechanisms capable of preventing the development of dangerous AI capabilities are not established—superpowers are likely to engage in a race for AI systems offering an overwhelming strategic advantage over all other actors.
If such systems prove feasible, this dynamic leads to one of three outcomes:
- One superpower achieves unchallengeable global dominance;
- Trailing superpowers facing imminent defeat launch a preventive or preemptive attack, sparking conflict among major powers;
- Loss of control over powerful AI systems leads to catastrophic outcomes such as human extinction.
Middle powers, lacking the muscle both to compete in an AI race and to deter AI development through unilateral pressure, find their security entirely dependent on factors outside their control: a superpower must prevail in the race without triggering devastating conflict, successfully navigate loss-of-control risks, and subsequently respect the middle power’s sovereignty despite possessing overwhelming power to do otherwise.
More here.
