AI Risk in the 2nd Trump Era

by Malcolm Murray

What does the election of Trump mean for risks to society from advanced AI? Given the wide spectrum of risks from advanced AI, the answer will depend very much on which AI risks one is most concerned about.

The AI risk spectrum runs from near-term, high-certainty risks such as bias and discrimination to longer-term, more speculative risks such as loss of control over agentic AIs. In between those endpoints lies a range of risks where we believe advanced AI will have an impact, but it is hard to know how much and how soon. This includes everything from AI enabling terrorists to create more deadly weapons, to more persuasive AI-enabled disinformation, to AI-driven disruptions of the labor market. The impact of Trump v2 will likely vary greatly between the different points on this spectrum.

Foreseeing the impact of Trump on AI is hard. Four years is an eternity in AI land; AI looks nothing like what it did four years ago. The first Trump administration had inherently high levels of uncertainty, and many different factors will influence the new administration over the coming years. But there seem to be some fundamental elements that will likely significantly shape the Trump administration's actions on AI.

First, the hawkish China stance. A Trump administration seems highly likely to be very focused, and very hawkish, on China. It will likely see AI as an important weapon in that competition. There will likely be increased funding for military use of AI, and there is talk of Trump initiating a “Manhattan Project for AI” with a new Executive Order. It will also likely mean continuing export controls on chips and doubling down on building data centers domestically in the US. Interestingly, there might be some awareness of the capabilities and dangers of very advanced AI – Ivanka tweeted about Leopold Aschenbrenner’s AGI report, and Trump himself talked about “super-duper AI” and its power in an interview.

Second, the influence of Silicon Valley libertarianism. Through Vice President Vance, Silicon Valley libertarians such as Marc Andreessen and Peter Thiel seem likely to influence Trump with their deregulatory views. Vance is also against “Big Tech” and will likely try to rein in the power of the large tech firms. In addition, we have Musk as a wild card, who seems, at least for now, to have a very large influence on Trump. Overall, this should mean a sharp turn toward less regulation. Trump has already said he will repeal Biden’s Executive Order on AI from last year, so the chances of any federal regulations on AI drop drastically. There may be some regulation against AI use for specific purposes such as terrorism, but we should not expect anything broad in scope. In terms of very advanced AI, Musk is an advocate for AI safety who did support SB1047 in the end, so his views on AI safety could potentially have some impact on Trump. Michael Kratsios, who is rumored to handle tech policy for Trump, runs an AI company and is also knowledgeable about advanced AI.

Third, the zero-sum thinking in global engagement. As during the first Trump administration, the US will likely turn inwards, with more antagonistic relationships with other countries. This will mean less engagement with other countries, whether for sharing AI benefits or for harmonizing approaches to managing AI risks. This is problematic for global risks such as those from AI, which easily transcend national boundaries and require global solutions.

Fourth, a unified Republican government. If, as seems likely, the Republicans gain control of the House and thereby hold the presidency and both chambers of Congress, we should see a less deadlocked Congress that can take quicker action. This could be beneficial for managing AI risks, since they can be “emergent” and not foreseen until certain model capability levels are reached.

What do these factors mean for the various points along the AI risk spectrum? On the near-term side, with risks of bias and discrimination, it can probably be said unequivocally that risk will increase. Rallying against the “woke virus” was a winning strategy for the Republicans, so they likely feel they have a mandate to dismantle efforts here, starting with repealing Biden’s Executive Order.

On the opposite side, the advanced AI end of the spectrum, a Trump administration might on balance be positive. A Trump administration will likely not mean slowing down AI development. If trend lines continue as they have (although lately Gary Marcus seems to have some increased evidence that they may not), we will likely soon have very advanced AI. Ideally, humanity should have more time to figure out how to manage a technology as transformative as that.

However, if I had to pick one from the potential advanced AI worlds, a US government-led one would be preferable to both a US corporate-led one and a China government-led one. So slowing down development of advanced AI in China to not have autocratically-controlled advanced AI could be a good thing. If Vance’s efforts against Big Tech or Musk’s input on advanced AI result in a lower likelihood of one of the large US corporations developing advanced AI, that is likely also a good thing since that power should not be concentrated in private hands.

Finally, for the risks in the middle of the spectrum, the overall effect is likely one of increased uncertainty. For some of the risks, such as those of terrorists using AI to develop biological or cyber weapons, a Trump administration might have a positive impact: a unified government should be good for taking rapid action if there is a sudden increase in any of the “malicious user” AI risks. For disinformation, a Trump administration is clearly negative given that “alternative facts” are part of its DNA, so the post-truth world is not going anywhere. For labor market disruption, an “America First” strategy might blunt the impact of AI in the US, as investments are made in domestic production. In other countries, AI's impact on the labor market might also be slowed by the Trump administration seeing AI as a weapon and a competitive advantage to be led by US firms.

So, on balance, given the many conflicting factors and the wide range of AI risks, it seems there is no one answer whether Trump is “good or bad” for AI risk. All we can say with certainty is that uncertainty will increase.
