by Malcolm Murray
The world does not lend itself well to steady states. Rather, there is a constant balancing act between opposing forces. We now see this playing out forcefully in AI.
To take a step back, this balancing act is present whether we look at the micro or the macro level. On the micro level, in our personal lives, we have seen how almost everything is good in moderation, but an excess of anything can be fatal. When it comes to countries’ economic systems, history has provided enough examples for us to be fairly certain that capitalism leads to prosperity and communism leads to stagnation. However, we have also seen how unconstrained capitalism leads to a race to the lowest common denominator and can decrease quality of life on non-GDP measures for the majority of people. When it comes to political systems, we can admire China in the short term, in awe of how autocracies get more done than democracies, but we have also seen how unchecked power inevitably leads to the degradation of human rights for the majority of people.
It is the same at the meso level, with new technologies. The two big technology trends of the 2010s followed this pattern. Social media started out as a force for freedom, overturning dictators and connecting old friends, but left unchecked and unregulated, it deteriorated into a click-baiting attention maximizer, driving children to suicide. The gig economy started as an environmentalist utopia, built on the “collaborative consumption” of idle resources, but quickly deteriorated into the creation of a new proletariat living on below-subsistence earnings.
So it is no surprise that we now face the same challenge with AI. As a society, we need to balance the opportunities with the risks. We need some AI regulation, but not so much that it stifles innovation. We know that AI carries the promise of tremendous upside as well as the threat of serious downside. On the upside, we have seen recent examples such as Google DeepMind’s release of GenCast, a new AI model for weather prediction. It could forecast extreme weather events better than existing models, saving lives in the process. On the flip side, we keep seeing examples of AI used indiscriminately leading to severely negative outcomes. The recent shooting of the UnitedHealthcare CEO seems to have been partly motivated by AI-driven processes, and Amazon recently reported a huge uptick in cyber threats due to AI.
The tricky part is that the needle is always moving. At any given point in time, we may be either under-controlling or over-controlling a given risk. This means either letting AI developers and deployers run rampant with the risks, or drowning them in red tape; either allowing too many adverse outcomes in society or, conversely, depriving society of large potential benefits.
At this point in time, my best guess is that we are still under-controlling the risks from AI. For a technology with such potentially transformative power and such wide-ranging risks, there is very little regulation yet, and AI developers can still largely do as they please. Although neither the risks nor the benefits are widely on display in society yet, we arguably have a better sense of the risks than the benefits at the moment. The CEOs of the AI developers tout potential benefits in their manifestos. In a recent blog post, for example, Dario Amodei, CEO of Anthropic, painted a rosy picture of a future AI-enabled world, but the benefits are often presented quite hazily (“AI will cure cancer”). For the risks, as mentioned, we are already seeing harms at a small scale, and we are starting to develop quite precise threat models for how AI could inflict large-scale harm. It is notable that at a recent conference, I was asked why I was advocating for more quantitative measurement in AI risk management. The questioner argued that since we don’t know the benefits, we cannot start to discuss the risks. My reply was, obviously, that if we don’t know the benefits and we do know the risks, then why on earth are we in gung-ho deployment mode?
There is of course the danger that we will over-adjust when regulations do start kicking in, and AI will then become over-controlled. Nick Bostrom, one of the first to bring the risks of advanced AI to the attention of policymakers with his 2014 book Superintelligence, has started warning of the risks of the pendulum swinging too far in the other direction.
However, so far, the signs are promising. SB1047, for example, the California AI bill that was eventually vetoed by Governor Newsom, was the subject of heated debate and much hyperbole. In reality, it was a quite reasonable piece of legislation. It would have imposed a regulatory burden only on the developers or fine-tuners of very large models (10^26 FLOPS or >$100 million), although its language did contain some unfortunate vagueness. Relative to the large training costs these developers already incur, the cost of regulatory compliance is trivial.
Similarly, the process in the EU also seems to be proceeding in quite a balanced fashion. While there were many complaints that the EU AI Act itself was too restrictive, the recent release of the first draft of the Code of Practice that will guide companies’ compliance with the Act was quite positive. It is being drafted by a wide range of very experienced external experts, with input from a large number of stakeholders. As one of those stakeholders, I was pleasantly surprised by the quality of the first draft. While still high-level, it reflects the severity of some of the potential risks from the most advanced models, while remaining very conscious of not inflicting a regulatory burden on companies developing less advanced models that pose less risk to society.
However, given how poorly we can currently specify the exact benefits and risks of AI, the truth is that we simply don’t know whether we are over- or under-controlling the risk. In an ideal world, we would be able to measure both the risks and the benefits, and know when we are straying too far in the direction of either over-controlling or under-controlling the risk. We are, of course, still far from that. The key imperative for the near term is therefore to better understand the risks and benefits of AI and to start on the journey of making them more measurable, more quantifiable and more comparable with each other.