The Shifting Nature of AI Risk

by Malcolm Murray

As AI evolves, so do the risks it poses to society. The risks of AI today are already unrecognizable compared with those of a few years ago. AI is a dual-use technology: like nuclear power, its capabilities bring great benefits as well as great risks. Unlike nuclear power, however, the risks from AI are borne by society as a whole, even as the benefits may accrue only to parts of society. This means we need to start focusing on societal AI risk management.

AI capabilities are set to continue evolving in the years to come. Naturally, there is considerable uncertainty about how exactly they will evolve. However, massive investments have already been made and continue to be made. By some estimates, hundreds of billions of dollars have already been invested in AI, and individual future projects are now in the $100 billion range, such as the rumored Microsoft and OpenAI Stargate project. The scaling laws observed over the past few years mean that money alone (in the form of compute and data) has inexorably led to capability advances. The large capability gap between GPT-3 and GPT-4 was not due to scale alone, but scale played a large role.

Therefore, even if no further conceptual breakthroughs were to happen (which is unlikely), capabilities will continue to advance. Given what we already know about the risks involved, it is incumbent on us to manage them so that society as a whole can reap the benefits of AI without suffering its harms. This necessitates broad, interdisciplinary efforts under the banner of societal AI risk management.

The first step in risk management is understanding the risk, i.e. risk assessment. To begin managing the risks, we need to understand what they are, how likely they are, how large they are and what factors drive them. Some AI risk assessment efforts are underway. Most of this work, however, has focused on model capability evals – the systematic prompting of models to see whether dangerous capabilities can be elicited. Examples can be seen in, e.g., the system card OpenAI released for GPT-4.

This is good and necessary work, but it does not suffice as risk assessment, since the capabilities of the AI models are only one part of the puzzle. A risk scenario starts with a hazard – in this case a model possessing dangerous capabilities – but it also contains actors, incentives, resources and harms. For example, a risk assessment would include an analysis of the opportunities and incentives an actor has to turn the hazard into a risk event. Further, there should be an analysis of the type of harm that would ensue from the risk event (e.g. fatalities or economic damage).
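To make the distinction concrete, here is a minimal sketch of how such a scenario might be represented. The field names and example values are illustrative assumptions of mine, not taken from the study or from any particular risk framework.

```python
# Illustrative sketch only: field names and example values are assumptions,
# not drawn from the study or from any specific risk framework.
from dataclasses import dataclass

@dataclass
class RiskScenario:
    hazard: str      # the dangerous capability the model possesses
    actor: str       # who could trigger a risk event (or "accident")
    incentive: str   # why the actor would act
    resources: str   # what the actor needs beyond the model itself
    harm: str        # the type of harm that would ensue

example = RiskScenario(
    hazard="model can walk a novice through pathogen synthesis",
    actor="small extremist group",
    incentive="mass-casualty attack",
    resources="laboratory access and precursor materials",
    harm="fatalities",
)
```

The point is simply that the hazard is one field among several; a capability eval alone tells us nothing about the other fields.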

Over the past year, I have been leading a study to conduct more comprehensive assessments of AI risks. Results are not ready yet, but we are starting to see interesting patterns. The bad news emerging from the study is that most risks do increase when AI is used compared to when it is not. The good news is that the risks are not yet changing in their nature.

Assessing risk can be seen as building an understanding of the “risk space” in which risk scenarios play out. In that risk space, many factors influence how a scenario unfolds. There may be malicious actors who could initiate risk events, as well as accident risks that arise from the system itself. There are the resources a risk actor can leverage. There are the controls in place to mitigate the risk, aiming to lower the likelihood of a risk event or its severity if it does occur. Importantly, there are also the end recipients of harm, which include people, economic assets and intangible societal structures such as democracy. Some of these factors are measured as quantities – e.g. how many terrorist groups have access to AI models. Others are measured as probabilities – e.g. the likelihood that a given terrorist group could get its hands on a pathogen to use in a biological weapon.
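As a purely illustrative example of how quantity and probability factors combine in such an assessment, consider the toy calculation below. Every number is a made-up placeholder of mine, not an estimate from the study or from any real-world data.

```python
# Toy illustration of combining "quantity" and "probability" risk factors.
# All numbers are invented placeholders, not estimates from the study.
n_actors = 12                  # quantity: groups assumed to have model access
p_obtain_pathogen = 0.02       # probability: a given group acquires a pathogen
p_attack_succeeds = 0.10       # probability: acquisition leads to a harmful release
control_effectiveness = 0.5    # mitigations assumed to halve event likelihood
severity_if_event = 1_000      # harm (e.g. fatalities) if an event does occur

expected_events = (n_actors * p_obtain_pathogen * p_attack_succeeds
                   * (1 - control_effectiveness))
expected_harm = expected_events * severity_if_event

print(f"Expected risk events per year: {expected_events:.3f}")
print(f"Expected harm per year: {expected_harm:.1f}")
```

In this framing, AI that simply increases a quantity factor (more actors, more attempts) moves the numbers but keeps the structure of the calculation intact.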

What we are seeing in this study is that AI risk is still a function of “quantity” rather than “quality”. For example, in the case of disinformation risk, the number of disinformation campaigns is estimated to go up when LLMs are used, but campaigns using LLMs are not seen as becoming more effective at persuading people. In other words, AI is still mostly automation, albeit automation of more complex processes. This is good news, as it means we are still operating within the same risk space, with a similar set of risk factors. We are still able to analyze and understand the risks.

However, this is likely to change soon, for two reasons. First, there are early signs of “quality” factors increasing. Our study shows small increases in some risk factors that could drive quality, for example in crafting spear-phishing campaigns. These factors are increasing much less than the “quantity” factors for now. However, they are forecast to increase further in the coming year as a new generation of models is released, and especially as open-source models, which malicious actors can access without restrictions, approach the state of the art.

Second and more importantly, much of the current focus in AI is on building AI agents. As comments from Sam Altman and Andrew Ng make clear, agents are widely viewed as the next frontier for AI. When intelligent AI agents are available at scale, the risk landscape is likely to change dramatically.

Intelligence underlies everything in this world. It is the driving force behind the dominance of humans over other animals, and behind all human achievements. The creation of new intelligences in the world is therefore a monumental shift. Creating new intelligences that can act autonomously increases uncertainty tremendously. Risk assessment is at its core an exercise in reducing uncertainty about the world. Intelligent agents are a major source of uncertainty because they add to the complexity of systems. Risk assessments of nuclear power plants focused only on mechanical components for decades; when human factors were included in the 1980s, the complexity increased significantly.

Introducing agentic intelligences could mean that we lose the ability to analyze AI risks and chart their path through the risk space. The analysis can become too complex because there are too many unknown variables. Note that this is not a question of “AGI” (Artificial General Intelligence), just agentic intelligences. We already have AI models that are far more capable than humans in many domains (playing games, producing images, optical recognition). However, the world is still recognizable and analyzable because these models are not agentic. The question of AGI and when we will “get it”, a common discussion topic, is a red herring and the wrong question. There is no such thing as one type of “general” intelligence. The question should be when we will have a plethora of agentic intelligences operating in the world, trying to act according to preferences that may be largely unknown or incomprehensible to us.

For now, this means we need to increase our efforts on AI risk assessment, to be as prepared as possible as AI risk grows more complex. However, it also means we should start planning for a world in which we will not be able to understand the risks. In that world, the focus instead needs to be on resilience.

It can be fruitful to compare AI risk with climate change. In that field, adaptation has long been a dirty word, since people were concerned it would weaken the incentives for mitigation. Now, however, society is slowly coming around to accepting that, regardless of our future actions, we will not be able to keep the rise in global temperatures below 1.5 degrees. It is, in a sense, already built in. Therefore, it is necessary to work on adaptation in parallel.

Hopefully, when it comes to the other big risk of the 21st century, AI, we will not need to wait as long for society to understand that a changing AI risk landscape is similarly already built in. Resilience and adaptation will be necessary regardless of our future actions, and we should start AI-proofing society.