by Malcolm Murray
Over the past year, there has been significant movement in AI risk management, with leading providers publishing safety frameworks that function as de facto AI risk management. The problem, however, is that these frameworks are not proper risk management when compared to established practice in other high-risk industries.
In other industries, risk management typically has five components, all interconnected and all important: Risk Identification, Risk Assessment, Risk Evaluation, Risk Treatment, and Risk Governance. These go by different names depending on the framework and are sometimes grouped differently. For example, in the International AI Safety Report, we call them Identification, Analysis/Evaluation, Mitigation, and Governance. In a recently published paper that I worked on, Open Problems in Frontier AI Risk Management, they are instead Risk Planning, Risk Identification, Risk Analysis, Risk Evaluation, and Risk Mitigation.
The exact wording and grouping do not matter; what matters is the activity. I have found that the key activities can be explained using the concept of the map and the territory. First, we want to understand which territory we are in – this is risk identification. Then we create a map of that territory – that is risk assessment. Third, we determine alternative paths through the territory – this corresponds to risk evaluation. We then need to choose a path and pursue it – this is risk treatment. Finally, risk governance can be expressed as stopping once in a while to get a second pair of eyes on the compass, to confirm that we are still heading in the right direction.
However, in what has become the norm in current AI risk management, only a limited subset of these activities is actually carried out. What typically happens is an assessment of capabilities and then, if the capabilities reach a specified threshold, implementation of corresponding predetermined safeguards.
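The threshold-triggered pattern described above can be sketched in a few lines. This is a minimal illustration only: the capability names, scores, thresholds, and safeguard labels are hypothetical assumptions, not taken from any published safety framework.

```python
# Hypothetical sketch of "assess capabilities, then trigger predetermined
# safeguards" as practiced in current frontier safety frameworks.
# All names and numbers below are illustrative assumptions.

# Predetermined capability thresholds and their corresponding safeguards.
THRESHOLDS = {
    "cyber_offense": 0.7,
    "bio_uplift": 0.5,
}
SAFEGUARDS = {
    "cyber_offense": "restrict API access",
    "bio_uplift": "enhanced deployment review",
}

def triggered_safeguards(capability_scores: dict[str, float]) -> list[str]:
    """Return the safeguards whose capability threshold has been reached."""
    return [
        SAFEGUARDS[cap]
        for cap, score in capability_scores.items()
        if score >= THRESHOLDS.get(cap, float("inf"))
    ]

# Note what the pattern omits relative to the five components: no risk
# identification (which capabilities matter?), no risk evaluation (are
# these thresholds actually tolerable?), and no governance loop checking
# whether the thresholds themselves remain the right ones.
```

The sketch makes the gap concrete: the only decision logic is a score-versus-threshold comparison, with everything upstream and downstream of that comparison left out.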
This means there are several missing areas where risk management is not happening. Read more »






A thought has been nagging at me lately. Are most shitty people not very bright?

Kipling Knox: Thanks, Philip. Yes, that’s true—both books share a world with common characters. But that wasn’t my original intent. Between publishing these two, I started two other novels, with different settings. I put them both aside because I found myself drawn back to Middling. The story “Downriver,” in particular, ended so ambiguously that I was curious to know what would happen to its characters, Morgan and Arthur, and how their mystery would play out. It’s a difficult trade-off—sticking with one fictional world versus exploring others. When you write a book, you are deliberately not writing others, and there can be a sense of loss in that. But it’s very gratifying to explore a world you’ve built more deeply. I think of how a drop of ocean water contains millions of microorganisms, each with their own story, in a sense. So the world of Middling County (and also, in my second book, Chicago) has infinite potential for stories!


Sughra Raza. Finding Color. Boston, January, 2026.
