by Malcolm Murray
If you follow AI policy in any way, you will likely have heard that the EU AI Act’s Code of Practice (CoP) was released on July 10. This is one of the major developments in AI policy this year. 2025 has otherwise been fairly negative for AI safety and risk: the Paris AI summit in February was all about investment rather than safety, OpenAI now releases models with allegedly only days for testing, and we almost had a 10-year moratorium on any US state AI legislation.
This is why it was a relief and a happy surprise to see that the final CoP (and I am focused here on the Safety and Security chapter, which is my domain) ended up being a really, really good document. There are three main reasons why I am excited about the final CoP: the baseline it sets, its sound risk management basis, and the democratic process by which it was created.
An Unavoidable (Positive) Elephant in the Room
It is too early to tell whether we will see the same kind of “Brussels effect” for the AI Act that we saw for other EU legislation, such as GDPR. However, by producing a very strong CoP, the EU has laid a solid foundation. The existence of the CoP now provides an unavoidable baseline against which all future AI regulation and policy will be compared. It introduces an elephant in the room (in a good way), one that companies and countries can’t avoid referencing whenever AI policy is discussed.
The CoP is technically voluntary, but it seems likely that companies will want to sign it, since it is the most straightforward way to comply with the Act and removes much legal uncertainty. The EU has also signaled that it will provide a grace period for signatories before enforcement starts in August 2026, providing another key benefit.
The news that both Mistral and OpenAI plan to sign is a strong signal in this direction. Mistral, the French AI company, has been vocally against regulation and notably was one of the companies that did not publish a Frontier Safety Framework in February despite having committed to do so. OpenAI initially argued for regulation, but has changed its tune over the past year and has been very vocal in its fight against it.
A Surprisingly Sound Risk Management Basis
The CoP (again, the Safety and Security chapter) is firmly grounded in sound risk management principles. While importantly staying close to existing practice in frontier AI, exemplified by the Frontier Safety Frameworks published earlier this year, the CoP is in practice a classic risk management framework. It contains measures for risk identification, risk analysis, risk acceptance determination and risk mitigation. This brings it closely in line with risk management frameworks such as the one we at SaferAI published last year and key industry frameworks such as NIST’s AI Risk Management Framework. This is a very positive outcome, and one that was not certain given the many stakeholders and contributors.
I was also very happy to see a very solid risk governance section. This is an area where I was a contributing stakeholder to the Code as a member of the working group, and the section ended up containing all the relevant components of risk governance. These include building a strong risk culture, having independent assessments from internal and external audit, and having a Chief Risk Officer and a central risk team that is well-resourced and empowered to properly analyze AI risk.
An Admirably Democratic Process
The third thing worth highlighting is how successful the process of creating the CoP was. It was highly ambitious: 13 independent AI experts created the code over a short period of nine months and three drafts, with input from more than 1,000 stakeholders from industry, academia and civil society. The fact that the end result is not only coherent but both sound and streamlined is testament to the tremendous work of the chairs and vice-chairs. An end result that shows the fingerprints of large AI companies such as Meta as well as contributions from independent experts such as myself is quite a feat.
Remaining Question Marks
Of course, there are remaining question marks. We don’t know yet which parts of the code might open up legal loopholes. One potentially concerning provision is that on “similarly safe” models: an AI provider can circumvent some of the requirements by pointing to models already on the market that are similarly safe. It remains to be seen how this will be applied. AI companies also have a lot of leeway in how rigorously they implement certain elements of the code, such as the setting of risk tiers. And it needs to be stressed again that the code is voluntary; some fears remain that large US companies will not sign it, either withdrawing from the EU market or simply eating the fines. The Trump administration seemed to encourage this earlier in the year.
Even with these question marks, the release of the Code represents a major step forward in AI policy and AI safety, and is a cause for celebration.
