by Malcolm Murray
There is a strawman factory smack in the middle of the AI risk debate. Because AI risk is so complex, we are seeing a lot of weak arguments that focus on only one aspect of AI risk or AI risk management. AI risk has become a bit of a “strawman factory”: its complexity makes it easy to zero in on a single aspect and knock that down, while neglecting the rest.
The debate over the California AI bill SB-1047 showed how easily these strawmen take over. Andreessen Horowitz was particularly effective at churning out strawmen, such as the idea that because the benefits of AI are so great (correct), we have a moral reason to ignore the risks (incorrect).
To fend off all these strawmen walking around, we can make use of three underappreciated aspects of AI risk and risk management – “the Spectrum”, “the System” and “the Stack”. Let me explain what I mean by each.
The “Spectrum”
Given the complexity of AI risk, it is easy to zero in on one aspect and point out that an AI risk assessment technique would not work there. But this neglects the wide spectrum of AI risks.
The risks to society from AI fall along a wide spectrum, and people tend to underestimate how wide it is. The spectrum can be visualized in terms of levels of velocity and uncertainty, i.e. how long a risk will take to materialize and how uncertain we are about its effects. At one end, we have risks that are already here and about which we have high certainty, such as bias and discrimination. We know that since AI has been trained on the sum total of human knowledge, and human knowledge is inherently biased, AI models by default have biased data in their training sets.
At the other end, we have risks such as loss of control.