Ezra Klein in The New York Times:
Among the many unique experiences of reporting on A.I. is this: In a young industry flooded with hype and money, person after person tells me that they are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down. What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms. But no one company can slow down to a safe pace without risking irrelevancy. That’s where the government comes in — or so they hope.
A place to start is with the frameworks policymakers have already put forward to govern A.I. The two major proposals, at least in the West, are the “Blueprint for an A.I. Bill of Rights,” which the White House put forward in 2022, and the Artificial Intelligence Act, which the European Commission proposed in 2021. Then, last week, China released its latest regulatory approach.

Let’s start with the European proposal, as it came first. The Artificial Intelligence Act tries to regulate A.I. systems according to how they’re used. It is particularly concerned with high-risk uses, which include everything from overseeing critical infrastructure to grading papers to calculating credit scores to making hiring decisions. High-risk uses, in other words, are any use in which a person’s life or livelihood might depend on a decision made by a machine-learning algorithm.