California governor vetoes controversial AI safety law, tells everyone to start over
Newsom doesn't want Golden State to lose its golden goose
California Governor Gavin Newsom has vetoed a controversial AI bill, though don't assume it was necessarily a final win for the tech industry.
On Sunday, Newsom (D) returned California Senate Bill 1047 to the legislature unsigned, explaining in an accompanying statement [PDF] that the bill doesn't take the right approach to ensuring or requiring AI safety. That said, the matter isn't concluded: Newsom wants the US state's lawmakers to hand him a better bill.
"Let me be clear - I agree with the [bill's] author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public," Newsom said.
"I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities."
Newsom's criticism of the bill centers on the sort of AI models it regulates - namely, the largest ones out there. Smaller models are exempt from enforcement, which he said is a serious policy gap.
"By focusing only on the most expensive and largest-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology," Newsom said.
"Smaller, specialized models may emerge as equally or even more dangerous than models targeted by SB 1047 … Adaptability is critical as we race to regulate a technology still in its infancy."
Newsom is also concerned that the bill fails to account for where an AI system is deployed, whether it is expected to make critical decisions, or how it handles sensitive data.
"Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it," he said. "I do not believe this is the best approach to protecting the public from real threats posed by the technology."
Thanks, but go back to the drawing board and try again, in other words - a message for the legislators and the lobbyists alike.
The proposed law, which passed the state senate and assembly, was controversial: while it had its supporters, it was opposed by AI makers and federal-level politicians who basically thought it was just a bad bill. The wording of the legislation was amended following feedback from Anthropic, a startup built by former OpenAI staff and others with a focus on the safe use of machine learning, among others, before being handed to the governor to sign – and he refused.
Newsom has previously stated that he was worried about how SB 1047 and other potential large-scale AI regulation bills would affect the continued presence of AI companies in California, a concern he raises again in his veto statement. That might be the case, but Newsom's letter makes it clear he wants both: AI innovation remaining in the Golden State, and a sweeping AI safety law - just not this one.
As he's previously claimed, 32 of the world's 50 leading AI companies are said to be located in the West Coast state.
- California's Governor Newsom is worried AI will be smothered in regulation
- California trims AI safety bill to stop tech heads from freaking out
- Someone had to say it: Scientists propose AI apocalypse kill switches
- California proposes government cloud cluster to sift out nasty AI models
Dean Ball, a research fellow at free-market think-tank the Mercatus Center, told The Register that Newsom's veto was the right move, for the same reasons the governor gave.
"The size thresholds the bill used are already going out of date," Ball said. "[They're] almost certainly below the bill's threshold yet undoubtedly have 'frontier' capabilities."
Some key points about SB 1047
- Developers of models covered by the law must put in place controls at a technical and organizational level to prevent their neural networks from creating or using weapons of mass destruction; causing at least $500 million in damages through cyberattacks; committing crimes that a human would be tried for, including murder; and causing other "critical harms."
- AI houses must also slap a kill switch on covered models that can shut them down immediately, during training as well as inference.
- There must be cybersecurity mechanisms in place to prevent the unauthorized use or misuse of powerful artificial intelligence.
- Developers must submit to auditing, develop and implement safety protocols, and produce reports on their efforts in this area.
- Developers aren't allowed to ban workers from blowing the whistle on non-compliance. And much more.
- Models covered by the law include those requiring $100 million or more to develop and at least 10^26 FLOPS of training compute. Fine-tuned versions and other derivatives may also be covered. For a sense of what that compute threshold means in practice, see the sketch below.
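Here's a minimal back-of-the-envelope sketch of how the 10^26 FLOPS threshold shakes out, assuming the widely used 6 × parameters × training-tokens approximation for transformer training compute - that heuristic, and the model sizes shown, are illustrative assumptions on our part, not anything from the bill itself:

```python
# Rough check of SB 1047's 10^26 FLOP training-compute threshold.
# Uses the common "~6 FLOPs per parameter per training token" rule of
# thumb for transformers - an industry approximation, not the bill's own
# methodology. Model configurations below are hypothetical.

SB1047_FLOP_THRESHOLD = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute: ~6 * parameters * tokens."""
    return 6 * params * tokens

hypothetical_models = {
    "70B params on 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "400B params on 30T tokens": training_flops(400e9, 30e12),  # ~7.2e25
    "1T params on 20T tokens": training_flops(1e12, 20e12),     # ~1.2e26
}

for name, flops in hypothetical_models.items():
    status = "covered" if flops >= SB1047_FLOP_THRESHOLD else "exempt"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")
```

On those assumed numbers, only the trillion-parameter run crosses the line - which is roughly the gap Ball and Newsom are pointing at: plenty of capable models would fall under it and escape the bill entirely.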
California state senator Scott Wiener (D-11th district), the author of the bill, described Newsom's veto in a post on X as a "setback for everyone who believes in oversight of massive corporations."
"This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers," Wiener said. "This veto is a missed opportunity to once again lead on innovative tech regulation … and we are all less safe as a result."
Ball, on the other hand, doesn't seem to see things as so final, opining that California legislators will likely take action on a similar bill in the next session - one that could pass. "This is only chapter one in what will be a long story," Ball said. ®
Bootnote
Newsom also refused to sign a bill requiring new vehicles sold in California to be fitted with a warning system that alerts drivers when they exceed the speed limit by 10 MPH or more.
But he did approve AB 2013, which will require developers of generative AI systems to publish, from January 1, 2026, a "high-level summary" of the datasets used to train such technologies. That should at least shed some light on where those models got their info.