California lawmakers pretend to regulate AI, create a pile of paperwork

LLM makers have to file a steady stream of reports in the name of transparency

A year after vetoing a tougher bill, California Gov Gavin Newsom has signed the nation's first AI transparency law, forcing big model developers to publish frameworks and file incident reports, but critics argue it's more paperwork than protection.

Newsom signed California Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law on Monday. The law largely does what it says on the tin, placing a number of transparency requirements on frontier AI developers. Large AI firms, defined by the bill as those with annual gross revenue in excess of $500 million, including affiliates, must publish and update frontier AI frameworks, include additional disclosures in their transparency reports, report critical safety incidents to the state Office of Emergency Services, and not retaliate against whistleblowers, among other requirements.

Safety incidents, per the law, include unauthorized access or compromise of a model "that results in death or bodily injury," harm resulting from a "catastrophic risk" (defined by the bill as use of an AI model that causes death or harm to more than 50 people or over $1 billion in property damage), "loss of control" of a frontier model, or a model being able to deceive its developers to subvert its own restrictions. 

Newsom signed SB 53, authored by California State Senator Scott Wiener, a year after he vetoed a similar but tougher bill from the same senator, SB 1047. Newsom said he agreed with the aims of Wiener's effort, but was unhappy that the bill limited its oversight to only the largest and most expensive models.

"By focusing only on the most expensive and largest-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology," Newsom said last year. 

Newsom convened a working group after the veto to chart a better way forward on an AI transparency bill. The group released a draft report in March, which Wiener said on Monday played a key part in shaping the new version of his bill.

"I'm grateful to the Governor for his leadership in convening the Joint California AI Policy Working Group, working with us to refine the legislation, and now signing it into law," Wiener said.

Still contentious

Despite the working group-informed changes in the newly signed law, the core of SB 53 and who it applies to remains unchanged: it still targets developers of 'frontier models' trained with more than 10²⁶ operations, with extra obligations for firms pulling in over $500 million a year.

That, says Chamber of Progress policy manager Aden Hizkias, leaves the bill short of the mark when it comes to making meaningful progress on regulating AI.

"This [10²⁶ FLOPS] threshold is an arbitrary proxy for capability and risk," Hizkias wrote in a July description of SB 1047. "It fails to capture the nuances of model behavior, deployment content or intent. Some smaller-scale models could still pose serious real-world risks, while some large-scale models may present very low risk."

The Chamber of Progress' criticism of the bill (note that the group's tech industry partners include Amazon, Google, NVIDIA, and others) goes beyond the model size threshold. Per Hizkias, SB 53's biggest change from SB 1047 is its shift from enforcing a "do no harm" mandate to a "show your work" approach. 

"Penalties and injunctions are now tied to paperwork failures, for example, missed deadlines or misleading statements, rather than actual harm," Hizkias wrote. "A single misfiled report or overzealous disclosure can trigger injunctions, fines, or reputational harm even if the model never causes damage." 

This, the CoP policy manager said, is a massive change from SB 1047, which required AI developers to certify that their new models didn't pose a risk of exacerbating critical harms like mass casualty events, development of deadly weapons, and cyberattacks. Most penalties under SB 1047 would only have kicked in after such incidents; SB 53, she argued, only creates "a compliance minefield without clear standards."

"SB 53 does not impose any meaningful safety duty and instead enforces a burdensome transparency regime requiring exhaustive disclosures and reporting," Hizkias warned.

Anthropic, which isn't a member of the CoP and worked with Wiener to draft the new bill, has a different take.

"Governor Newsom's signature on SB 53 establishes meaningful transparency requirements for frontier AI companies without imposing prescriptive technical mandates," Anthropic cofounder and policy chief Jack Clark told The Register in an email. Clark doesn't want AI regulation to stop there, however. 

"While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation," Clark added. 

Microsoft had no comment on the passage of the bill; none of the other AI companies that meet SB 53's enforcement threshold responded to questions. Wiener's office declined to comment beyond its press release on passage of the bill. 

"California has long been a leader in technology and innovation," Newsom said in a signing statement accompanying the bill. "We are once again demonstrating our leadership, by protecting our residents today while pressing the federal government to act on national standards." 

One thing everyone's happy with: A public AI cloud

The accountability rules might still be contentious, but there's one thing SB 53's critics and proponents seem to agree on: its provision creating a publicly available computing cluster, dubbed "CalCompute," for AI startups and researchers to use as an alternative to spinning up their own hardware.

"Despite the flaws of the amended SB 53, we strongly support CalCompute," Hizkias said. CalCompute was proposed as part of SB 1047 as well. 

"The creation of a public option for computing power through CalCompute will democratize access to critical AI infrastructure," said Teri Olle, director of Economic Security California Action, a sponsor of SB 53. 

CalCompute, per the bill, will be led by a consortium within the state's Government Operations Agency (GOA) and aims to provide resources to the public for development of AI "that is safe, ethical, equitable, and sustainable." 

The provisions of SB 53 take effect on January 1, 2026, but extra time is being provided to stand up CalCompute - the GOA doesn't even have to submit a report on what it'll take to create the cluster until the beginning of 2027. ®
