California Gov. Gavin Newsom signed into law legislation that will require artificial intelligence developers to make their safety protocols public, marking a first-of-its-kind effort to regulate AI.
Dubbed the Transparency in Frontier Artificial Intelligence Act – or S.B. 53 – the law requires large AI developers to provide detailed reports on the safety measures taken while creating their models, and requires those companies to report the greatest risks posed by their technology.
The law bolsters whistleblower protections for employees who warn of risks posed by AI technologies, and it also creates a new consortium – CalCompute – to develop a framework for a state-run computing cluster aimed at advancing “safe, ethical, equitable, and sustainable” AI.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance,” said Gov. Newsom in a statement.
“AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves,” he continued.
State Senator Scott Wiener, D-San Francisco, said he proposed the legislation to create regulations to “understand and reduce risk.”
“With this law, California is stepping up, once again, as a global leader on both technology innovation and safety,” said Sen. Wiener. “[Gov. Newsom’s] partnership helped this groundbreaking legislation promote innovation and establish guardrails for trust, fairness, and accountability in the most remarkable new technology in many years.”
The legislation is unique in some of its requirements. Unlike the EU’s AI Act, which focuses on documentation and internal reporting, California’s law requires developers to publicly disclose cyber incidents carried out by AI systems and instances where models display deceptive or misleading behavior.
The measure aims to make the risks of advanced AI more transparent, even when those actions occur without direct human control.
The law also uses a revenue threshold to determine when the regulations kick in: rather than relying on factors such as use cases, it sets that bar at more than $500 million in annual revenue – generally covering only the largest and most advanced AI companies.
California will also not require technologies to undergo third-party testing, focusing instead on disclosures and reporting, backed by civil penalties of up to $1 million per violation for companies that don’t comply.
Gov. Newsom’s administration said that under the law, the California Department of Technology would recommend updates to the law every year “based on multistakeholder input, technological developments, and international standards.”
How states should regulate AI has been a hot topic in Congress, where Democrats and Republicans have pointed to risks in letting large developers continue to innovate without limitation while also noting that a patchwork of state laws may stifle that innovation.
Currently, there are no federal laws regulating the development or deployment of AI. Thirty-eight states have adopted or enacted about 100 AI-related measures this year, according to the National Conference of State Legislatures.