California Nears AI Law: What's in SB 53?

The California State Senate gave final approval early Saturday morning to a major AI safety bill that imposes new transparency requirements on large AI companies. The bill, SB 53, authored by Senator Scott Wiener, requires large AI labs to be transparent about their safety protocols, protects whistleblowers at those labs, and establishes CalCompute, a public cloud computing pool intended to expand access to computing for startups, researchers, and community groups that lack large-scale resources, thereby fostering broad-based innovation in AI and helping California maintain its global leadership in the field. Source: Senator Scott Wiener, 2024.

The bill now moves to California Governor Gavin Newsom, who must sign or veto it. Newsom has not publicly commented on SB 53, but last year he vetoed a more comprehensive safety bill, SB 1047, also authored by Wiener. Newsom criticized that bill for applying "stringent standards" to large models regardless of whether they were "deployed in high-risk environments, involved critical decision-making, or used sensitive data." SB 1047 would have required developers of frontier AI models to conduct safety assessments, report any potential severe risks, and establish safeguards to prevent misuse. Source: Wikipedia, 2025. Newsom instead signed narrower legislation targeting issues such as deepfakes, emphasizing the importance of "protecting the public from the real threats posed by this technology."

Wiener stated that the new bill, SB 53, was shaped by the recommendations of an AI expert panel Newsom convened after vetoing the previous bill. Politico also reported that SB 53 was recently amended so that companies developing "frontier" AI models with annual revenues below $500 million need only disclose high-level safety details, while companies above that threshold must provide more detailed reports. The transparency requirements cover safety protocols for the development and deployment of AI models, with the aim of strengthening accountability and reducing potential risks. Source: LegiScan, 2025.

The bill was opposed by a number of Silicon Valley companies, venture capital firms, and lobbying groups. In a recent letter to Newsom, OpenAI did not mention SB 53 by name but argued that, to avoid "duplication and inconsistencies," companies should be treated as compliant with state-level safety rules as long as they meet federal or European standards. Andreessen Horowitz's head of AI policy and legal director recently claimed that "many current AI bills in the states – such as proposals in California and New York – risk" overstepping constitutional limits on how states may regulate interstate commerce. A16z's co-founders previously cited technology regulation as one of the factors that drove them to support Donald Trump's bid for a second term. The Trump administration and its allies have since called for a 10-year ban on state-level AI regulation, arguing that federal regulation is the appropriate approach for such an emerging technology. Source: Politico, 2025. Anthropic, in contrast, announced its support for SB 53. Jack Clark, a co-founder of Anthropic, stated in a post: "We've long said we prefer a federal standard. But in its absence, this creates a strong blueprint for AI governance that cannot be ignored."
