Codify — Article

California SB 53 requires large frontier AI developers to publish safety frameworks and report incidents

Mandates public frontier-AI safety frameworks, pre-deployment transparency reports, and critical-incident reporting to California emergency authorities for well‑resourced AI model developers.

The Brief

SB 53 (Transparency in Frontier Artificial Intelligence Act) imposes new transparency, reporting, and governance duties on “large frontier developers” — entities that train extremely large foundation models and have at least $500 million in annual revenue. The bill defines “frontier models” by a compute threshold, requires published frontier AI frameworks describing risk-assessment and mitigation processes, and forces pre-deployment transparency reports for new or substantially modified frontier models.

The statute also creates a critical-safety incident reporting channel to the California Office of Emergency Services, mandates quarterly summaries of internal catastrophic-risk assessments for OES, authorizes the Attorney General to enforce civil penalties up to $1 million per violation, and directs state agencies to review definitions and aggregate incident data annually. For compliance officers and risk teams, the bill turns several technical definitions into legal triggers and sets discrete timelines and retention obligations that will shape procurement, redaction practice, and security controls.

At a Glance

What It Does

SB 53 requires large frontier developers to publish and maintain a documented "frontier AI framework," produce pre-deployment transparency reports for new or substantially modified frontier models, and report critical safety incidents and certain internal risk assessments to the Office of Emergency Services.

Who It Affects

The obligations apply to frontier developers that, together with their affiliates, had more than $500 million in annual gross revenue; the statute targets developers of foundation models trained above a high compute threshold, along with their supply chains, evaluators, and third‑party assessors.

Why It Matters

The bill converts technical thresholds (compute, model weights, and capability tests) into legal coverage tests, creates a state-level incident reporting regime with confidentiality protections, and establishes civil penalties — shifting operational and disclosure choices for leading AI firms and their vendors.


What This Bill Actually Does

SB 53 creates a package of definitional rules, disclosure obligations, and reporting channels focused on the riskiest class of AI systems the text calls "frontier models." The bill starts by defining key terms: foundation models (broad, general-purpose systems), frontier models (foundation models trained above a specified compute threshold), frontier developers (the parties that trained them), and large frontier developers (those with affiliates whose combined revenue exceeds $500 million). Those definitions determine who must comply.
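The coverage tests described above can be illustrated with a short sketch. The constant and function names below are illustrative, not statutory terms, and the statute's definitions, not this code, control:

```python
# Hypothetical sketch of SB 53's coverage triggers; names are illustrative.
COMPUTE_THRESHOLD_OPS = 10**26        # training compute, including subsequent fine-tuning
REVENUE_THRESHOLD_USD = 500_000_000   # developer plus affiliates, preceding calendar year


def is_frontier_model(training_ops: float) -> bool:
    """Foundation model trained above the statutory compute threshold."""
    return training_ops > COMPUTE_THRESHOLD_OPS


def is_large_frontier_developer(trains_frontier_model: bool,
                                combined_annual_revenue_usd: float) -> bool:
    """Frontier developer whose revenue, with affiliates, exceeded $500M."""
    return trains_frontier_model and combined_annual_revenue_usd > REVENUE_THRESHOLD_USD


# Example: a 3e26-op training run by a developer with $1.2B combined revenue
covered = is_large_frontier_developer(is_frontier_model(3e26), 1_200_000_000)
```

The two triggers are conjunctive: a high-revenue firm that never trains above the compute threshold, or a small lab that does, would be a frontier developer at most, not a "large" one.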

Once an entity meets the large frontier developer test, the bill requires it to write, publish, and implement a "frontier AI framework." That framework must explain the developer’s criteria and thresholds for detecting catastrophic capabilities, the mitigation steps and governance processes used before deployment or broad internal use, cybersecurity practices to protect unreleased model weights, and procedures for identifying and responding to "critical safety incidents." The developer must review the framework at least annually and publish any material modification with a justification within 30 days.

SB 53 also requires transparency reports posted on the developer’s website before or at deployment of any new frontier model or substantially modified model. Those reports must include basic metadata — release date, supported languages and modalities, intended uses and restrictions — and, for large frontier developers, summaries of catastrophic-risk assessments, results, and the role of third‑party evaluators.

The statute allows redactions to protect trade secrets, cybersecurity, public safety, or national security, but requires a public description of the nature and justification of the redactions and retention of the unredacted material for five years.

On the incident side, the Office of Emergency Services (OES) gets a confidential reporting mechanism for "critical safety incidents" and a separate confidential channel for quarterly (or otherwise reasonably scheduled) transmission of internal catastrophic-risk assessment summaries. Frontier developers generally must report critical safety incidents to OES within 15 days of discovery, with a 24‑hour obligation to notify an appropriate authority when the incident poses an imminent risk of death or serious physical injury.
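The two reporting clocks described above reduce to simple date arithmetic. The helper below is a hypothetical sketch, assuming the 15-day period runs from discovery as the text describes; it is not legal advice:

```python
from datetime import datetime, timedelta


# Hypothetical sketch of SB 53's two incident-reporting clocks.
def reporting_deadline(discovered_at: datetime, imminent_physical_risk: bool) -> datetime:
    """15 days from discovery to report to OES; 24 hours to notify an
    appropriate authority if there is imminent risk of death or serious injury."""
    if imminent_physical_risk:
        return discovered_at + timedelta(hours=24)
    return discovered_at + timedelta(days=15)


discovered = datetime(2026, 3, 1, 9, 0)
ordinary_deadline = reporting_deadline(discovered, imminent_physical_risk=False)
urgent_deadline = reporting_deadline(discovered, imminent_physical_risk=True)
```

Note the two obligations run to different recipients: the 15-day report goes to OES, while the 24-hour notice goes to an appropriate authority.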

OES must keep submitted information exempt from the California Public Records Act, produce annual aggregated, anonymized reports starting in 2027, and may designate federal standards as equivalent for compliance purposes. The Attorney General enforces the statute through civil actions, with penalties capped at $1 million per violation.

The Five Things You Need to Know

1

The bill sets the frontier-model compute threshold at greater than 10^26 integer or floating‑point operations for training (including subsequent fine‑tuning).

2

A "large frontier developer" is a frontier developer whose affiliates’ combined annual gross revenues exceeded $500,000,000 in the preceding calendar year.

3

Frontier developers must report any critical safety incident to the Office of Emergency Services within 15 days of discovery. If the incident presents an imminent risk of death or serious physical injury, they must notify an appropriate authority within 24 hours.

4

Developers may redact disclosures to protect trade secrets, cybersecurity, public safety, or national security, but must describe the character and justification for redactions and retain the unredacted material for five years.

5

The Attorney General may sue for civil penalties up to $1,000,000 per violation for failures to publish required documents, making materially false statements about catastrophic risk or compliance, failing to report incidents, or failing to follow the developer’s own frontier AI framework.

Section-by-Section Breakdown


22757.11

Definitions and coverage triggers

This section contains the operative definitions that determine who and what the statute covers: "frontier model" (tied to a compute threshold), "foundation model," "frontier developer," "large frontier developer" (the $500M affiliate revenue test), "catastrophic risk," "critical safety incident," and other terms. Practically, these definitions translate technical variables—compute spent on training, whether a model is broadly adaptable, and revenue thresholds—into legal criteria that compliance teams must measure and document before training or deployment decisions.

22757.12(a)-(b)

Frontier AI framework: required content and review cadence

Large frontier developers must publish a frontier AI framework that documents how they map standards and best practices into risk thresholds, assessment procedures, mitigation strategies, third‑party evaluation use, cybersecurity protections for unreleased model weights, incident response, and internal governance. The framework must be reviewed at least annually; material changes trigger public publication of the revised framework plus a justification within 30 days. For operational teams this creates a recurring compliance calendar and a need to codify technical thresholds and governance roles into a public-facing document.

22757.12(c)-(f)

Pre-deployment transparency reports and redaction rules

Before or at deployment of new or substantially modified frontier models, developers must publish a transparency report with basic model metadata and, for large frontier developers, summaries of catastrophic-risk assessments, results, and third‑party evaluator involvement. The bill permits redactions for trade secrets, cybersecurity, public safety, or national security but requires a description of the character and justification for any redaction and retention of unredacted records for five years. That combination forces teams to design disclosure templates that balance public usefulness with defensible redaction rubrics and recordkeeping.
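One way to make that recordkeeping concrete is a disclosure record that pairs each redaction with its statutory basis, its published justification, and a five-year retention date. This is a hypothetical schema, not a statutory form:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Permitted redaction bases as described in the bill.
REDACTION_BASES = {"trade secret", "cybersecurity", "public safety", "national security"}


@dataclass
class RedactionRecord:
    """Hypothetical record tying a redaction to its justification and retention window."""
    section: str                # which part of the transparency report was redacted
    basis: str                  # one of REDACTION_BASES
    public_justification: str   # description published alongside the report
    redacted_on: date
    retain_unredacted_until: date = field(init=False)

    def __post_init__(self):
        if self.basis not in REDACTION_BASES:
            raise ValueError(f"not a permitted redaction basis: {self.basis}")
        # Unredacted material must be retained for five years.
        self.retain_unredacted_until = self.redacted_on + timedelta(days=5 * 365)
```

Rejecting any basis outside the statutory list at record-creation time is one way to make a redaction rubric "defensible": every redaction in the retained file traces to a permitted ground.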

22757.13

Critical safety incident reporting to OES and confidentiality

OES must provide mechanisms for public and developer reporting of critical safety incidents and for confidential submission of internal catastrophic-risk assessment summaries. Frontier developers must report critical safety incidents to OES within 15 days, or notify an appropriate authority within 24 hours if imminent death or serious injury is possible. Incident reports and covered-employee reports are exempt from the California Public Records Act, and OES must produce annual anonymized reports beginning January 1, 2027. The provision creates a specially protected, intelligence‑style channel for safety communications, with statutory nondisclosure and internal access-control expectations for OES.

22757.14

Annual technical review and cross‑agency reporting

The Department of Technology must annually reassess whether statutory definitions (frontier model, frontier developer, large frontier developer) match technological developments and international or federal thresholds and submit recommendations to the Legislature. The Attorney General must also annually aggregate anonymized covered‑employee reports. Together these provisions institutionalize an iterative definitional review process and a transparency loop between regulators and the Legislature intended to keep coverage rules aligned with technological change.

22757.15-16

Enforcement, penalties, and scope limits

The Attorney General is the sole enforcement actor for civil penalties; violations (failure to publish or transmit required documents, false statements, failure to report, or noncompliance with one's own framework) can yield fines up to $1,000,000 per violation. The statute explicitly excludes loss of equity value from the definition of "damage to or loss of property" when calculating catastrophic-risk thresholds. Enforcement rests with the AG’s civil action authority rather than an administrative fines regime, which shapes how investigations and settlements are likely to proceed.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • State and local emergency responders — receive confidential, structured incident reports and aggregated trends that improve situational awareness and planning for AI‑linked emergencies.
  • Policy makers and regulators — receive annual, anonymized data and Department of Technology recommendations that make it easier to calibrate future statutory or regulatory updates to definitions and thresholds.
  • Third‑party evaluators and academic auditors — gain formal recognition in statutory disclosures and may see expanded demand for independent assessments and certification services.
  • Covered employees and safety‑minded insiders — benefit from statutory confidentiality protections around reports and from the Attorney General’s annual anonymized reporting, which raises the visibility of internal safety concerns.

Who Bears the Cost

  • Large frontier developers and their compliance/legal teams — must create, publish, and annually update a frontier AI framework, prepare pre‑deployment transparency reports, implement cybersecurity and recordkeeping controls, and retain unredacted records for five years.
  • Office of Emergency Services and Department of Technology — must build confidential intake channels, protect sensitive reports, produce annual aggregated reports, and run cross‑agency coordination without a dedicated funding stream in the text.
  • Affiliates, cloud providers, and contractors — may face new contract and audit demands as developers seek evidence to justify redactions, demonstrate cybersecurity protections for model weights, and show third‑party evaluation outcomes.
  • Open‑source contributors and smaller model developers — may experience indirect costs or uncertainty if the compute‑based definition and commercial revenue threshold create pressure to centralize capabilities or to obscure training details to avoid coverage.

Key Issues

The Core Tension

The central dilemma pits transparency for public safety against the protection of proprietary, cybersecurity, and national‑security interests. The law seeks to force disclosure of risk assessments and incidents to improve emergency response and oversight, but broad redactions and confidentiality weaken public accountability, while narrow redactions risk exposing trade secrets or facilitating misuse of dangerous capabilities. No formula fully satisfies both objectives, so the statute delegates hard judgment calls to developers, OES, and the Attorney General; those choices will determine whether the law improves safety in practice or mainly imposes compliance costs.

SB 53 places visibility requirements on the firms most likely to build the most capable models, but the statute balances transparency against trade secrets, cybersecurity, and national security by allowing redactions and by exempting submitted materials from the Public Records Act. That balance is conceptually tidy but practically fraught: the usefulness of published reports depends on how aggressively developers redact and how candidly OES and the Attorney General describe the types of incidents they receive without revealing sensitive details.

Compliance teams will therefore need defensible redaction policies that can survive AG review while still providing meaningful public information.

The bill relies on technical thresholds—most notably the >10^26 operations training threshold and a $500 million affiliate‑revenue trigger—to draw a bright line around coverage. Those numeric thresholds are easy to apply but risk both false negatives (dangerous systems trained under the threshold) and false positives (large, well-funded projects that do not present the same systemic risks).

Developers can also restructure training (splitting runs across entities, or shifting compute into later fine‑tuning stages) or allocate compute so as to stay under a legal definition, raising monitoring and enforcement challenges. Finally, authorizing OES to accept federal standards as equivalent introduces a layer of intergovernmental dependence: compliance obligations may shift if and when federal rules emerge, creating uncertainty for firms operating across jurisdictions.
