Codify — Article

California SB 833: Human oversight and AI adverse-event reporting for critical infrastructure

Requires state agencies running critical-infrastructure AI to put humans in the loop, complete annual safety assessments, and report AI adverse events on tight timelines to OES.

The Brief

SB 833 imposes human‑oversight and adverse‑event reporting rules for artificial intelligence systems that operate, manage, oversee, or control access to California’s critical infrastructure. The bill makes state agencies that act as operators responsible for real‑time monitoring, prior review of AI‑proposed actions (with a narrow exception), annual safety assessments, and mandatory training administered by the Department of Technology.

Separately, the Office of Emergency Services (OES) must receive standardized reports of AI “adverse events” on a series of accelerated timelines (from four hours to 14 days depending on severity). The bill defines reportable events, sets a civil penalty schedule for reporting failures, and protects certain records from public disclosure while allowing curated sharing for collaborative mitigation.

At a Glance

What It Does

SB 833 requires state agency operators of covered AI systems to implement human oversight that monitors operations in real time and reviews AI‑proposed plans before execution (unless prior review would destabilize the system). It also mandates annual assessments, Department of Technology training, and a statewide adverse‑event reporting regime administered by OES with specific deadlines and content requirements.

Who It Affects

Primary targets are state agencies defined as operators of critical‑infrastructure systems; the reporting requirement also reaches any entity whose conduct could materially affect critical infrastructure safety, security, or operations. Vendors, emergency responders, and the Department of Technology/OES are secondarily affected because they will supply, analyze, or act on the reports and training.

Why It Matters

The bill creates a formal bridge between AI operations at scale and emergency management: it operationalizes human review duties, creates a common taxonomy and timelines for incident reporting, and centralizes incident data for coordination — a change that alters compliance workflows, procurement specifications, and incident response playbooks across state infrastructure.


What This Bill Actually Does

SB 833 builds two linked regulatory tools: (1) a human‑in‑the‑loop requirement for AI systems that influence critical infrastructure, and (2) an adverse‑event reporting system to capture incidents where AI causes or contributes to serious harms. The statute establishes working definitions (AI; automated decision systems; covered AI systems; critical infrastructure) and identifies state agencies as “operators” subject to the rules.

Operators must put an oversight mechanism in place by July 1, 2026. That mechanism must deliver two capabilities: real‑time monitoring of the AI system’s operations and a procedure to review and approve any plan or action the system proposes before it executes.

If requiring prior approval would substantially destabilize an existing operational system, the operator can instead implement periodic retrospective review, but must document that choice. Each operator must designate oversight personnel, who are required to complete annual training developed by the Department of Technology. Oversight personnel must also run an annual assessment of covered systems.

Those assessments must check statutory compliance, evaluate performance and safety, and surface vulnerabilities (including risks that could cause mass casualties or large property damage) and proposed updates to oversight mechanisms; summaries of those assessments must go to the Department of Technology. The bill cross‑references existing state risk‑analysis duties to align reporting cycles with Section 11549.65 where applicable.

Separate from operator duties, the Office of Emergency Services will host an adverse‑event reporting program.

SB 833 defines an “AI adverse event” with explicit thresholds (death; serious injury; disruption >1 hour; data compromise >100 individuals; loss >$50,000; system failures requiring manual intervention; or failures that could foreseeably lead to mass casualty). Entities must file reports according to severity: immediate four‑hour reports for ongoing urgent threats, 24 hours for death/serious injury, 72 hours for major disruptions or data compromises, and up to 14 days for other events.

Reports must include system identification, vendor and version information, training data sources where relevant, human‑oversight status, a description of the event and impact, response/mitigation steps, and a designated contact. OES may broaden participation, share aggregated or event data with authorized entities, and publish summaries while exempting privileged or exempt records from disclosure.
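As an illustration only (the bill prescribes severity tiers and deadlines, not a data schema), the tiered reporting windows above can be sketched as a simple classification. The field names below are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical event record; SB 833 defines the tiers, not this schema.
@dataclass
class AdverseEvent:
    ongoing_urgent_threat: bool = False
    death_or_serious_injury: bool = False
    disruption_hours: float = 0.0
    individuals_compromised: int = 0

def reporting_deadline_hours(e: AdverseEvent) -> int:
    """Map an adverse event to its OES reporting deadline, in hours."""
    if e.ongoing_urgent_threat:
        return 4            # immediate report for ongoing urgent threats
    if e.death_or_serious_injury:
        return 24           # death or serious physical injury
    if e.disruption_hours > 1 or e.individuals_compromised > 100:
        return 72           # major disruption or data compromise
    return 14 * 24          # all other reportable events: 14 days
```

Note that the most severe applicable tier controls: an outage that is still an ongoing urgent threat triggers the four‑hour report even though the disruption alone would allow 72 hours.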

The Five Things You Need to Know

1

Deadline: Operators must implement a human oversight mechanism for covered AI systems by July 1, 2026.

2

Reporting windows: AI adverse events must be reported to OES within 4 hours if an ongoing urgent threat, 24 hours for death/serious injury, 72 hours for major disruption or data compromise, or 14 days for other reportable events.

3

Adverse‑event thresholds: A reportable AI adverse event includes death, serious physical injury, >1 hour critical‑infrastructure disruption, data compromise affecting >100 people, financial loss >$50,000, or system failures requiring manual intervention.

4

Assessments and training: Oversight personnel must complete annual AI‑safety training from the Department of Technology and submit annual system safety assessments summarizing risks and compliance.

5

Penalties and confidentiality: Late reporting draws a civil penalty up to $500 per seven‑day period of noncompliance; OES must protect privileged, copyright‑protected, or otherwise exempt records from public disclosure while allowing authorized sharing for mitigation.
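As a back‑of‑the‑envelope illustration of the penalty schedule: the bill sets the rate (up to $500 per seven‑day period of noncompliance), but the rounding convention below — each started period accruing the full amount — is an assumption, not statutory text:

```python
import math

def late_reporting_penalty(days_late: int, rate_per_period: int = 500) -> int:
    """Maximum civil penalty at $500 per seven-day period of noncompliance.

    Assumes each partial seven-day period accrues the full per-period
    amount; the statute's exact rounding treatment is not specified here.
    """
    if days_late <= 0:
        return 0
    periods = math.ceil(days_late / 7)
    return periods * rate_per_period
```

For example, a report filed 10 days late spans two seven‑day periods, for a maximum exposure of $1,000 under this reading.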

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1

Legislative findings and purpose

This section summarizes why the Legislature is acting: rapid GenAI deployment, recommendations from the state’s AI working group, and the need for workforce training and adverse‑event reporting in high‑risk domains. It frames the bill as a public‑safety measure and establishes the policy basis for limiting public access to some records on confidentiality grounds.

Section 8592.51

Definitions; human oversight and annual assessments (OES code addition)

Section 8592.51 defines key terms for adverse‑event reporting and requires operators—defined here as state agencies in charge of critical infrastructure—to put in place real‑time monitoring and prior‑approval oversight for AI actions (with a narrow exception for destabilizing legacy systems). It also tasks the Department of Technology with providing specialized training and requires annual assessments that evaluate compliance, safety, vulnerabilities, and potential mass‑casualty risks, with summaries sent to the Department.

Section 8592.52

AI adverse‑event reporting duties and OES powers

This section prescribes what constitutes an AI adverse event and sets tiered reporting deadlines tied to severity. It lists detailed report contents (system type, vendor/version, training data sources, oversight status, root‑cause information, and mitigation actions). The office (OES) can invite voluntary reporters, share data with authorized entities, publish aggregated statistics, and is required to withhold privileged or otherwise exempt information from disclosure.

Article 6.6 (Sections 8954.50–8954.52)

Government Code definitions and covered AI system requirements

Article 6.6 replicates and expands definitions (covered AI systems; critical infrastructure sectors) and moves the human‑oversight, training, and assessment duties into the Department of Technology’s statutory framework. It specifies that operators must designate at least one trained oversight employee and adds a dollar threshold ($500,000) for property damage in the assessment requirement, differentiating the Article’s thresholds from the OES adverse event thresholds.

Section 3 (constitutional finding and disclosure limits)

Constitutional findings and record‑protection clause

The Legislature expressly finds the reporting provisions limit public access to certain records and supplies the statutorily required findings to justify that limitation under Article I, Section 3 of the California Constitution. The bill also codifies an exemption for privileged or otherwise exempt records in OES’s handling of reports and authorizes OES to withhold records when nondisclosure serves the public interest.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • State emergency managers and OES: gain a standardized feed of AI‑related incidents and a legal channel for sharing aggregated intelligence to coordinate mitigation and reduce cascading failures.
  • Department of Technology and oversight personnel: receive consolidated assessment summaries and mandatory training, improving institutional expertise for managing AI risks across state systems.
  • Residents served by critical infrastructure: benefit indirectly through required human oversight, mandatory safety assessments, and rapid reporting of severe AI incidents that threaten health, safety, or essential services.
  • Researchers and private‑sector partners: authorized access to aggregated or de‑identified incident statistics can support safety research, vulnerability remediation, and development of best practices.

Who Bears the Cost

  • State agencies acting as operators: must allocate staff to serve as oversight personnel, integrate real‑time monitoring and prior‑approval processes, and prepare annual assessments — all before July 1, 2026.
  • Vendors and system integrators: will face disclosure requests for vendor/version and training‑data provenance during incident reporting and may need to support investigations and remediation.
  • Office of Emergency Services and Department of Technology: will bear administrative costs to run the reporting platform, vet incoming reports, curate shareable data, and deliver training without a specified funding stream in the bill.
  • Entities other than state operators whose conduct can affect infrastructure: must understand the broad definition of reportable entities and may need to submit rapid reports under tight timelines or face penalties.

Key Issues

The Core Tension

The bill tries to square two competing demands: rapid transparency and central reporting of AI failures (which argues for detailed public reporting and swift information sharing) versus protecting sensitive infrastructure details and ensuring operational continuity (which argues for narrow disclosure and minimal disruption). Requiring human oversight improves safety but can slow or destabilize systems designed to operate at machine speed — a trade‑off the bill leaves largely to operational judgment and future guidance.

SB 833 leaves several operational details to implementing guidance, producing implementation risks. The bill’s working definitions sweep broadly: its definition of “artificial intelligence” and “automated decision system” could capture diverse software used in control rooms, telemetry, and analytics platforms.

Agencies will need clear guidance on whether a given tool is a covered AI system versus an excluded utility (spam filters, firewalls, etc.).

The reporting timelines are strict, but the bill does not supply explicit resources or protocols for triage, verification, redaction, or secure submission of sensitive datasets. Four‑hour notification for ongoing threats and 24‑hour reporting for death/serious injury create practical friction for operators that must simultaneously manage emergency response, forensic preservation, and legal/privacy obligations.

The civil penalty ($500 per seven‑day period of noncompliance) is modest, raising questions about whether deterrence will rest on enforcement or effectively on voluntary compliance.

The bill’s confidentiality carveouts and the constitutional finding allow OES to withhold records, but the criteria for withholding — especially balancing the public’s right to know against infrastructure risk — are fact‑dependent and may spawn litigation. Finally, the exception that permits retrospective review when prior approval would destabilize operations introduces potential loopholes; agencies could claim operational necessity to avoid proactive human review, undermining the statute’s intent unless the Department of Technology provides strict implementation standards.
