This bill establishes the California AI Standards and Safety Commission, sets rules for designating independent verification organizations (IVOs) and multistakeholder regulatory organizations (MROs), and requires those entities to operate under plans the state approves. Designated IVOs and MROs audit, certify, and monitor AI developers and deployers against measurable risk metrics, report aggregated findings publicly, and may have designations revoked for failures or harms.
The statute also creates a procedural advantage in civil litigation: developers whose models are certified by an MRO at the time of injury receive a rebuttable presumption that they exercised reasonable care. The package establishes a state-backed private-certification regime intended to inform procurement and reduce certain liability risks, while imposing new compliance, reporting, and independence requirements on verifiers and AI firms.
At a Glance
What It Does
Creates a state commission to vet and designate private verification bodies (IVOs) and requires the Attorney General to designate multistakeholder regulatory organizations (MROs). Those designated entities must implement audited plans to certify AI models and monitor post‑deployment changes, collect and report aggregated data, and follow conflict‑of‑interest and independence rules.
Who It Affects
AI developers and deployers operating or offering services in California, private labs and security vendors used for evaluation, the Attorney General and the new state commission, and state agencies that procure AI tools. Civil litigants and courts will face new evidentiary presumptions when certified models are implicated in injury claims.
Why It Matters
The bill effectively creates a state‑backed market for private AI certification and ties certification to liability advantage and procurement guidance. That combination can shift incentives for safety engineering, litigation strategy, and vendor selection across the California market—potentially influencing national practice given California’s procurement scale.
What This Bill Actually Does
The statute creates two parallel but connected tracks for private verification. The Attorney General designates one or more MROs after reviewing applicant plans; the new California AI Standards and Safety Commission—set up within the Government Operations Agency—designates IVOs after a similar plan review.
Both MRO and IVO applicants must submit detailed plans describing how they will audit models and applications, define acceptable risk levels, monitor post‑certification changes, use third‑party security vendors, collect metadata, protect trade secrets, implement whistleblower protections, and remediate noncompliance.
Designation is time‑limited: both MROs and IVOs receive three‑year designations and must reapply. The law lists specific triggers for revocation, including materially misleading plans, systematic failures to follow the plan, loss of independence, obsolescence of methods, or an instance where a certified model causes significant or material harm.
Designated entities must annually audit board composition, resources, funding sources, and civil‑society representation and report those audits to the designating authority to demonstrate ongoing independence.
The Commission’s membership, appointment conditions, and ethics rules emphasize technical qualifications and independence: governor‑appointed seats cover both frontier‑model expertise and smaller‑scale AI, plus civil society, labor, and research representation; ex officio participation includes the Attorney General and disaster/ethics experts. Members cannot hold stock in major AI companies (outside mutual funds), must avoid incompatible employment during service, and face a one‑year cooling‑off bar on joining designated IVOs.
The Commission maintains a public registry of IVOs, issues biennial reports to the Legislature summarizing standards and gaps, and provides procurement guidance to state agencies.
On liability, the statute creates a rebuttable presumption in civil suits for personal injury or property damage: if an MRO certified the model at the time of the plaintiff’s injury, the developer is presumed to have exercised reasonable care. Plaintiffs may introduce contrary evidence to rebut that presumption.
The law also requires designated entities to retain records related to their activities for 10 years and permits the Attorney General and Commission to set fees and regulations — including rules to manage conflicts of interest and minimize antitrust and trade‑secret exposure from aggregated reporting.
The Five Things You Need to Know
1. Designations for both IVOs and MROs expire after three years and are renewable only by reapplication.
2. A developer whose model was MRO‑certified at the time of injury receives a rebuttable presumption of reasonable care in civil actions for personal injury or property damage.
3. Designated IVOs and MROs must retain documents related to their activities for 10 years and annually audit board composition, funding, resources, and civil‑society representation to demonstrate independence.
4. Commission members face a one‑year post‑service bar on accepting employment with an entity designated as an IVO and cannot hold direct equity (outside mutual funds) in major AI developers during their term.
5. Applicant plans must include measurable definitions of “acceptable levels of risk,” target metrics and data sources, and technical thresholds that trigger recertification when models are updated.
Section-by-Section Breakdown
Key terms that define coverage and roles
This section sets the statute’s scope by defining core terms: what counts as an artificial intelligence model or application, who is a developer or deployer, and which organizations qualify as MROs or IVOs. Those definitions govern who needs certification, who may be audited, and what activities fall under the law’s monitoring and reporting requirements. The breadth of "deployer"—including entities that make models available as part of services—means SaaS providers and platform operators are captured, not just in‑house model authors.
Attorney General designates MROs and vets their plans
Under these provisions, the Attorney General reviews applicant MRO plans against explicit criteria: personnel qualifications, evaluation rigor, measurable standards for risk mitigation, and independence from industry. Applicants must describe auditing approaches, risk mitigation for high‑impact threats (cybersecurity, biological, CBRN, malign persuasion, autonomy), disclosure processes, data collection for public reporting, and vendor use. The AG also gains authority to revoke designations for misleading plans, systematic noncompliance, compromised independence, obsolete methods, or cases where a certified model causes significant harm—creating a regulatory oversight lever tied to both process and outcomes.
Creates the California AI Standards and Safety Commission and assigns advisory, liaison, and reporting roles
The Government Operations Agency must establish the Commission with governor‑appointed experts (covering frontier and smaller‑scale AI, labor, civil society, and academia) plus ex officio agency designees. The Commission’s duties are advisory and coordinating: it analyzes standards, maintains a public IVO registry, liaises with procuring agencies, and issues biennial reports identifying standards gaps and procurement recommendations. It is explicitly empowered to provide procurement guidance that could be adopted by state purchasing agents, leveraging the registry and reports to shape public‑sector buying decisions.
Commission designates IVOs; IVO plans must operationalize measurable verification
The Commission evaluates IVO applicants against a long checklist that mirrors the MRO rules but emphasizes ongoing supervision: defining acceptable risk outcomes, setting metrics and targets, detailing continuous monitoring for post‑certification fine‑tuning, prescribing corrective actions and revocation procedures, and safeguarding trade secrets and antitrust‑sensitive information in shared data. IVOs must specify how they will certify security vendors and enforce whistleblower protections. The statute bars the Commission from modifying applicant plans, so acceptance hinges on the quality of the initial proposal.
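To make the continuous‑monitoring requirement concrete, here is a minimal sketch of how a plan’s recertification trigger might be operationalized. The metric names, thresholds, and data structures below are hypothetical illustrations for readers who think in code; the bill leaves these specifics to each applicant’s plan.

```python
# A hypothetical sketch of an IVO plan's recertification trigger.
# All names and thresholds are illustrative assumptions, not statutory text.
from dataclasses import dataclass


@dataclass
class PlanThresholds:
    """Hypothetical per-plan limits on post-certification drift."""
    max_capability_delta: float  # allowed change in a capability score
    min_safety_score: float      # floor on a safety-eval pass rate


def needs_recertification(certified: dict, updated: dict,
                          limits: PlanThresholds) -> bool:
    """Flag an updated (e.g., fine-tuned) model for recertification
    when it drifts past the thresholds defined in the approved plan."""
    capability_delta = abs(updated["capability"] - certified["capability"])
    return (capability_delta > limits.max_capability_delta
            or updated["safety"] < limits.min_safety_score)


# Example: a fine-tune that gains capability and narrows its safety margin.
limits = PlanThresholds(max_capability_delta=0.05, min_safety_score=0.90)
baseline = {"capability": 0.72, "safety": 0.95}
fine_tuned = {"capability": 0.80, "safety": 0.91}
print(needs_recertification(baseline, fine_tuned, limits))  # True
```

In practice, an approved plan would presumably define thresholds per risk category (cybersecurity, biological, and so on, per the threat list above) rather than as two scalar scores, but the logic of comparing a post‑update model against plan‑defined limits is the mechanism the statute asks applicants to specify.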
Certified verifiers must implement plans, report aggregated findings, and archive records
Designated entities must carry out their approved plans, decertify noncompliant models, and publish annual reports with aggregated capability assessments, certification results, remedial actions, and identified gaps in areas beyond personal injury and property damage. They must also perform annual independence audits and retain related documentation for 10 years. The reporting rules require metadata categories, aggregation strategies, and trade‑secret protections, forcing verifiers to balance public transparency against confidentiality and antitrust risk.
Creates a litigation presumption and authorizes regulations and fees
The law establishes a rebuttable presumption that developers exercised reasonable care if an MRO certified the model at the time of a plaintiff’s injury; plaintiffs can overcome the presumption with contrary admissible evidence. The Commission and Attorney General may also promulgate regulations on minimum plan requirements, conflict‑of‑interest disclosures, fee structures to cover administrative costs, and certification fee regimes—empowering the state to shape operational incentives for both verifiers and applicants.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Certified developers: Receiving an evidentiary presumption of reasonable care reduces exposure in tort suits and can lower defense costs and insurance premiums, improving predictability for firms that obtain certification.
- State procurement agencies: The Commission’s registry, standards analyses, and procurement recommendations give agencies vetted criteria to select AI products that meet defined safety thresholds, reducing procurement risk.
- Consumers and public‑interest organizations: Aggregated reporting, public registries, and whistleblower protections increase visibility into AI capabilities and harms, making systemic risks more discoverable and accountable.
- Verification providers and accredited labs: Designated IVOs, MROs, and certified security vendors gain a state‑backed market signal and potential demand for formal verification services from both industry and government buyers.
Who Bears the Cost
- Small and early‑stage AI developers: Preparing measurable plans, undergoing audits, and funding recertification after model updates can impose significant compliance costs that disproportionately affect smaller entrants.
- Designated IVOs and MROs: They must build and maintain robust auditing infrastructure, handle sensitive aggregated data with trade‑secret safeguards, perform annual independence audits, and absorb reporting burdens—potentially requiring staff and systems investments.
- Attorney General and Commission operations: Staffing, review of complex technical plans, enforcement of revocations, and rulemaking require resources; while fee authority exists, initial implementation could strain budgets.
- Security vendors and third‑party labs: Vendors must be certified or trained per applicants’ plans and face oversight and potential liability if their evaluations are found inadequate, raising operational costs and credentialing requirements.
Key Issues
The Core Tension
The central dilemma is that state‑endorsed private certification can improve safety incentives and streamline procurement, but it also risks shifting liability away from developers, concentrating market power among verifiers and the large firms that can afford certification, and erecting burdens that disadvantage smaller innovators. Policymakers must therefore weigh faster, standardized safety assessments against potential market distortion and reduced legal accountability.
The bill ties a meaningful litigation advantage to state‑recognized certification without specifying how courts should weigh the presumption against particular types of evidence; this invites litigation over evidentiary standards and could produce inconsistent outcomes across trial courts. The requirement that verifiers define “acceptable levels of risk” and measurable metrics is principled, but operationalizing those concepts across heterogeneous AI use cases (from medical devices to conversational agents) will be technically and politically contentious. Metrics may be easy to specify for narrow safety properties but are much harder to define for social harms or emergent behaviors.
The law also creates tensions between transparency and protection of sensitive information. Aggregating and publishing evaluation data helps public oversight but raises trade‑secret and antitrust concerns that the statute asks verifiers to manage; how those safeguards are implemented will determine whether the law truly balances public benefit with commercial confidentiality.
Finally, the independence rules—audits of funding and board composition, post‑service employment bars, and revocation triggers—aim to reduce capture, but they also limit the pool of qualified technical experts with industry experience, raising a practical governance trade‑off about where to draw the line between expertise and independence.