
California bill requires risk assessments, disclosures, and governance for high‑risk automated decision systems

SB 420 mandates impact assessments, governance programs aligned with the NIST AI Risk Management Framework, individual notices, and civil enforcement for AI systems that materially affect people.

The Brief

SB 420 creates a suite of compliance obligations for so-called high-risk automated decision systems (ADS) used in California. The bill requires developers to perform pre-deployment impact assessments and requires deployers to assess systems they put into production, maintain documented governance programs, disclose certain information to affected people, and provide a path to human review.

The measure targets systems that materially affect access to education, employment, housing, utilities, health care, lending, legal services, and essential government services. It couples substantive assessment and governance requirements with state civil enforcement, confidentiality protections for assessments, trade secret carve-outs, and size-based exemptions, creating both compliance obligations and practical procurement implications for vendors and public agencies operating in California.

At a Glance

What It Does

The bill requires developers to produce impact assessments for high‑risk ADS before public availability and obliges deployers to perform or obtain impact assessments for systems they put into use. It also mandates documented governance programs, public statements about deployed systems, individual notices and an appeal opportunity, and authorizes state enforcement with civil penalties.

Who It Affects

Developers and deployers of ADS used in California, particularly those supplying or using systems that materially affect education, employment, housing, utilities, health care, lending, legal services, or essential government services. State agencies, procurement officers, and vendors who modify existing systems are all covered.

Why It Matters

SB 420 establishes state‑level, mandatory AI risk management for high‑impact uses, ties governance to existing standards (NIST AI Risk Management Framework), and makes algorithmic discrimination a civil enforcement focus—shaping product design, vendor contracts, and public‑sector procurement practices.


What This Bill Actually Does

The bill starts by defining the covered universe: an automated decision system is a computational process built from machine learning, statistical modeling, analytics, or AI that produces scores, classifications, or recommendations and that can materially impact people. 'High-risk' systems are those used for decisions that have legal or similarly significant effects; examples called out include admission to schools, hiring and wages, essential utilities, housing, health care, lending, legal services, and essential government services. Narrow procedural tools and systems that only detect patterns without influencing decisions are excluded.

Developers must perform an impact assessment before making a high‑risk ADS publicly available if that availability occurs on or after January 1, 2026. Developers who substantially modify pre‑2026 systems must complete an assessment by January 1, 2028.

Deployers, meaning those who use high-risk ADS in the state, must perform an impact assessment within two years of deploying systems first put into service after January 1, 2026, although state agencies can opt out of performing their own assessment under strict conditions (use is as intended, developer compliance with procurement rules, no reasonable basis to suspect discrimination, and compliance with the bill's disclosure rules).

The statute spells out what an impact assessment must contain: the system's purpose and intended deployment contexts; its intended outputs; the types and processing of input data; a summary of reasonably foreseeable disproportionate impacts on protected classifications; safeguards the developer has implemented and guidance for deployers to monitor those risks; the extent to which a deployer's use matches the developer's intended use; deployer safeguards; and how the system will be monitored and evaluated. If an assessment concludes the system is likely to produce algorithmic discrimination, the bill bars deployment unless the deployer or developer implements mitigating safeguards and performs an updated assessment verifying the mitigation.

Deployers must provide notice to any natural person subject to a decision made by a high-risk ADS: the system's purpose and the specific decision, how the system was used, the type of data used, contact information for the deployer, and a link to the public statement required on the deployer's website.

That website statement must list the types of high-risk ADS in use, how the deployer manages known discrimination risks, and the nature and source of information collected. Deployers must also, as technically feasible, offer the person an opportunity to appeal the ADS decision for human review.

Both developers and deployers must implement and maintain a documented governance program tailored to the system's use and to the size, complexity, and resources of the organization.

The governance program must include administrative and technical safeguards, align with existing frameworks such as the NIST AI Risk Management Framework, specify processes and personnel for identifying and mitigating discrimination risks, be regularly reviewed, and include incident documentation and resolution procedures. The bill includes a trade secret carve-out for disclosure obligations but requires notice when information is withheld on that basis.

Enforcement is vested in the California Attorney General and the Civil Rights Department, which can request impact assessments (developers must provide them within 30 days, and those documents are confidential).

The state can pursue civil actions seeking injunctive relief, attorneys' fees, and tiered civil penalties for failing to conduct required assessments (amounts vary by employer size), with enhanced penalties for intentional noncompliance and a $25,000 per-violation penalty where algorithmic discrimination is involved. Before suing, the state must give 45 days' written notice and allow a 45-day cure period; cured violations supported by a signed declaration cannot be the subject of an action.

Finally, the chapter exempts entities with 50 or fewer employees and systems already approved or cleared by a federal agency under substantially similar or more stringent rules.

The Five Things You Need to Know

1. Developers must complete a written impact assessment before making a high-risk ADS publicly available if that availability occurs on or after January 1, 2026; substantial modifications to pre-2026 systems trigger assessments by January 1, 2028.

2. Deployers must perform an impact assessment within two years of deploying a high-risk ADS first put into use after January 1, 2026; state agencies may opt out only if several narrow conditions are met, including no reasonable basis to suspect discrimination.

3. An impact assessment must identify purpose, intended outputs, input data types and processing, foreseeable disproportionate impacts on protected classifications, developer safeguards, monitoring approaches, and how deployer use compares to the developer's intended use.

4. When an ADS is used to make an individual decision, the deployer must notify the person of the system's purpose, how it was used, and the data types involved; provide contact information; post a public statement about deployed systems; and, if technically feasible, offer a human appeal.

5. The Attorney General or Civil Rights Department can demand assessments (with a 30-day response deadline), bring civil suits, and seek tiered fines for failing to conduct assessments, escalating daily penalties for intentional violations, and a $25,000 penalty for violations involving algorithmic discrimination; a 45-day notice-and-cure mechanism applies before suits proceed.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

22756

Definitions and scope for high‑risk automated decision systems

This opening section defines core terms the rest of the chapter depends on: what counts as an automated decision system, which uses are 'high‑risk', who qualifies as a developer and a deployer, what constitutes a substantial modification, and which classes of systems are excluded (spam filters, firewalls, narrow procedural tools, systems that only detect patterns). Practical implication: compliance obligations hinge on sometimes subtle thresholds—whether a function 'materially impacts' a person or has a 'legal or similarly significant effect' is the gatekeeper for the rest of the regime.

22756.1

Timing of required impact assessments for developers and deployers

This provision sets deadlines: developers must assess before making a high-risk ADS publicly available on or after January 1, 2026, and substantial updates to legacy systems trigger assessment by January 1, 2028. Deployers face a two-year window to complete an assessment after first deploying systems put into service after January 1, 2026. State agencies may decline to perform their own assessment only when several conditions are satisfied (use matches developer intent, no substantial modification, developer procurement compliance, and no reasonable basis to anticipate discrimination), shifting the practical emphasis to procurement compliance and vendor documentation.
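To make the timing rules concrete, here is a minimal decision sketch. The dates are the bill's as described above, but every function and variable name is ours, and the two-year window is approximated as 730 days.

```python
from datetime import date, timedelta

# Statutory dates as described in this section; all identifiers are illustrative.
NEW_SYSTEM_CUTOFF = date(2026, 1, 1)
LEGACY_MOD_DEADLINE = date(2028, 1, 1)

def developer_assessment_due(publicly_available: date, modified_legacy: bool) -> str:
    """When a developer's impact assessment comes due (hypothetical helper)."""
    if publicly_available >= NEW_SYSTEM_CUTOFF:
        return "before public availability"
    if modified_legacy:
        return f"by {LEGACY_MOD_DEADLINE.isoformat()}"
    return "no deadline described for unmodified pre-2026 systems"

def deployer_assessment_due(first_deployed: date) -> str:
    """Deployers get two years (approximated here as 730 days) for systems
    first put into service on or after January 1, 2026."""
    if first_deployed >= NEW_SYSTEM_CUTOFF:
        return f"by {(first_deployed + timedelta(days=730)).isoformat()}"
    return "outside the two-year rule described for post-2026 deployments"

print(developer_assessment_due(date(2026, 6, 1), False))  # before public availability
```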

22756.2

Mandated contents of an impact assessment

The statute prescribes a checklist-style content requirement: purpose and intended contexts, intended outputs, input data types and how they are processed, reasonably foreseeable disproportionate impacts on protected classifications, developer-side safeguards and deployer monitoring instructions, and plans for ongoing monitoring and evaluation. For compliance teams this means impact assessments must be technical and operational documents, neither a marketing sheet nor legal boilerplate.
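For teams drafting a template, the checklist maps naturally onto a structured record. A sketch under the assumption that a flat document suffices; every field name is ours, not the statute's:

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """Illustrative record mirroring the checklist described above.
    Field names are hypothetical; consult the bill text for exact wording."""
    purpose: str                          # system's purpose
    intended_contexts: list[str]          # intended deployment contexts
    intended_outputs: list[str]           # scores, classifications, recommendations
    input_data_types: list[str]           # types of input data and how processed
    foreseeable_disparate_impacts: str    # summary re: protected classifications
    developer_safeguards: list[str]       # safeguards implemented by the developer
    deployer_monitoring_guidance: str     # developer guidance for deployer monitoring
    use_vs_intended_use: str              # how deployer use matches developer intent
    deployer_safeguards: list[str]        # deployer-side safeguards
    monitoring_and_evaluation_plan: str   # ongoing monitoring and evaluation
```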

22756.3

Notification, public statements, and appeal rights for affected individuals

Deployers must notify any natural person subject to a decision made by a high‑risk ADS with specific items: the system’s purpose and the particular decision, how the system was used, the types of data consumed, contact information, and a link to a public statement. The public statement must list ADS types in use, how the deployer manages discrimination risks, and data sources. The obligation to provide an appeal to a human reviewer is qualified by 'as technically feasible', which preserves operational discretion but leaves open compliance risk in litigation or enforcement.
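The notice obligation is effectively a fixed payload. A minimal sketch of assembling it, with illustrative keys and parameter names:

```python
def build_individual_notice(system_purpose: str, decision: str, how_used: str,
                            data_types: list[str], deployer_contact: str,
                            public_statement_url: str) -> dict:
    """Assemble the items the notice must contain, per the description above.
    Keys and structure are illustrative, not statutory."""
    return {
        "system_purpose_and_decision": f"{system_purpose}: {decision}",
        "how_system_was_used": how_used,
        "data_types_used": data_types,
        "deployer_contact": deployer_contact,
        "public_statement_link": public_statement_url,
    }
```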

22756.4

Governance program requirements; alignment with NIST

Both developers and deployers must maintain a governance program tailored to the system’s use and organizational capacity. The program must include administrative and technical safeguards, document processes and personnel responsible for risk management, be regularly reviewed, and include an incident documentation and resolution framework. The bill explicitly directs alignment with the NIST AI Risk Management Framework, giving organizations a clear technical reference point but also creating a compliance expectation tied to a specific standard.
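One way to operationalize the alignment requirement is to index the program's required elements against the AI RMF's four core functions (Govern, Map, Measure, Manage). The functions are NIST's; where each statutory element lands below is our editorial judgment, not the bill's:

```python
# Hypothetical mapping of SB 420 governance-program elements onto the four
# core functions of the NIST AI Risk Management Framework.
GOVERNANCE_PROGRAM = {
    "Govern": [
        "named personnel responsible for risk management",
        "regular program review cadence",
    ],
    "Map": [
        "document system use, context, and organizational capacity",
    ],
    "Measure": [
        "processes for identifying discrimination risks",
    ],
    "Manage": [
        "administrative and technical safeguards",
        "incident documentation and resolution procedures",
        "mitigation of identified discrimination risks",
    ],
}
```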

22756.5 & 22756.6

Trade secret carve‑outs and deployment prohibition if likely to discriminate

The law permits withholding disclosure when it would waive a legal privilege or reveal a trade secret, but requires notice of the basis for non‑disclosure. Separately, it bars deployment where an assessment finds a system is likely to cause algorithmic discrimination—unless the parties implement mitigating safeguards and perform a follow‑up assessment to verify mitigation. This creates a two‑step regulatory lever: prevention of likely discrimination and conditional allowance where mitigation is verified.
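The two-step lever reduces to a simple gate. A sketch with hypothetical names:

```python
def may_deploy(likely_discrimination: bool,
               mitigations_implemented: bool,
               updated_assessment_verifies: bool) -> bool:
    """Sketch of the deployment prohibition described above (names are ours).
    Deployment is barred when discrimination is likely, unless safeguards are
    implemented AND a follow-up assessment verifies the mitigation."""
    if not likely_discrimination:
        return True
    return mitigations_implemented and updated_assessment_verifies
```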

22756.7

Enforcement powers, confidentiality, notice and cure, and penalties

The Attorney General and the Civil Rights Department can request impact assessments (developers must provide them within 30 days) and bring civil actions. Requested assessments and those provided to state agencies are confidential. The statute requires 45 days’ written notice before an enforcement action and permits a 45‑day cure period; cured violations supported by a sworn statement cannot be litigated. Financial penalties are tiered by entity size for failing to perform an assessment, escalate for intentional noncompliance on a per‑day basis, and include a $25,000 per‑violation penalty for algorithmic discrimination claims; courts can also award injunctive relief and fees.
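The deadlines compose into straightforward date arithmetic. In the sketch below, names are ours, and the cure period is assumed to run concurrently with the 45-day notice period, which the bill text would need to confirm:

```python
from datetime import date, timedelta

def enforcement_deadlines(assessment_requested: date, notice_served: date) -> dict:
    """Illustrative deadline arithmetic: 30 days to produce a requested
    assessment; 45 days' written notice before suit, treated here as
    coextensive with the cure window (an assumption, not the statute)."""
    return {
        "assessment_due": assessment_requested + timedelta(days=30),
        "earliest_suit": notice_served + timedelta(days=45),
        "cure_window_closes": notice_served + timedelta(days=45),
    }

print(enforcement_deadlines(date(2026, 3, 2), date(2026, 5, 1)))
```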

22756.8

Limited exemptions

Two narrow exemptions appear: entities with 50 or fewer employees are outside the chapter’s reach, and high‑risk ADS already approved or cleared by a federal agency under a substantially similar or more stringent law are exempt. These carve‑outs limit scope but create potential edge cases around workforce measurement and equivalence of federal standards.
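The two carve-outs make applicability a short predicate. A sketch with hypothetical names, leaving the edge cases (how to count employees, what counts as an equivalent federal rule) to the statute:

```python
def chapter_applies(employee_count: int, federally_cleared_equivalent: bool) -> bool:
    """Sketch of the two exemptions described above (names are ours):
    entities with 50 or fewer employees are exempt, as are systems already
    approved or cleared by a federal agency under a substantially similar
    or more stringent rule."""
    if employee_count <= 50:
        return False
    if federally_cleared_equivalent:
        return False
    return True
```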


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Individuals in protected classes: the bill requires deployers to assess and disclose foreseeable disproportionate impacts and to mitigate algorithmic discrimination, which creates additional avenues to identify and reduce systems that produce unfair outcomes.
  • Consumers and applicants (students, jobseekers, borrowers, housing applicants): they gain notice when ADS influence decisions, a public statement about deployed systems, and an opportunity for human review 'as technically feasible', improving transparency and redressability.
  • Civil rights and enforcement agencies: the Attorney General and the Civil Rights Department receive investigatory access (with confidentiality) and statutory authority to pursue civil penalties and injunctive relief, strengthening oversight capacity.
  • Vendors and developers with mature risk programs: firms that already implement robust impact assessments and governance will secure a compliance advantage in California procurement and may use their documentation to support state agency opt‑outs.

Who Bears the Cost

  • Developers of high‑risk ADS: they must produce substantive impact assessments, document mitigation strategies, and respond to government requests within 30 days—adding up‑front and ongoing compliance costs, particularly for small and medium vendors.
  • Deployers (private companies and public agencies): they must perform or obtain assessments, post public disclosures, maintain governance programs, and provide individual notices and appeals logistics—raising operational, legal, and procurement burdens.
  • Small and mid‑sized vendors near exemption thresholds: firms with slightly more than 50 employees may face disproportionate compliance burdens compared with smaller competitors who are exempt, creating potential market distortions.
  • State procurement and legal teams: state agencies must track developer compliance, decide whether to opt out of assessments, and manage confidential assessment materials—requiring internal processes, contract language changes, and possibly more procurement oversight resources.

Key Issues

The Core Tension

The bill's central dilemma is balancing stronger, state-level prevention of discriminatory automated decisions against preserving innovation, proprietary model protection, and workable procurement. Rigorous assessments and public notice reduce harm, but they raise costs, increase vendor burden, and can push sensitive details into confidential channels. The law must therefore choose how much visibility and enforcement power to grant while avoiding chilling beneficial uses of AI.

SB 420 stacks transparency and governance requirements against confidentiality and trade‑secret protections, producing implementation tensions. The bill keeps developer and agency‑provided impact assessments confidential, and allows trade secret non‑disclosure with notice.

That structure protects proprietary models but reduces the amount of public, reproducible evidence available to assess systemic harms. The result is more control for regulators behind closed doors, but less public accountability and third‑party verification.

Several operational ambiguities will drive compliance disputes. Key terms—'materially impacts', 'legal or similarly significant effect', 'substantial modification', and 'algorithmic discrimination'—are defined but leave room for factual contest.

The statute’s 'as technically feasible' standard for human appeals, and the state‑agency opt‑out that depends on developer compliance with procurement rules, create conditional obligations that hinge on procurement practices and evolving technical capabilities. The penalty scheme also raises questions: exemptions for entities with 50 or fewer employees sit uneasily next to penalty tiers that reference 100 and 500 employee thresholds, and the 45‑day cure window may blunt deterrence for well‑resourced actors able to rapidly remediate deficiencies.
