This bill directs the California Department of Technology to create and maintain a comprehensive, annual inventory of high‑risk automated decision systems (ADS) that are proposed, procured, developed, or used by state entities. The inventory must describe each system’s decision-making role, the data it uses, risk‑mitigation measures, research on efficacy, and feasible alternatives.
The obligation is intended to centralize information about state deployment of ADS so policy makers, auditors, and procurement officers can identify risks, preserve options, and prioritize oversight. The statute also draws narrow exclusions and defines key terms to shape which systems are captured and which state actors must report information.
At a Glance
What It Does
The bill requires the Department of Technology to assemble a yearly, statewide catalog of high‑risk ADS in coordination with appropriate interagency bodies. The catalog must enumerate the systems’ uses, alternatives, data categories, risk‑mitigation measures, and any assessments of effectiveness.
Who It Affects
The obligation applies to state offices, departments, divisions, and bureaus, plus the California State University, the Board of Parole Hearings, and professional licensing and regulatory bodies administered under the Department of Consumer Affairs; it expressly omits the University of California, the Legislature, and the judicial branch. Vendors supplying ADS to those covered entities may need to provide technical details to satisfy inventory requirements.
Why It Matters
For procurement officers and compliance teams, the inventory creates a single source to evaluate whether ADS deployments are justified, effective, and managed with appropriate controls. For privacy, civil‑liberties, and risk teams, the statute creates a foundation for targeted audits, contestability processes, and policy decisions about limiting or pausing risky applications.
What This Bill Actually Does
The bill creates both a compliance obligation and a transparency asset. It starts by defining the vocabulary the Department will use: an ADS is a computational process (from machine learning, statistical modeling, data analytics, or AI) that issues simplified outputs used to assist or replace discretionary human decisions and that materially affects people.
The definition excludes routine defensive and administrative software (like spam filters or identity and access tools) so the inventory focuses on systems that affect rights or access.
The statute singles out “high‑risk” ADS as those used in decisions with legal or similarly significant effects; the examples the text names include housing, education, employment, credit, health care, and criminal justice. There’s a narrowly drawn carve‑out for automated tools that only screen benefit applications for completeness or pre‑clear applicants against preset thresholds, provided those systems never determinatively deny benefits without human review.
What the Department must collect goes beyond a name‑and‑vendor list. For each high‑risk system the inventory must describe the decisions the system can make or support and the intended benefits, document alternatives to that use, summarize research on the system’s efficacy versus alternatives, list the categories of data and personal information the system consumes, and catalog any technical or procedural measures to mitigate risks: for example, performance metrics, cybersecurity and privacy controls, audits or risk assessments, and processes allowing people to contest automated outputs.
Operationally, the Department is instructed to carry out the inventory annually and may work with other interagency groups. The statutory language identifies which state actors are in scope and which are not, defines multi‑member “boards,” and ties the reporting process to submission rules in California law.
The bill directs delivery of the inventory report to the Assembly Committee on Privacy and Consumer Protection and the Senate Committee on Governmental Organization, creating a legislative record that agencies and oversight offices can use to prioritize follow‑up work.
The Five Things You Need to Know
The bill defines “artificial intelligence” as a machine‑based system that can infer from inputs how to generate outputs that influence physical or virtual environments.
An “automated decision system” is limited to computational processes producing simplified outputs (scores, classifications, recommendations) that materially impact natural persons and explicitly excludes tools such as spam filters, firewalls, antivirus software, identity and access management, calculators, and raw datasets.
The statute exempts ADS used exclusively to verify the completeness of social‑services benefit applications or to confirm minimum eligibility thresholds, provided any negative eligibility indication is not acted on until a human reviews the case.
The Department must include, for each listed system, descriptions of the decisions it supports, alternatives, research on efficacy, data categories used, and any risk‑mitigation measures (including performance metrics, cybersecurity, privacy controls, audits, and contestability processes).
The first report is scheduled for submission on or before January 1, 2025, and annually thereafter; the text also contains an inoperative/sunset cross‑reference that raises ambiguity about how long the reporting obligation will remain in force.
Section-by-Section Breakdown
Definitions and scope
This subsection establishes the statutory vocabulary the Department will use: it defines “artificial intelligence,” “automated decision system,” “board,” and “department,” and sets out what counts as a “state agency.” The definition of ADS narrows the universe by excluding commonplace IT tools and raw datasets, focusing reporting duties on systems that can materially affect natural persons. Practically, these definitions determine what must be inventoried and whom the Department can require to contribute information.
Targeted exclusions and covered entities
The bill creates a narrow exception for automated tools used only to verify benefit application completeness or to affirm minimum eligibility thresholds, provided any system output indicating ineligibility triggers human review rather than automatic denial. It also lists the state entities covered (state offices and departments, California State University, Board of Parole Hearings, and DCA‑administered licensing bodies) and explicitly excludes the University of California, the Legislature, and the judicial branch—so implementation effort will be concentrated on executive‑branch agencies and certain statewide public institutions.
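The operative condition in that carve‑out (no determinative denial without human review) amounts to a routing rule. The sketch below is purely illustrative, not anything the bill prescribes, and every name in it is hypothetical; it shows only the behavior the exception requires: a negative automated indication is queued for a person rather than acted on.

```python
from enum import Enum, auto

class ScreenResult(Enum):
    """Possible outputs of an automated benefits-application screen (hypothetical)."""
    COMPLETE = auto()          # application materials are complete
    MEETS_THRESHOLD = auto()   # applicant clears a preset eligibility floor
    NEGATIVE = auto()          # any indication of ineligibility

def route_application(result: ScreenResult) -> str:
    """Hypothetical routing rule matching the statutory condition:
    an automated negative indication may never deny benefits on its
    own; it must wait for human review."""
    if result is ScreenResult.NEGATIVE:
        return "queue_for_human_review"  # never an automatic denial
    return "proceed"                     # completeness/threshold confirmed
```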
Inventory content requirements
Subdivision (b) requires the Department to produce a comprehensive inventory of high‑risk ADS used, proposed, or procured by covered agencies. Subdivision (c) lays out the elements the inventory must include: the decisions supported and their intended benefits, alternatives to those uses, research on efficacy and relative benefits, the categories of data and personal information used, and the mitigation measures in place (cybersecurity, privacy, performance metrics, audits, and contestability processes). For compliance officers, these are checkboxes agencies will need to fill with documentation and supporting evidence rather than mere high‑level descriptions.
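One way to picture those checkboxes is as a single record with one field per statutory element. The sketch below is purely illustrative: the bill prescribes the inventory’s content, not a data format, and every field name here is a hypothetical mapping from the statutory language.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InventoryEntry:
    """Hypothetical shape of one high-risk ADS record in the annual inventory."""
    system_name: str
    agency: str                         # covered state entity deploying the system
    decisions_supported: List[str]      # decisions the system makes or supports
    intended_benefits: str
    alternatives_considered: List[str]  # feasible alternatives to this use
    efficacy_research: List[str]        # summaries or citations of efficacy studies
    data_categories: List[str]          # categories of data and personal information used
    mitigation_measures: List[str]      # e.g., performance metrics, audits,
                                        # cybersecurity/privacy controls,
                                        # contestability processes
```

Under this reading, an agency’s submission would populate one such record per system, with the mitigation list doubling as an audit trail for oversight committees.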
Reporting mechanics and statutory cross‑references
The Department must submit the comprehensive inventory report to two legislative committees on or before January 1, 2025, and then annually. The reporting obligation must comply with Section 9795 of the Government Code, which governs how reports are submitted to the Legislature, and includes a cross‑reference that makes the requirement inoperative on a later date under Section 10231.5; the bill text as presented contains an apparent inconsistency about that inoperative date. This section sets the delivery channel for oversight but leaves open questions about public access and the format of submissions beyond the cross‑reference to existing filing rules.
Who Benefits and Who Bears the Cost
Who Benefits
- State policy makers and legislative oversight committees — they gain a standardized evidence base to prioritize audits, legislate protections, and evaluate whether ADS deployments match policy aims.
- Privacy, civil‑liberties, and consumer‑advocacy groups — the inventory centralizes information that supports targeted advocacy and litigation to challenge or limit harmful ADS use.
- Procurement and compliance teams within covered agencies — having a centralized catalog helps identify redundant systems, compare alternatives, and justify procurement or de‑provisioning decisions.
- Researchers and auditors — access to structured descriptions, data categories, and mitigation measures lets independent evaluators assess performance and disparate‑impact risks at scale.
Who Bears the Cost
- Department of Technology — it must build and maintain the inventory, coordinate interagency work, and process submissions, creating a sustained administrative burden and demand for technical capacity.
- Covered state agencies (including CSU and DCA boards) — they must assemble technical documentation, efficacy research, and mitigation evidence for each in‑scope ADS, which will require staff time and possibly vendor cooperation.
- Vendors and system integrators — to the extent agencies rely on proprietary tools, vendors may be asked to disclose model inputs, performance metrics, and security controls, creating commercial and IP tension and possible negotiation costs.
- Program administrators for social‑services systems — the narrow exception still requires operational design (e.g., ensuring human review of negative outcomes), which can increase workflow and staffing costs for client eligibility teams.
Key Issues
The Core Tension
The bill pits the need for centralized transparency and evaluative capacity against operational burdens, commercial secrecy, and privacy and security risks: it seeks to expose enough information for oversight without crippling agencies’ ability to use tools that improve efficiency. It leaves open, however, how much disclosure is required and who will bear the cost of meaningful validation.
The statute creates a valuable centralized dataset but leaves several operational and legal gaps. First, the definitions are broad in places (e.g., what constitutes a system that “materially impacts” a person) and narrow in others (explicit exclusions for routine software); agencies and courts will likely need to interpret borderline cases.
Second, the bill requires documentation of “research assessing efficacy” and “alternatives,” but it does not specify methodological standards, burden of proof, or whether agencies must commission independent validation. That ambiguity could generate uneven compliance or incentivize minimal, checkbox‑style submissions.
Implementation also raises privacy and security tensions. The inventory will list categories of personal information used by systems but the bill does not require public release or specify how sensitive technical details and datasets will be protected.
Agencies and vendors may be reluctant to disclose details that reveal model behavior, training data, or security posture, and the statute does not establish confidentiality rules or review processes to balance transparency with risk. Finally, the presence of an inoperative/sunset cross‑reference with conflicting dates and the reliance on an external report‑submission statute (Section 9795) create drafting inconsistencies that could delay clear enforcement or complicate recordkeeping.