AB 1018 (Automated Decisions Safety Act) creates a statutory framework for automated decision systems (ADS) used to make “consequential decisions” affecting Californians. The bill defines covered ADS, requires developers to perform and retain impact assessments, and requires deployers to notify individuals, preserve records, and provide correction and appeal rights when an ADS influences outcomes that materially affect people.
The law also builds an enforcement structure: state and local public entities may sue for injunctions, declaratory relief, attorney fees, and civil penalties. The bill is designed to increase transparency and risk management for systems used in employment, housing, health care, finance, criminal justice, elections, and other essential services, while carving out targeted exemptions and limited trade-secret protections.
At a Glance
What It Does
The bill requires developers to evaluate and document the purpose, expected accuracy, risks of disparate treatment and disparate impact, and mitigation steps for covered ADS; requires third-party audits beginning in 2030; and obligates deployers to provide pre- and post-decision disclosures and correction and appeal processes, and to retain documentation for five years after use ceases.
Who It Affects
Developers who design or substantially modify ADS and organizations that deploy ADS for consequential decisions across employment, housing, healthcare, financial services, elections, and utilities will face new compliance duties; auditors, legal teams, and consumer-facing compliance functions will be directly involved.
Why It Matters
The act establishes a statewide baseline for algorithmic accountability in critical sectors, introduces civil penalties up to $25,000 per violation, and creates a model for balancing transparency with trade-secret claims—potentially influencing procurement, vendor contracts, and product design choices for AI systems used in regulated contexts.
What This Bill Actually Does
The statute starts by defining key terms—what counts as “artificial intelligence,” an “automated decision system,” and a “consequential decision.” Those definitions drive scope: an ADS becomes “covered” when it profiles a natural person to make or facilitate decisions that materially affect cost, terms, access, or quality of employment, housing, healthcare, finance, education, elections, utilities, and other essential services. The text also narrows certain routine technologies (spam filters, spreadsheets) out of scope and exempts a few categories such as cybersecurity tools, aircraft operations, and military-only systems.
Developers of covered ADS must run impact assessments that describe purpose and approved uses, estimate accuracy and reliability, flag any intended or likely disparate treatment or disparate impacts, and explain mitigation measures. For systems already distributed before 2026, developers must complete an initial assessment by the specified deadline and then update annually; systems introduced after that date require an assessment before deployment and annual reassessments thereafter.
The bill also requires developers to contract with independent auditors starting in 2030 and to preserve unredacted documentation while the system is in use plus an additional retention period.

Deployers—entities that actually use ADS to make consequential decisions—must provide clear, plain-language disclosures to people affected, both before finalizing a decision and after a decision is made. Post-decision disclosures must include what attributes and sources of personal information were used, any key parameters that disproportionately influenced the outcome, whether a human reviewed the ADS outputs, and instructions for exercising correction and appeal rights.
Deployers also must respond to correction and appeal requests within set timeframes and, in some cases, rectify decisions when errors are found. If a deployer’s use reaches certain scale thresholds, or the deployer substantially modifies a system, the deployer assumes the developer’s responsibilities under the chapter.

Enforcement is public-entity driven: the Attorney General and certain state and local public prosecutors, the Civil Rights Department, and the Labor Commissioner (for employment cases) may sue for compliance.
Remedies include injunctive and declaratory relief, recovery of fees, and civil fines—calculated per violation with several discretionary factors. The bill preserves other statutory rights and makes clear it does not authorize uses that are otherwise prohibited, while also providing limited trade-secret redaction rights for developers and deployers when sharing documentation.
The Five Things You Need to Know
The bill makes a system “covered” when it profiles an individual to make or facilitate a consequential decision across enumerated sectors such as employment, housing, health care, finance, criminal justice, and elections.
Developers must complete impact assessments before initial deployment for systems introduced on or after January 1, 2026, and complete an initial assessment and then annual updates for pre-2026 systems by the statutory deadlines.
Beginning January 1, 2030, developers must contract with independent third-party auditors and complete an audit within six months of the most recent impact assessment and then every two years after the initial audit.
Deployers must provide pre-decision and post-decision plain-language disclosures, respond to correction or appeal requests within 30 business days, and retain unredacted documentation for five years after they stop using the covered ADS.
Public enforcement entities may seek civil penalties up to $25,000 per violation, and courts may weigh factors like willfulness, harm, and the number of violations when setting fines.
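Because penalties are assessed per violation, total exposure depends heavily on how violations are counted. As a rough illustration only—not legal advice—the sketch below multiplies a hypothetical violation count by the statutory ceiling; the assumption that each non-compliant decision (or day of use) counts as a separate violation is ours, since the bill leaves that counting question to courts.

```python
# Illustrative penalty-exposure arithmetic for AB 1018. The $25,000 ceiling
# comes from the bill; the per-day/per-decision counting model is an assumption.
MAX_PENALTY_PER_VIOLATION = 25_000  # statutory maximum civil penalty

def max_exposure(violations: int, per_violation: int = MAX_PENALTY_PER_VIOLATION) -> int:
    """Upper bound on civil penalties if every violation drew the maximum fine."""
    return violations * per_violation

# E.g., 90 days of daily ADS use without a completed impact assessment,
# if a court were to count each day as a distinct violation:
print(max_exposure(90))  # -> 2250000
```

In practice, courts would weigh the discretionary factors listed in the bill (willfulness, harm, number of violations) rather than mechanically applying the ceiling.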
Section-by-Section Breakdown
Scope and definitions
This section builds the statutory vocabulary: it sets the meaning of “automated decision system,” “covered ADS,” “consequential decision,” “developer,” and “deployer,” and lists the sectors that trigger coverage. Practically, the definitions determine whether an organization must comply; the profile-based trigger (use of personal information to output a score, classification, or recommendation that influences a decision) is a functional test aimed at capturing predictive, individualized systems while excluding general-purpose data tools. The more nuanced questions—what counts as a material impact or reasonable foreseeability—will fall to implementing guidance and litigation to resolve in borderline cases.
Developer impact assessments, audits, and documentation
This provision requires developers to document purpose, approved uses, expected accuracy and reliability, and both intended and reasonably foreseeable disparate treatment and disparate impacts, including effects of fine-tuning. It imposes annual reassessments and—starting in 2030—mandatory third-party audits with access to necessary information (subject to narrow trade-secret redactions). The practical implication: vendors must build auditability into development pipelines, preserve audit trails, and be prepared to justify design choices and mitigation strategies to auditors and downstream deployers.
Deployer notice, correction, appeal, and developer-assumption rules
Deployer obligations focus on transparency to individuals and operational accountability: pre-decision notices (with model identity, assessed attributes, and key parameters) and post-decision disclosures (with sources used, role of human review, and contact information), plus a 30-business-day window to handle correction and appeal requests. The section also contains the scale-based trigger by which a deployer that impacts more than 6,000 people over three years—or that substantially modifies a system—must assume developer responsibilities. For procuring entities, this creates a material incentive to demand fuller documentation from vendors or to accept expanded compliance duties themselves.
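The scale-based trigger described above is simple enough to model. The sketch below is an assumption-laden illustration, not a legal determination: the field names, and the choice to count distinct individuals over a rolling three-year window, are ours; the 6,000-person threshold and the substantial-modification trigger come from the bill.

```python
from dataclasses import dataclass

SCALE_THRESHOLD = 6_000  # individuals affected, per the bill's scale trigger
WINDOW_YEARS = 3         # measurement window described in the section

@dataclass
class DeployerUsage:
    people_affected_in_window: int  # distinct individuals over the 3-year window (assumed counting method)
    substantially_modified: bool    # deployer altered the covered ADS itself

def assumes_developer_duties(u: DeployerUsage) -> bool:
    """True if either trigger would shift developer responsibilities onto the deployer."""
    return u.people_affected_in_window > SCALE_THRESHOLD or u.substantially_modified

print(assumes_developer_duties(DeployerUsage(7_200, False)))  # -> True (scale trigger)
print(assumes_developer_duties(DeployerUsage(500, True)))     # -> True (modification trigger)
print(assumes_developer_duties(DeployerUsage(500, False)))    # -> False
```

A procuring entity tracking toward the threshold faces exactly the choice the section describes: demand developer-grade documentation from the vendor now, or plan to absorb those duties itself.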
Attorney General access and public-records carve-out
The Attorney General may request unredacted impact assessments, and the bill obligates developers, deployers, and auditors to provide them promptly; those assessments are exempt from the California Public Records Act. That combination gives the AG broad investigatory reach while limiting public disclosure of underlying materials, preserving a degree of confidentiality for sensitive information while enabling enforcement reviews.
Enforcement authorities and remedies
This section enumerates public entities with standing to sue (including the AG, certain district and city attorneys, the Civil Rights Department, and the Labor Commissioner for employment matters) and lists remedies: injunctive and declaratory relief, attorney’s fees, and civil penalties up to $25,000 per violation. The statute also makes principals liable for third-party contractors’ failures when duties are delegated, increasing the legal risk in vendor relationships.
Exemptions and preemption
The bill excludes specific ADS uses—cybersecurity, aircraft operations, military-only systems, fraud detection for electronic payments, certain physical-security systems, and advertisements—and confirms it does not apply where federal law preempts regulation. It also clarifies that using a consumer credit score alone doesn’t trigger obligations. These carve-outs narrow practical application and will be critical in procurement and compliance assessments.
Savings clauses and interaction with other law
The final section preserves other statutory protections and remedies; it states the chapter does not authorize uses that other laws prohibit and does not change civil-rights standards. Practically, regulated entities must manage ADS obligations alongside existing state and federal requirements, and cannot rely on this chapter as a shield against other legal duties.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- People subject to consequential decisions — they gain clearer, actionable information about how ADSs affected decisions and a structured right to correct data and appeal outcomes, improving opportunities to remedy errors.
- Civil-rights and consumer-protection organizations — the law creates documentation and audit pathways that strengthen investigations into systemic bias and disparate impacts and provides public enforcement routes.
- Responsible developers and vendors — firms that build auditability, clear documentation, and remediation processes can use compliance as a market differentiator in public and private procurement.
Who Bears the Cost
- Developers of ADS — they must invest in impact-assessment processes, maintain records, support audits (including sharing sensitive materials), and respond to downstream inquiries, raising development and legal costs.
- Deployers (government agencies, employers, lenders, housing providers, healthcare providers) — they must implement notification workflows, correction and appeal mechanisms, designate compliance officers, and potentially assume developer duties if scale or modification thresholds are met.
- Auditors and compliance teams — independent auditors, in-house compliance staff, and legal counsel will bear additional workload and liability exposure; third-party contractors performing delegated duties leave principals exposed to joint liability.
Key Issues
The Core Tension
The bill forces a trade-off between two legitimate objectives: maximizing individual transparency and safety for high-stakes automated decisions, versus protecting developers’ intellectual property and enabling rapid AI innovation. The statute tries to thread this needle with limited trade-secret redactions, phased auditing, and exemptions, but those same compromises create enforcement ambiguity and compliance friction that will divide reasonable observers about whether the law protects Californians or unduly burdens responsible AI development.
The act pushes transparency and risk management into AI development and procurement chains, but it leaves multiple implementation questions. Key definitional phrases—“materially impacts,” “reasonably foreseeable,” and what constitutes a “substantial modification”—are open to interpretation and will determine the statute’s effective reach.
Those ambiguities create compliance difficulties: vendors must calibrate conservatively (which can chill functionality) or risk later enforcement. The provision allowing trade-secret redactions during audits and disclosures mitigates commercial exposure but also shifts discretion to developers about what gets redacted and how much auditors and regulators can rely on redacted materials when assessing risk.
Operationally, the law sets heavy documentation, retention, and responsiveness duties (including specific disclosure and 30-business-day correction windows) that will be onerous for organizations lacking mature data governance. The scale trigger that pushes deployers into developer responsibilities creates a procurement choice: require comprehensive developer documentation up front, or accept the burden of further obligations when use grows.
Enforcement design—public entities bringing civil suits with per-violation fines—creates incentives for careful compliance but also raises questions about how a “violation” is counted in continuous or multi-factor workflows (e.g., daily use without a submitted impact assessment increases exposure per day). Finally, interaction with federal regimes and sector-specific rules (notably GLBA) may produce preemption or dual-compliance regimes; regulated entities will need cross-functional legal and privacy reviews to reconcile obligations.