Codify — Article

California AB 1898: Mandatory transparency for workplace AI and surveillance

Requires employers to disclose detailed information about AI‑driven decisions and surveillance, secure worker acknowledgment, keep an inventory, and face enforceable remedies—reshaping HR, procurement, and vendor relationships.

The Brief

AB 1898 establishes a statutory framework in California that centers worker-facing transparency for the use of artificial intelligence and surveillance technologies in the workplace. The bill defines key terms—like “artificial intelligence,” “automated decision system,” and “workplace surveillance tool”—and assigns responsibilities to employers, including public employers and labor contractors.

The measure matters because it converts disclosure obligations into enforceable duties with administrative and civil remedies. That combination makes the bill operationally significant for HR, procurement, compliance teams, labor representatives, and AI vendors that supply or configure workplace systems.

At a Glance

What It Does

Sets out employer obligations around workplace AI and surveillance: written, plain‑language communications to affected workers; a signed acknowledgment requirement; an annual inventory of tools; and a specified list of information employers must disclose about each tool. The law pairs those obligations with enforcement power for the Labor Commissioner, public prosecutors, and a private right of action.

Who It Affects

Applies to virtually any California employer that controls wages, hours, or working conditions, including state and local government entities and labor contractors. It also implicates third‑party AI vendors, HR and legal teams, unions, and worker privacy officers.

Why It Matters

It forces employers to operationalize transparency around model identity, data collection and retention, quotas and automation plans, and risk assessments—items that will change procurement terms, vendor disclosure practices, and day‑to‑day supervision of workers.


What This Bill Actually Does

AB 1898 organizes around two practical objectives: define the covered technologies and make employers tell workers how those technologies will touch their jobs. The bill’s definitions are broad: an “automated decision system” covers machine‑learning or statistical processes that produce scores, classifications, or recommendations used to assist or replace human decisions and that materially impact people; “workplace surveillance tool” captures non‑human methods of collecting worker data such as video, audio, geolocation, biometrics, and continuous time‑tracking.

The statute explicitly excludes basic IT security tools and simple data compilations from the definition of automated decision systems.

On procedure, the bill requires employers to provide a standalone, plain‑language notice in the language used for routine workplace communications. The bill specifies when that notice is due: a fixed advance‑notice period before a tool’s first deployment, a deadline for disclosing tools already in use, and notice to new hires.

The notice must be acknowledged in writing by affected workers; employers may not operate the covered tool affecting a worker until the worker returns the signed acknowledgment. Employers also must maintain and provide to workers an annually updated inventory listing all workplace AI tools in active use.

The notice must contain a detailed set of disclosures.

Among other things, the notice must state:

  • the purpose and justification for the tool and which employment decisions it may affect;
  • the categories and frequency of worker data collected, and how long that data will be stored;
  • a plain‑language description of inputs, analysis, and outputs;
  • who can access the data and whether it will be sold or transferred;
  • the tool creator and specific model name;
  • the locations and activities surveilled and the technology used;
  • any quotas the tool measures, with quantified targets and consequences;
  • whether the tool will replace jobs or tasks, and an expected timeline;
  • training provided to managers and workers; and
  • the results of any risk assessments conducted under California’s consumer privacy law.

Enforcement is multi‑track. The Labor Commissioner may investigate, issue citations, order temporary relief, and bring civil actions using existing labor enforcement procedures referenced in the statute.

Public prosecutors may also enforce the law. Separately, any worker or their exclusive representative may sue for damages, including punitive damages; plaintiffs can seek injunctive relief, attorneys’ fees, and costs.

The bill authorizes a statutory penalty of up to $500 per employee per violation, but it bars double recovery of statutory and civil penalties for the same violation. Finally, the statute preserves local ordinances that provide equal or greater employee protections.

The Five Things You Need to Know

1

The bill requires employers to obtain a signed acknowledgment from affected workers and forbids using a workplace AI tool on a worker until that worker returns the signed notice.

2

Employers must give advance notice before first deploying a covered tool, disclose existing tools by a statutorily set deadline, and provide notice to new hires.

3

The notice must list the tool’s creator and the specific model name and describe inputs, outputs, data retention, who can access or transfer the data, quotas and their consequences, and any planned job automation and timelines.

4

Employers must maintain an annually updated list of all workplace AI tools in use and provide that list to workers each year.

5

Enforcement includes Labor Commissioner investigations and citations, public prosecutor enforcement, and a private right of action for workers; remedies include injunctive relief, punitive damages, attorneys’ fees, and up to $500 per employee per violation as a statutory penalty.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1600

Definitions and scope

This section sets the boundaries of the law by defining core terms: “artificial intelligence,” “automated decision system,” “workplace surveillance tool,” “worker,” “worker data,” and “employer.” The definitions are intentionally broad—covering machine learning, statistical models, and passive surveillance methods—and they expressly include state and local governmental employers and labor contractors. The definition of automated decision system excludes common cybersecurity and basic data tools, which narrows exposure for routine IT operations but leaves many analytics and ML systems squarely covered.

Section 1601(a)–(e)

Notice timing, acknowledgment, and inventory

This provision creates the sequence employers must follow: deliver a standalone, plain‑language notice in the workplace’s routine language; require a signed acknowledgment from affected workers; and withhold use of the tool on a worker until that acknowledgment is returned. It also obligates employers to provide notice to new hires and to keep an annual, updated inventory of active workplace AI tools. Practically, employers will need workflows tying procurement and vendor onboarding to notice drafts, HR acceptance processes to acknowledgment collection, and compliance checks verifying the annual inventory.
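The gating sequence described above (notice, then signed acknowledgment, then tool use) can be sketched as a simple compliance check. This is an illustrative model only; the class and identifiers below are hypothetical, not statutory terms:

```python
from dataclasses import dataclass, field

@dataclass
class AcknowledgmentLog:
    """Tracks which workers have returned a signed notice for each tool.
    All names here are illustrative, not drawn from the bill's text."""
    signed: dict = field(default_factory=dict)  # (worker_id, tool_id) -> True

    def record_signature(self, worker_id: str, tool_id: str) -> None:
        # HR records that the worker returned the signed acknowledgment.
        self.signed[(worker_id, tool_id)] = True

    def may_use_tool_on(self, worker_id: str, tool_id: str) -> bool:
        # The bill bars operating a covered tool on a worker until that
        # worker has returned the signed acknowledgment.
        return self.signed.get((worker_id, tool_id), False)

log = AcknowledgmentLog()
assert not log.may_use_tool_on("w1", "scheduling-tool")  # no signature yet
log.record_signature("w1", "scheduling-tool")
assert log.may_use_tool_on("w1", "scheduling-tool")
```

In practice this kind of gate would sit inside HR onboarding and vendor-provisioning systems, so that enabling a tool for a worker is blocked until the acknowledgment record exists.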

Section 1601(f)

Required disclosure content

Subsection (f) enumerates detailed disclosure elements that must appear in the notice, from the tool’s purpose and affected employment decisions to granular technical and operational facts: the categories and frequency of data collection, data storage details, a plain‑language description of inputs/analysis/outputs, model creator and model name, access/sharing of data, surveillance locations and technologies, quantified quotas and adverse consequences for failing them, automation timelines, training provided, and the results of any CCPA‑related risk assessments. That list forces employers and vendors to reconcile transparency with trade‑secret and security concerns and to document technical materials in worker‑digestible language.
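Compliance teams will likely reduce subsection (f) to a checklist against which draft notices are validated. A minimal sketch of that approach follows; the field names are paraphrases of the statute's enumerated elements, not official labels:

```python
# Illustrative checklist of the notice elements enumerated in subsection (f).
# Keys are paraphrases, not statutory language.
REQUIRED_DISCLOSURES = {
    "purpose_and_justification",
    "affected_employment_decisions",
    "data_categories_and_frequency",
    "data_retention_period",
    "inputs_analysis_outputs_description",
    "data_access_and_transfers",
    "tool_creator_and_model_name",
    "surveilled_locations_and_technology",
    "quotas_targets_and_consequences",
    "automation_plans_and_timeline",
    "training_provided",
    "privacy_risk_assessment_results",
}

def missing_disclosures(notice: dict) -> set:
    """Return required elements that are absent or empty in a draft notice."""
    return {k for k in REQUIRED_DISCLOSURES if not notice.get(k)}

draft = {"purpose_and_justification": "Shift-scheduling optimization"}
assert "tool_creator_and_model_name" in missing_disclosures(draft)
```

A validation step like this, run before a notice is distributed, gives procurement and HR a concrete artifact to audit against the statute's list.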

Section 1602(a)–(d)

Enforcement mechanisms and remedies

This section gives the Labor Commissioner primary enforcement authority, with the power to investigate, issue citations, order temporary relief, and file civil actions using the commissioner’s established procedures. It also permits public prosecutors to enforce the statute and creates a private right of action for workers and their exclusive representatives that includes damages (including punitive), injunctive relief, and attorneys’ fees. For employers, that means exposure to administrative enforcement and parallel civil litigation risk.

Section 1602(e)–(g)

Penalties, venue, and local ordinances

The statute caps statutory penalties at $500 per employee per violation (subject to the rule that plaintiffs cannot collect both statutory and civil penalties for the same violation). It specifies venue options and explicitly does not preempt city or county ordinances that provide equal or greater employee protections. Those choices create a mix of remedies that can produce localized enforcement intensity depending on municipal laws and prosecutorial priorities.
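As a rough illustration of how the per‑employee cap scales, aggregate statutory exposure is bounded by $500 multiplied by affected employees and distinct violations; actual recovery would be lower where the no‑double‑recovery rule applies. A hedged sketch:

```python
PENALTY_CAP_PER_EMPLOYEE_PER_VIOLATION = 500  # dollars, per the statute

def max_statutory_exposure(employees: int, violations: int) -> int:
    """Upper bound on aggregate statutory penalties. Real exposure may be
    lower: the bill bars collecting both statutory and civil penalties
    for the same violation."""
    return PENALTY_CAP_PER_EMPLOYEE_PER_VIOLATION * employees * violations

# Hypothetical: 2,000 affected employees and 3 distinct violations.
assert max_statutory_exposure(2000, 3) == 3_000_000
```

The arithmetic shows why the article notes that a $500 cap can still aggregate into substantial exposure for large workforces.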

Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Frontline workers who will receive clearer, standardized information about how AI and surveillance affect hiring, assignments, performance metrics, discipline, and job security—enabling informed responses and potential legal recourse.
  • Labor unions and exclusive representatives that gain a formalized transparency lever to negotiate over surveillance, quotas, automation timelines, and privacy protections during collective bargaining.
  • Workplace privacy and compliance officers who acquire a legally specified checklist to guide audits, vendor contracts, and disclosures—reducing ambiguity about what to collect and communicate.

Who Bears the Cost

  • Employers (private and public), which must build or rework procurement, HR onboarding, and recordkeeping processes, prepare plain‑language disclosures, train managers, and absorb legal and administrative compliance costs.
  • Third‑party AI vendors and integrators, who may need to disclose model names, architectures, and risk assessments that they treat as confidential, and who will likely face more rigorous contractual terms and indemnities.
  • Small businesses and resource‑constrained public entities, which will shoulder fixed compliance costs (drafting notices, collecting acknowledgments, maintaining inventories) that may be disproportionate to their use of AI systems and could slow adoption of productivity tools.

Key Issues

The Core Tension

The bill frames a genuine dilemma: it advances worker control and transparency by forcing employers to disclose how AI shapes jobs, quotas, and surveillance, but it does so at the cost of operational flexibility and vendor confidentiality—creating pressure on procurement, onboarding, and innovation. Lawmakers trade secrecy for accountability; the question is whether that trade produces safer, fairer workplaces or simply shifts costs and litigation risk onto employers and suppliers without clear operational standards.

The bill trades transparency for potential operational friction. Requiring model names, risk assessment results, and data‑sharing details improves worker visibility but collides with vendors’ intellectual property and security claims; those conflicts will push disclosure negotiations into vendor contracts and could raise procurement prices or limit vendor willingness to provide detailed technical material.

The statutory requirement that employers withhold use of a tool until a worker signs the acknowledgment gives workers an effective veto over immediate deployment, but it also creates a clear operational choke point: employers must design onboarding and contingency plans to avoid productivity gaps.

Ambiguities and implementation headaches persist. The statute ties risk‑assessment disclosure to the state’s consumer privacy law, but it does not specify how to reconcile competing confidentiality obligations or how frequently disclosures must be updated when vendors change models or retrain systems.

The law’s expansive definitions—“materially impacts,” “reasonably linked,” and what counts as passive surveillance—invite litigation to draw boundaries. Finally, the $500 per‑employee penalty is meaningful for large workforces but may be small relative to litigation costs or compliance expense for both plaintiffs and defendants; conversely, aggregated statutory penalties could be substantial in class‑style suits, incentivizing settlements and aggressive enforcement.
