California SB 524 mandates disclosure, retention, and audit trails for AI‑generated police reports

Requires visible labeling and officer verification of AI‑assisted reports, retention of AI first drafts and audit records, and limits vendor reuse of law‑enforcement data.

The Brief

SB 524 obligates every California law enforcement agency to adopt policies ensuring that any official report drafted wholly or partly with artificial intelligence is clearly identified and signed by the officer who reviewed it. The bill requires agencies to keep the AI‑generated first draft as part of their records, maintain an audit trail tying the draft to the human user and any audio/video inputs, and restrict contracted vendors from sharing or using agency data except for the agency’s purposes or under court order.

The law creates operational obligations for agencies, IT teams, and vendors: labeling and signature practices, long‑term retention and storage of drafts and audit logs, and contract provisions limiting third‑party reuse while permitting limited vendor access for troubleshooting and system improvement. Those rules change how police reports are produced, preserved, and litigated, with implications for evidence integrity, privacy, and contracting with AI providers.

At a Glance

What It Does

SB 524 requires agencies to label any official report produced fully or partially with AI and to include a verifying officer signature that states the facts are true. It mandates retention of AI 'first drafts' and an audit trail identifying the person who used AI and any audio/video inputs, and it bars vendors from using agency data except for the agency’s purposes or under court order (with narrow exceptions for troubleshooting and model refinement).

Who It Affects

California state and local law enforcement agencies, officers who prepare reports, IT and records units responsible for retention and audit logging, contracted AI vendors supplying report‑generation tools, and legal actors who handle discovery and contest report accuracy (defense counsel, prosecutors, and oversight bodies).

Why It Matters

The bill builds procedural guardrails around a fast‑spreading use of generative AI in policing: it prioritizes traceability and human certification over automation alone, forces contract changes for vendors, and creates a permanent record trail that could shift how courts and oversight entities evaluate police narratives.


What This Bill Actually Does

SB 524 focuses on three operational goals: transparency, traceability, and limits on third‑party reuse. For transparency, the bill forces agencies to surface to any reader—on each page or in the report body—which specific AI system contributed to the narrative and to include an explicit notice that AI assisted the writing.

For traceability, it requires agencies to keep the AI‑produced first draft and an audit trail that links the draft to the human operator and to any video or audio the AI used. Those materials must be kept for the same duration the agency retains the signed official report.

On vendor use, SB 524 draws a tight circle: vendors may not share or commercialize law‑enforcement inputs and outputs except to serve the contracting agency or under a court order, although they are allowed to access data for troubleshooting, bias mitigation, accuracy improvements, or system refinement. That carve‑out preserves a narrow space for technical maintenance while blocking broader data monetization or cross‑use.

Practically, agencies must update internal policies, adjust evidence management and records retention schedules to include AI first drafts and audit logs, and change procurement and contract language to reflect vendor restrictions and permitted troubleshooting access.

Officers retain a central role: the final, signed 'official report' remains their certification that the facts are true, and the bill clarifies that AI drafts are not officer statements unless signed.

The statute also narrows what counts as covered 'artificial intelligence'—it targets systems that auto‑draft narratives from in‑car, dash, or body‑worn camera material or that convert dictated reports into narratives enhanced by generative AI. That scope excludes some other AI uses in policing, which means agencies will need to interpret whether a tool falls inside the statutory definition before applying the new procedures.

The Four Things You Need to Know

1

The bill requires the report to state on each page or in the text which specific AI program was used and to include the prominent sentence: “This report was written either fully or in part using artificial intelligence.” Officers must sign—physically or electronically—the official report, verifying they reviewed it and that its facts are true; the signed official report, not the AI draft, is the officer’s statement.

2

Agencies must retain the AI 'first draft' for as long as they keep the corresponding official report, and they must also preserve an audit trail that identifies the person who used the AI and any video/audio sources used.

3

Contracted vendors may not share, sell, or reuse agency data except for the contracting agency’s purposes or pursuant to a court order; vendors may, however, access data to troubleshoot, mitigate bias, improve accuracy, or refine the system.

4

The statute defines covered AI narrowly to systems that auto‑draft police narratives from in‑car/dash/body cameras or that transform dictated reports into generatively enhanced narratives, which leaves other AI tools outside the text’s literal scope.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Subdivision (a)

Mandatory labeling and officer verification for AI‑assisted reports

Subdivision (a) forces agencies to require that any official report produced in whole or in part by AI conspicuously identify the specific AI program and include a clear notice that AI was used; it also requires the preparing officer’s signature (electronic or physical) certifying they reviewed the content and attest to its truth. Practically, agencies must set formatting standards (where the disclosure appears on a page or in the body) and update signing workflows to capture the required verification without disrupting evidence handling or record creation.
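As a rough sketch of what such a labeling and signing workflow might capture, the fragment below stamps the statutory disclosure and records the officer's verification. The class, function, and program names are hypothetical illustrations, not anything specified by the bill:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Exact notice text required by subdivision (a).
DISCLOSURE = ("This report was written either fully or in part "
              "using artificial intelligence.")

@dataclass
class ReportLabel:
    ai_program: str                      # the specific AI system, e.g. a vendor/model name
    officer_id: str
    signed_at: Optional[datetime] = None  # set when the officer verifies the report

def label_report(body: str, label: ReportLabel) -> str:
    """Prepend the AI-program identification and the statutory disclosure."""
    header = f"AI program used: {label.ai_program}\n{DISCLOSURE}\n\n"
    return header + body

def sign_report(label: ReportLabel) -> ReportLabel:
    """Stand-in for an electronic signature: record when the officer verified."""
    label.signed_at = datetime.now(timezone.utc)
    return label
```

An agency's real workflow would hang this off its records-management system; the point is only that the disclosure text, the named AI program, and the verification event are captured together.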

Subdivision (b)

Retention of AI first drafts and status of drafts as non‑statements

Subdivision (b) obligates agencies to retain the initial AI‑produced draft for the same retention period as the signed official report and clarifies that, except for the signed official report, AI drafts do not count as an officer’s statement. This creates parallel record streams—official reports and preserved AI drafts—with implications for discovery, storage costs, and records management policies; agencies must ensure chain‑of‑custody practices and retention schedules explicitly account for these additional records.
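One way to operationalize the parallel retention rule is to derive the draft's retention deadline directly from the official report's schedule. The categories and periods below are hypothetical; actual schedules come from each agency or state records law, not from SB 524:

```python
from datetime import date

# Hypothetical retention periods by report category; real schedules vary
# by agency and by state records law.
RETENTION_YEARS = {"incident": 5, "arrest": 7}

def draft_retention_until(report_category: str, report_date: date) -> date:
    """SB 524 ties the AI first draft's retention to the official report's period,
    so both records share one computed deadline."""
    years = RETENTION_YEARS[report_category]
    return report_date.replace(year=report_date.year + years)
```

Keeping a single deadline for both records avoids the failure mode where the official report is retained but its first draft is purged early.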

Subdivision (c)

Audit trail requirements linking users and multimedia inputs

Subdivision (c) requires an audit trail kept for the life of the official report that at minimum records the person who invoked the AI and the video/audio footage used, if any. For compliance teams, that means adding logging at the point of AI invocation, storing metadata that links a user identity to a specific model run, and preserving pointers to the exact media files—work that touches IAM (identity access management), evidence storage, and vendor integration.
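A minimal sketch of such a log record, written at the point of AI invocation, might look like the following. The field names and serialization format are assumptions for illustration; the statute only requires that the user and any audio/video inputs be identifiable:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One line of the audit trail: who invoked the AI and on what media."""
    user_id: str       # identity resolved through the agency's IAM system
    ai_program: str    # the specific AI system used
    draft_id: str      # links this run to the preserved first draft
    media_files: list  # pointers to the exact audio/video evidence files
    invoked_at: str    # UTC timestamp of the model run

def log_invocation(user_id, ai_program, draft_id, media_files):
    """Serialize one invocation as an append-only JSON line for the audit log."""
    rec = AuditRecord(user_id, ai_program, draft_id, list(media_files),
                      datetime.now(timezone.utc).isoformat())
    return json.dumps(asdict(rec))
```

Because the record carries the draft ID and media pointers rather than the media itself, it can live in an append-only log while the footage stays in the evidence store under existing chain-of-custody controls.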

Subdivision (d)

Limits on contracted vendors’ reuse of agency data

Subdivision (d) bars third‑party vendors from sharing, selling, or otherwise using agency‑provided inputs except to serve the contracting agency or under court order, while allowing vendor access for troubleshooting, bias mitigation, accuracy improvement, or system refinement. Contracts will need explicit clauses detailing permitted vendor activities, data deletion or segregation requirements, and audit rights for agencies to verify vendors’ compliance with the limited‑use rule.

Subdivision (e)

Definitions that shape scope and compliance duties

Subdivision (e) narrows key terms: it defines the covered 'artificial intelligence' to systems that automatically draft police narratives from camera feeds or generatively enhance dictated reports; it defines 'contracted vendor', 'first draft', 'law enforcement agency', and 'official report'. These definitions determine which tools and vendors fall under the law, so procurement and legal teams must map existing systems to these statutory definitions to decide which policies apply.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Defense counsel and prosecutors — Gain access to preserved AI first drafts and audit trails that let them trace how a narrative was generated and test whether AI introduced errors or omissions.
  • Oversight bodies and auditors — Obtain clearer evidence of when AI influenced reports and the identities of human operators, improving the ability to investigate misconduct or systemic issues.
  • Members of the public seeking transparency — Benefit from explicit labeling and officer certification that differentiate human‑verified facts from AI‑generated text, making reports easier to interpret.

Who Bears the Cost

  • Local law enforcement agencies and records units — Face increased storage, logging, and policy‑writing burdens to retain drafts and audit trails and to manage new signature and disclosure processes.
  • IT and evidence‑management teams — Must implement logging, secure storage for multimedia and drafts, and integration points with vendor systems, increasing technical and operational workload.
  • Contracted AI vendors — Lose the ability to monetize or repurpose agency data and must accept contract obligations limiting reuse and permitting only narrow troubleshooting access, which may reduce product business models or require technical segregation.

Key Issues

The Core Tension

The bill forces a classic trade‑off: increasing transparency and human accountability for AI‑drafted police narratives versus imposing operational, technical, and contractual burdens that can slow adoption and raise costs—while leaving room for vendors to access data for improvement in ways the statute does not fully constrain.

SB 524 advances transparency but leaves practical tensions and open questions. The statute prescribes that first drafts be retained "for as long as the official report is retained," but it does not set a uniform retention period; agencies must reconcile this requirement with existing local or state retention schedules, which may vary widely and impose different costs.

The obligation to log the person who used AI and to preserve audio/video inputs will require technical changes to capture metadata at model invocation; absent standards, agencies risk inconsistent implementations that undercut the intended traceability.

The vendor‑use restrictions protect against data resale yet also allow vendor access for troubleshooting, bias mitigation, accuracy improvement, or system refinement—activities that often require data aggregation and model retraining. The statute does not specify controls (such as anonymization, aggregation thresholds, or deletion schedules) for that permitted access, creating a tension between preserving a vendor’s ability to maintain a dependable product and preventing function creep or downstream data reuse.

Finally, the law's narrow definition of covered AI leaves other assistive tools (analytics that flag footage for review, automated redaction, predictive models) outside its explicit scope, which may produce patchwork governance unless agencies proactively expand policies beyond the statute’s text.
