Codify — Article

California AB 502 limits election deepfakes, mandates disclosures, and creates private remedies

Targets AI-generated or manipulated audio/visual election ads during narrow pre- and post‑election windows, sets disclosure rules, and gives courts expedited injunctive and damages remedies.

The Brief

AB 502 responds to AI-enabled disinformation by prohibiting the malicious, knowing distribution of materially deceptive audio or visual election advertisements and communications that portray candidates, elected officials, elections officials, or election equipment in false ways likely to harm reputation or undermine confidence in results. The bill defines “materially deceptive content” (including deepfakes), sets a malice standard, and limits the prohibition to specified windows around elections.

The measure also creates a narrow labeling regime that lets a candidate portray themselves in manipulated media only if a prominent disclosure appears, establishes expedited civil remedies (injunctions, damages, attorney’s fees) with a clear-and-convincing evidentiary standard, and carves out exceptions for bona fide news, satire, and certain broadcasters and publishers—while preserving interactive computer service immunity under Section 230.

At a Glance

What It Does

Defines and bans the malicious distribution of digitally created or modified audio/visual media that would appear authentic to a reasonable person (deepfakes) when it materially misrepresents candidates, elections officials, elected officials, or voting equipment during narrow pre- and post‑election windows. It requires a prominent disclosure when a candidate portrays themselves in manipulated media and permits courts to order injunctive relief or award damages and attorney’s fees.

Who It Affects

Political committees, campaigns, paid advertisers, broadcasters and publishers that run election ads, and anyone who republishes manipulated election audio/visual content in California; candidates and elections officials are potential plaintiffs and, in limited circumstances, creators of permitted manipulated media.

Why It Matters

AB 502 creates a sector‑specific regime for AI-enabled disinformation in elections—blending content rules, labeling specifications, expedited civil enforcement, and narrow media exceptions—forcing advertisers and media operators to adopt disclosure workflows and raising new litigation and compliance questions for political communicators.


What This Bill Actually Does

AB 502 targets audio and visual media that have been digitally created or modified so that a reasonable person would believe the depiction is authentic, and that are likely to harm a candidate’s reputation or to undermine confidence in election outcomes. The statute ties liability to a mental state: the distributor must have known the content was false or acted with reckless disregard for the truth (the bill calls this “malice”).

The prohibition applies only during specific windows tied to California elections—short, defined periods when misleading content is especially likely to affect voter behavior.

The bill sets a path for lawful use of manipulated media in two important circumstances. First, a candidate may portray themselves in manipulated media provided the ad includes a clear, language‑appropriate disclosure saying “This image/audio/video has been manipulated,” or the disclosure mandated by another section of the Government Code.

AB 502 is unusually prescriptive about format: it requires readable fonts, minimum type sizes for different media (from video percentage heights to specific point sizes for mailers), contrast requirements, and audio disclosure timing for recordings over two minutes. Second, the statute preserves traditional news coverage and satire or parody (if reasonably understood as such or disclosed as manipulated), and grants exceptions to regular publishers and broadcasters that either properly disclose or show that federal law compels the ad.

For enforcement, the law grants affected individuals—depicted persons, candidates, committees, and elections officials—the right to seek injunctive and other equitable relief to stop distribution, and to recover general or special damages against distributors (but not broadcasters or internet websites that merely carried content they did not create).

Plaintiffs get attorney’s fees and actions receive precedence under California’s fast‑track civil calendar rules; however, the plaintiff must prove a violation by clear and convincing evidence. The bill also clarifies key definitions (e.g., “deepfake,” “materially deceptive content,” and “malice”), excludes trivial edits from the definition of deception, and preserves Section 230 immunity for interactive computer services.

The Five Things You Need to Know

1

The statute bars distribution of materially deceptive audio/visual election ads only when the distributor acted with malice—defined as knowing falsity or reckless disregard for the truth.

2

Time windows differ by target: manipulated candidate portrayals are prohibited starting 120 days before the candidate’s election; manipulated depictions of elections officials or voting equipment are prohibited from 120 days before through 60 days after an election.

3

A candidate may still distribute manipulated media of themselves if the ad contains a conspicuous disclosure—“This image/audio/video has been manipulated”—rendered in the statute’s required font, size, placement, and contrast rules (with differing specifications for video, print mailers, billboards, and graphics).

4

Relief includes expedited injunctive actions, general or special damages, and mandatory attorney’s fees for prevailing plaintiffs; plaintiffs must meet a clear‑and‑convincing evidence standard, and actions are entitled to precedence under California law.

5

Exceptions: bona fide news programs, routine periodicals, satire/parody (when obvious or disclosed), certain paid broadcasts that meet station disclaimer policies or federal requirements, and interactive computer services under Section 230 are not liable under this section.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 20012(a)

Findings and legislative purpose

The bill opens with findings framing generative AI as a novel election risk and declares California’s interest in preventing fabricated audio/visual content that could mislead voters or damage candidates. These findings justify the statute’s limited time windows and labeling requirements as narrowly tailored to protect election integrity, which courts will likely consider when reviewing constitutional challenges.

Section 20012(b)(1)–(3)

Core prohibition and required disclosures for candidate self‑portrayal

This subsection creates the core offense: knowingly distributing or republishing materially deceptive content that presents candidates, elected officials, elections officials, or voting‑related property in false ways likely to harm reputations or undermine confidence. It then creates a specific allowance for candidates: a candidate may portray themselves in manipulated media only if the communication includes a mandatory disclosure (with “image,” “audio,” or “video” filled in as applicable) and meets detailed formatting and timing rules. Paragraph (3) forbids removing required disclosures or republishing without them and treats such removal as evidence of intent.

Section 20012(c)

Temporal scope: election windows

Subdivision (c) sets the statute’s reach in time. Candidate‑targeted prohibitions generally trigger 120 days before an election in which the candidate is running. Content that targets elections officials or voting equipment has the broadest window—beginning 120 days pre‑election and extending through 60 days post‑election—reflecting the legislature’s aim to protect post‑election canvassing and confidence in results.

Section 20012(d)

Civil remedies, fees, and evidentiary standard

The bill authorizes depicted individuals, candidates or committees, and elections officials to seek injunctive or equitable relief and to recover general or special damages from distributors. Prevailing plaintiffs receive reasonable attorney’s fees and costs, and actions receive precedence under CCP Section 35. The plaintiff must prove the violation by clear and convincing evidence, a higher standard that will shape discovery and trial strategy.

Section 20012(e)

Media and content exceptions

AB 502 exempts bona fide newscasts, news interviews, documentaries, and certain commentary so long as the broadcast clearly disavows that the deceptive material represents actual events. It also shields routine periodicals that clearly label the content, preserves paid broadcast obligations where federal law applies or when stations provide buyers with policy requirements, and excludes obvious satire or properly disclosed parody from liability.

Section 20012(f)

Definitions and scope—what counts as a deepfake or materially deceptive content

The statute defines key terms: “deepfake” and “materially deceptive content” mean digitally created or modified audio or visual media that would falsely appear authentic to a reasonable person; trivial edits (brightness, minor audio cleanup) are excluded. It defines “broadcasting station” broadly to include streaming and satellite operators, while preserving interactive computer service immunity under Section 230(f)(2).

Section 20012(g)–(h)

Multilingual application and severability

The law applies regardless of language; disclosures must be in the language used in the ad. A severability clause preserves the rest of the section if any part is invalidated, signaling legislative intent that courts should attempt to preserve enforceable provisions rather than strike the entire measure.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Voters concerned about manipulated election content — the statute narrows the window and channels through which convincing AI‑generated deception can be distributed, making it easier to identify and stop high‑risk ads.
  • Candidates and depicted individuals targeted by fabricated media — they gain a clear cause of action for expedited injunctive relief, damages, and attorney’s fees to remove or challenge deceptive material.
  • Elections officials and election administrators — the law specifically protects officials and voting equipment depictions with a longer post‑election window to address misinformation that could undermine canvass and certification processes.
  • Mainstream newsrooms and legitimate publishers — by including explicit exceptions for bona fide news and routine periodicals, the bill protects standard reportage and investigative use of manipulated media when properly labeled.

Who Bears the Cost

  • Political committees and paid advertisers — they must implement vetting, labeling, and recordkeeping workflows to avoid liability under the malice standard and to ensure disclosures meet precise formatting rules across media channels.
  • Broadcasters and streaming services that accept paid political ads — they will need to maintain and provide disclaimer policies to buyers and may face operational burdens verifying disclosures and documenting compliance.
  • Smaller advocacy groups and independent creators — facing potential liability and litigation exposure, smaller operators may have to restrict certain rapid, low‑budget communications or pay for legal risk assessments.
  • California courts and litigants — expedited precedence, clear‑and‑convincing proof, and demands for fast injunctive relief will increase pressure on trial courts and require parties to marshal technical forensic evidence quickly.

Key Issues

The Core Tension

The central dilemma is balancing swift, effective protection of voters and election administration against the constitutional and practical costs of regulating political speech: strong rules and fast remedies reduce the spread of harmful deepfakes but risk chilling legitimate speech, burdening publishers and small communicators, and placing courts in the position of policing rapidly evolving technical provenance questions under tight deadlines.

AB 502 blends content regulation and consumer‑protection approaches in election contexts, which creates several implementation challenges. The malice standard (knowing falsity or reckless disregard) narrows liability but also shifts the enforcement burden onto plaintiffs to prove state of mind, creating discovery fights over internal publisher or advertiser communications and the provenance of a file.

The requirement that plaintiffs prove violations by clear and convincing evidence further raises the bar, meaning many harmful manipulated ads could go unremedied or be expensive to litigate. At the same time, the injunctive remedy and expedited precedence push courts into fast decisions that could effectively operate as prior restraints when factual disputes are close.

The bill’s technical disclosure rules (font families, point sizes, percentage heights, contrast rules, and specific audio timing) aim to eliminate ambiguity but may be awkward operationally across platforms—especially on social media and programmatic buys where dynamic ad rendering and localized language targeting are routine. The carveouts for broadcasters and routine publishers reduce the risk of chilling traditional journalism but create an uneven compliance landscape: broadcasters must show station policies or federal compulsion, while interactive computer services keep Section 230 immunity, leaving platforms with mixed incentives to moderate or remove content voluntarily.

Finally, the candidate self‑portrayal exception—which permits manipulated self‑depictions with a disclosure—could be gamed to normalize manipulation and increase the volume of borderline content, testing whether the disclosure rules are salient enough to prevent deception.
