Codify — Article

Protect Elections from Deceptive AI Act

Bans knowingly distributing materially deceptive AI-generated media about federal candidates before elections, with news and satire carve-outs.

The Brief

The Protect Elections from Deceptive AI Act adds a new provision to the Federal Election Campaign Act that prohibits knowingly distributing materially deceptive AI-generated audio or visual media about candidates for federal office prior to an election. The bill defines what counts as deceptive AI media and sets a civil remedies regime for individuals or entities that distribute such media with intent to influence an election or solicit funds.

It also carves out bona fide news coverage, certain publications, and satire, and it links violations to defamation law while preserving standard defamation defenses. The overall aim is to deter AI-driven deepfakes in the political arena while preserving legitimate journalism and satire, subject to specified disclosure requirements.

At a Glance

What It Does

Adds Section 325 to FECA, prohibiting knowingly distributing materially deceptive AI-generated media about federal candidates before elections. Defines deception, sets a pre-election prohibition, and creates civil remedies.

Who It Affects

Applies to any person, political committee, or entity distributing AI-generated media in connection with a Federal election activity; includes media distributors, advertisers, and campaigns.

Why It Matters

Addresses growing AI-driven manipulation risks in federal elections by establishing a clear prohibition, enforcement pathways, and definitional guardrails, including carve-outs for legitimate journalism and satire.


What This Bill Actually Does

The bill creates a targeted prohibition on AI-generated media that falsely represents a federal candidate’s speech or conduct. It defines what counts as deception and requires that the media be distributed with knowledge and intent to influence an election or solicit funds.

The law applies to individuals, campaigns, and other entities that distribute such media in the context of a federal election, but it carves out bona fide news broadcasts, newspapers, and satire, provided clear disclosures state that the material is not authentic. Remedies include injunctive relief and damages, and a violation is treated as defamation per se in appropriate cases.

The bill also includes a severability clause in case any provision is held invalid. For compliance purposes, the statute sets a high burden of proof (clear and convincing evidence) and a structured remedy framework for aggrieved candidates.

The practical effect is to deter the creation and spread of convincing political deepfakes while preserving legitimate journalism and creative expression under specified conditions.

The Five Things You Need to Know

1. The bill prohibits knowingly distributing materially deceptive AI-generated media about a federal candidate prior to an election.

2. Deceptive AI-generated media must appear authentic to a reasonable observer, judged in the context of the distribution channel.

3. There are carve-outs for bona fide news coverage with disclosures, and for satire or parody.

4. Civil remedies include injunctive relief, general or special damages, and attorney’s fees; proof is by clear and convincing evidence.

5. A violation constitutes defamation per se for purposes of defamation law.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 325

Definitions of key terms

Section 325(a) establishes the core terms, including ‘covered individual’ (a candidate for federal office) and ‘deceptive AI-generated audio or visual media,’ which means media produced with AI that either creates an appearance of authenticity or misrepresents a person’s appearance, speech, or conduct. It also defines ‘Federal election activity’ to align with existing FECA terms. These definitions set the baseline for what counts as prohibited conduct and who is protected under the statute.

Section 325

Prohibition on distribution

Section 325(b) prohibits knowingly distributing materially deceptive AI-generated media of a covered individual in connection with a federal election activity, with the intent to influence an election or to solicit funds. The prohibition is the central mechanism that creates civil liability for disseminators of such media and provides the enforcement hook for courts.

Section 325

Carve-outs and exceptions

Section 325(c) provides carve-outs for radio/TV broadcasters, newspapers or periodicals, and certain streaming services when the deceptive content is part of bona fide news coverage or public interest reporting and includes disclosures that the material is not authentic. It also preserves satire and parody as not actionable under the section when clearly labeled as such.

Section 325

Civil action and remedies

Section 325(d) authorizes injunctive or other equitable relief and allows general or special damages, plus attorney’s fees and costs for the party harmed by the deceptive media. The plaintiff bears the burden of proof by clear and convincing evidence. The provision aligns civil remedies with the seriousness of misrepresentation of a candidate’s appearance or conduct.

Section 325

Defamation and severability

Section 325(e) clarifies that a violation of the FECA provision is treated as defamation per se for defamation actions, reinforcing the seriousness of the misrepresentation. Section 325(f) provides a severability clause to ensure that if any provision is held invalid, the remainder still stands.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Federal candidates and campaigns gain protection against AI-generated misrepresentations harming reputations and campaign outcomes.
  • Voters benefit from reduced exposure to misleading media and greater clarity about what is authentic content.
  • Established news organizations and broadcasters benefit from clear carve-outs that permit legitimate reporting with disclosures.
  • Digital platforms and content distributors gain a concrete framework for compliance and content-moderation expectations.
  • Election officials and courts gain a defined enforcement mechanism and remedies to address deceptive media.

Who Bears the Cost

  • Distributors of deceptive AI-generated media, including individuals, campaigns, and groups that would face civil liability and damages.
  • Online platforms and media distributors who must implement monitoring, verification, and takedown processes to avoid liability.
  • Some news organizations and smaller publishers may incur compliance costs to ensure disclosures are visible and content is accurately labeled.
  • Campaigns or actors seeking to manipulate elections through AI-generated media bear immediate legal risk and potential financial liability.

Key Issues

The Core Tension

The central policy trade-off is between preventing AI-driven deception in elections and preserving free expression, legitimate journalism, and satire. Striking the line between a deceptive deepfake and protected commentary or parody—while ensuring enforceable accountability without stifling innovation—drives the bill’s most important questions.

The bill pairs the ban on deceptive AI-generated media with multiple guardrails to avoid chilling speech or sweeping too broadly. The definitions hinge on whether a reasonable observer would take the media as authentic given the distribution channel, a standard that will face scrutiny in fast-scrolling media environments.

The carve-outs for bona fide news coverage and satire acknowledge the importance of journalism and creative expression, but they rely on disclosures that are easily accessible to audiences. Enforcement is limited to civil actions with a high evidentiary standard (clear and convincing), which helps protect against frivolous lawsuits while still offering meaningful remedies.

The intersection with defamation law adds a strong deterrent by elevating violations to defamation per se in appropriate cases, but it also raises questions about overlapping remedies and proof standards across legal regimes.
