Codify — Article

AI Impersonation Prevention Act of 2025 bans AI impersonation of federal officials

Prohibits knowingly using AI to impersonate U.S. officers, with a satire carve-out and clear definitions.

The Brief

H.R. 4628, the AI Impersonation Prevention Act of 2025, would amend 18 U.S.C. 912 to make it a crime to knowingly use artificial intelligence to impersonate an officer or employee of the United States. The bill adds a new provision covering the use of AI to mimic the voice or likeness of a federal official and makes such impersonation punishable by a fine, imprisonment for up to three years, or both.

It preserves a First Amendment carve-out for satire or parody, as long as the content clearly discloses that it is not authentic. The act also defines artificial intelligence and impersonation, establishing a technology-aware basis for enforcement, and includes a severability clause so the rest of the act remains in effect if any part is struck down.

At a Glance

What It Does

The bill adds a new subsection to 18 U.S.C. 912 prohibiting knowingly using AI to impersonate a federal official, including by mimicking voice or likeness without an explicit disclaimer.

Who It Affects

It targets individuals who would impersonate officials, AI developers whose tools enable such impersonation, and platforms that host or disseminate content claiming to be official communications.

Why It Matters

It creates a clear criminal liability framework for AI-driven impersonation, addressing a growing risk as AI capabilities evolve, while preserving a constitutional carve-out for satire with explicit disclosure.


What This Bill Actually Does

The bill changes the criminal statute to address AI-generated impersonations of federal officers. It defines AI broadly to cover systems capable of producing human-like audio, video, or text, and it defines impersonation as falsely presenting oneself as another identifiable individual with the intent to mislead.

If someone knowingly uses AI to impersonate a federal official and produces content that is materially false or misleading, they can be fined or imprisoned for up to three years. The statute explicitly allows satire and parody, provided the creator clearly discloses that the content is not authentic and not intended to be taken as real.

A severability clause ensures the rest of the law remains in effect if a provision is invalidated. This creates a legal mechanism to deter AI-enabled deception while attempting to protect legitimate expressive content with a clear disclaimer.

The Five Things You Need to Know

1. The bill adds a new subsection (b) to 18 U.S.C. 912 prohibiting AI-based impersonation of federal officials.

2. AI is defined to include generative models capable of producing human-like audio, video, or text.

3. “Impersonates” means falsely representing oneself as another identifiable person with the intent to mislead.

4. Penalties include a fine, imprisonment for up to three years, or both.

5. There is a First Amendment carve-out for satire or parody with explicit disclosure.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1

Short title

This section designates the act’s short title as the AI Impersonation Prevention Act of 2025, the name by which the statute is cited.

Section 2

Prohibition on AI-based impersonation of Federal officials

This section amends 18 U.S.C. 912 by adding a subsection (a) establishing a general framework and a subsection (b) that prohibits knowingly using artificial intelligence to impersonate or falsely pretend to be an officer or employee of the United States. The prohibition expressly covers mimicking the voice or likeness of a federal official, and a violation is punishable by a fine, imprisonment for up to three years, or both. A carve-out preserves legitimate satire or parody that carries a clear disclaimer that the content is not authentic.

Section 3

Severability

Should any provision of this Act be deemed invalid, the remainder shall stay in force. This ensures that a partial invalidation does not collapse the entire measure and that enforceable provisions can stand independently.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Federal officials and agencies face reduced risk from impersonation attempts that could harm operations or mislead the public.
  • The general public benefits from decreased exposure to deceptive messages attributed to government actors.
  • Digital platforms hosting official content gain clearer liability standards and enforcement paths for removing or labeling impersonation content.
  • Investigators and prosecutors obtain a defined statutory basis for pursuing AI-driven impersonation cases.

Who Bears the Cost

  • Content creators and AI developers may incur compliance costs to ensure disclaimers or to avoid prohibited impersonation in outputs.
  • Platforms and service providers may incur costs to monitor, label, or restrict AI-generated content that resembles official communications.
  • Government agencies may face enforcement costs and the need to train staff on applying the new standard to AI impersonation cases.

Key Issues

The Core Tension

Balancing deterrence of deception with protections for speech and innovation: applying a criminal standard to AI-generated impersonations while avoiding undue restriction on satire or legitimate AI uses.

The bill creates a targeted response to AI-enabled impersonation by tying liability to knowingly using AI to imitate a federal official. While the satire carve-out is meant to protect expressive content, it hinges on an explicit disclosure that the content is not authentic, a standard that could complicate enforcement in fast-moving digital contexts.

The broad definitions of artificial intelligence and impersonation may raise questions about edge cases, such as ambiguous deepfakes or research and entertainment content that could be misinterpreted. Enforcement would hinge on proving intent to impersonate and knowledge that the impersonation is likely to be believed to be authentic, which can be challenging in practice.
