
Federal crime for AI-generated intimate deepfakes without consent (up to 5 years)

Creates 18 U.S.C. 1802 criminalizing production or distribution of AI- or software-generated intimate images of identifiable people made or shared without consent, with narrow exceptions and a reckless-disregard mens rea.

The Brief

The bill adds a new federal offense to Title 18 (18 U.S.C. 1802) that makes it a crime to produce or distribute a "digital forgery" of an identifiable individual's intimate visual depiction without that individual's consent, when done with reckless disregard. Convictions carry fines and up to five years' imprisonment. "Digital forgery" explicitly covers images created or altered through software, machine learning, or artificial intelligence, and includes adaptations of authentic depictions designed to appear authentic to a reasonable person.

The measure also sets narrow exceptions (law enforcement, legal proceedings, medical purposes, and certain reporting or investigatory uses), limits provider liability unless a communications service recklessly distributes content, and applies extraterritorially when either the perpetrator or the victim is a U.S. national. The bill therefore creates a federal criminal tool aimed at AI-enabled intimate-image abuse while leaving open difficult questions about proof, platform obligations, and legitimate expressive uses.

At a Glance

What It Does

The bill creates a new federal offense for producing or distributing a "digital forgery" of an identifiable person's intimate visual depiction without consent, subject to a reckless-disregard mens rea and a penalty of up to five years' imprisonment and fines. It requires an interstate-commerce nexus for domestic jurisdiction and applies extraterritorially when either the offender or the victim is a U.S. national.

Who It Affects

Independent creators, AI tool vendors, and anyone who produces or shares intimate-image deepfakes; online platforms and interactive computer services (defined by reference to the Communications Act and Section 230) that host such content; federal prosecutors and forensic investigators who would enforce the statute.

Why It Matters

This is a first-line federal criminal response to AI-enabled nonconsensual intimate imagery, tying liability to a reckless standard and carving out a provider safe harbor unless a platform recklessly distributes content. It intersects directly with existing communications law (Section 230 definitions) and raises operational questions for content moderation, forensics, and cross-border enforcement.


What This Bill Actually Does

The bill inserts a standalone criminal provision into federal law that focuses on intimate-image deepfakes. Rather than outlawing all deepfakes, it targets a specific category—intimate visual depictions of an identifiable person—and requires the government to prove the defendant acted with "reckless disregard" in producing or distributing the forgery without the individual's consent.

That mens rea sits below purposeful intent and knowledge, which means prosecutors can pursue actors who disregard a substantial risk that a depiction is forged or nonconsensual, such as distributors who fail to take reasonable steps to verify authenticity or consent, rather than only those who knew or intended the harm.

Definitions drive the scope. "Digital forgery" covers images generated or modified by software, machine learning, or artificial intelligence and specifically includes altering authentic images to make them appear genuine to a reasonable observer. "Consent" is defined as affirmative, conscious, competent, and voluntary authorization free from force, fraud, misrepresentation, or coercion, and it applies even if the subject is a public figure. These choices shape evidentiary battles: forensic testimony will be needed to show a depiction is forged, while consent inquiries will probe the context and communications around image creation or distribution.

The bill preserves several narrow exceptions (disclosures to law enforcement, use in legal proceedings, medical education/diagnosis/treatment, and certain reporting or investigations of unlawful or unwelcome conduct), so not every circulation of a deepfake triggers liability.

It also shields communications-service providers from liability for content created by others unless the provider itself recklessly distributes the content in violation of the new section, folding statutory terms from the Communications Act and Section 230 into the definition of covered services.

Jurisdictional rules require an interstate- or foreign-commerce connection for domestic prosecutions and extend the statute's reach abroad where either the offender or the victim is a U.S. national. Practically, that means U.S. prosecutors can target foreign actors who victimized U.S. nationals, but victims who are not U.S. nationals and whose abusers are abroad may fall outside federal reach.

The bill also makes a clerical amendment to the chapter's table of sections and includes a severability clause in case parts are struck down by courts.

The Five Things You Need to Know

1. New statute: the bill adds 18 U.S.C. 1802, making production or distribution of nonconsensual "digital forgeries" of intimate images a federal offense carrying fines and up to five years' imprisonment.

2. Mens rea: liability turns on acting with "reckless disregard," a standard below specific intent or knowledge that the image was forged.

3. Definition: "digital forgery" expressly includes images generated by AI, machine learning, or software, as well as alterations that would make a depiction appear authentic to a reasonable person.

4. Provider carveout: communications-service providers are exempt from the statute for third-party content unless the provider itself recklessly distributes content in violation of 1802; statutory terms are borrowed from sections 3 and 230 of the Communications Act and from 18 U.S.C. 2510.

5. Extraterritorial reach: the statute applies when either the person who committed the offense or the victim is a U.S. national, creating nationality-based jurisdiction over foreign actors who target U.S. nationals.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1

Short title

Provides the Act's name: "Protect Victims of Digital Exploitation and Manipulation Act of 2025." This is a formal provision with no operational effect on scope or enforcement; it simply establishes how the statute will be cited.

Section 2 — New 18 U.S.C. 1802(a)

Core offense and penalty

Establishes the substantive crime: knowingly producing or distributing, or causing to be produced or distributed, a digital forgery of an identifiable individual's intimate visual depiction, with reckless disregard for the absence of that individual's consent. Punishment is a fine, imprisonment of not more than five years, or both. The provision frames the offense as unitary: production and distribution both trigger liability, and the same penalty attaches to either act.

Section 2 — New 18 U.S.C. 1802(b)

Enumerated exceptions and provider liability standard

Lists exceptions where the statute does not apply: disclosures to law enforcement, materials used in legal proceedings, medical education/diagnosis/treatment, and reporting or investigation of unlawful content or unsolicited/unwelcome conduct. Separately, it creates a safety valve for communications-service providers: platforms hosting third-party content are not liable under 1802 unless they themselves recklessly distribute it. That reckless-distribution qualifier stops short of strict intermediary liability; platforms face exposure only when their own conduct can be characterized as reckless.

Section 2 — New 18 U.S.C. 1802(c)–(d)

Jurisdiction: commerce nexus and extraterritoriality

Subsection (c) requires that the forgery be produced or distributed using means affecting interstate or foreign commerce, ordinary federal jurisdictional language designed to capture online conduct. Subsection (d) extends application where either the offender or the victim is a U.S. national, creating nationality-based extraterritorial jurisdiction. This design lets prosecutors pursue some cross-border cases while excluding purely foreign disputes that lack a U.S.-national connection.

Section 2 — New 18 U.S.C. 1802(e)

Key definitions shaping scope

Defines critical terms: "consent" (affirmative, conscious, competent, voluntary, free from force, fraud, misrepresentation, or coercion, and applicable even to public figures); "digital forgery" (AI- or software-generated or manipulated intimate depictions that would appear authentic to a reasonable person); "identifiable individual" (a person who appears in whole or in part and is recognizable by facial features, unique marks, or accompanying information); and "intimate visual depiction" (drawing on 18 U.S.C. 2256's definitions covering uncovered genitals, sexual fluids, and sexually explicit conduct). The section also imports statutory meanings for "communications service" and "information content provider" from federal communications law, ensuring alignment with Section 230 terminology.

Section 2 — Clerical amendment & Section 3

Technical table update and severability

Adds the new section to the chapter's table of sections, keeping the statutory table accurate, and includes a severability clause so that if any part of the Act is held unconstitutional, the remainder stays in force. These are drafting safeguards, not substantive policy changes.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Survivors of nonconsensual intimate-image deepfakes: the statute creates a federal criminal avenue for prosecution and potential deterrence, particularly when the victim is a U.S. national or the actor's conduct touches interstate commerce.
  • Federal prosecutors and law enforcement: they receive a clear, specific statutory tool tailored to AI-enabled intimate-image abuse, including definitions that align with existing federal sexual-image definitions to streamline charging decisions.
  • Forensic and detection technology vendors: demand for tools that detect forgeries, authenticate images, and establish provenance will grow as courts and prosecutors require technical evidence that an image is forged or that a distributor acted recklessly.
  • Platforms seeking legal clarity: the provider carveout gives platforms a defined safe harbor unless they recklessly distribute content, enabling clearer internal policies about moderation and liability thresholds.

Who Bears the Cost

  • Interactive platforms and smaller hosting services: to avoid allegations of reckless distribution, they may need to invest in moderation, detection, and documentation systems, increasing operational costs and the risk of over-removal of borderline content.
  • Independent creators and AI developers: ambiguous boundaries around "reasonable person" authenticity and the reckless standard can expose developers and users of generative tools to criminal risk for benign or transformative uses, chilling innovation and experimentation.
  • Courts and public defenders: proving forgery, mental state, and consent will require technical evidence and expert witnesses, increasing litigation complexity, costs, and judicial resource burdens.
  • Journalists, artists, and researchers: the statute lacks explicit carve-outs for parody, satire, or scholarly uses beyond limited reporting exceptions, creating legal uncertainty for legitimate expressive and investigatory practices.

Key Issues

The Core Tension

The bill confronts a real harm, AI-enabled nonconsensual intimate imagery, by creating a federal criminal offense built on a lowered, reckless-disregard mens rea. That approach forces a trade-off: stronger victim protection and prosecutorial reach on one side; on the other, legal uncertainty for legitimate expression, heavier burdens on platforms to police content, and practical evidentiary hurdles in proving forgery and reckless conduct.

The bill ties criminal liability to a "reckless disregard" mens rea and relies on a "reasonable person" standard to assess whether a forged image appears authentic. Those two choices lower the threshold for prosecution compared with intent or knowledge standards, but they also inject ambiguity: what steps amount to reasonable verification before distribution, and how will courts instruct juries on recklessness in a fast-moving technological context?

Forensic proof that an image is a forgery will be central to prosecutions, and adversaries will contest the reliability of AI-detection tools and the provenance evidence platforms retain. Technical complexity means prosecutions will be evidence-heavy and expensive.

The statute's interplay with communications law is consequential but unsettled. By importing definitions from the Communications Act and Section 230, the bill signals an intent to preserve core intermediary definitions while exposing platforms to criminal risk only if they "recklessly distribute" content.

Yet that term is novel in the intermediary context and may push platforms toward proactive removals and overbroad takedowns to avoid culpability. The exceptions (reporting/investigation of unlawful or unwelcome conduct) are useful but vaguely worded; they could generate litigation over whether investigative journalism or cybersecurity research fits the carve-outs.

Finally, the nationality-based extraterritorial reach protects U.S. nationals but leaves non-U.S. victims with limited federal recourse, creating a partial solution to a global problem.
