Codify — Article

California bill adds AI deepfakes to extortion statute

Makes a threat to create, post, or distribute AI-generated images or videos an enumerated threat that can support an extortion charge, shifting new investigative and evidentiary demands onto prosecutors and platforms.

The Brief

AB 355 amends Penal Code §519 to add a new enumerated threat: a threat to post, distribute, or create AI-generated images or videos of another may induce the fear necessary to constitute extortion. The change brings explicitly AI-enabled fabrication and dissemination — commonly called "deepfakes" — within the list of threats that can support an extortion charge.

The amendment is narrow in scope: it does not alter the elements of extortion (the obtaining of property or an official act by wrongful use of force or fear) or change penalties, but it does broaden the kinds of threats prosecutors can point to when charging extortion. The bill also contains a fiscal clause stating no state reimbursement is required for local costs tied to the change.

At a Glance

What It Does

AB 355 adds subsection (f) to Penal Code §519 making a threat to post, distribute, or create AI-generated images or videos of another an enumerated form of fear that can constitute extortion. It leaves the remainder of the extortion statute, including elements and penalties, intact.

Who It Affects

Prosecutors and police investigating blackmail or coercion that involves manipulated or fabricated imagery; victims of deepfake blackmail (including sexual extortion); online platforms that host or remove content when subpoenaed; defense counsel confronting new technical evidence.

Why It Matters

The bill recognizes AI-generated content as a tool of coercion and gives prosecutors a clearer statutory basis to charge deepfake-enabled extortion. Practically, it creates new evidentiary and investigative demands — proving a credible threat, the AI origin of content, and traceability across platforms.


What This Bill Actually Does

California's extortion law already lists several kinds of threats that, if they induce fear, can support an extortion charge — threats to injure person or property, to accuse someone of a crime, to expose a secret, and to reveal immigration status. AB 355 adds a sixth item: a threat to post, distribute, or create AI-generated images or videos of another.

That addition treats the act of threatening to produce or disseminate fabricated audiovisual material the same way the statute treats threatening to expose a secret or accuse someone of wrongdoing.

The text is deliberately broad in three ways: it mentions posting and distribution (targeting dissemination), it mentions creation (targeting fabrication even if the content does not yet exist), and it uses the phrase "AI-generated images or videos" without defining technical characteristics such as realism, source, or method. The bill does not change the underlying elements prosecutors must prove for extortion — notably, the defendant's wrongful use of force or fear and the obtaining of property or an official act — but it supplies an explicit example of a threat that may induce fear.

Operationally, this change will push law enforcement and prosecutors into more technical territory.

Cases will require showing that the accused threatened to use AI tools and that the threat was credible enough to induce fear, which may mean subpoenas to platforms, digital forensics to identify who produced content or whether content was AI-generated, and witness statements about fear and coercion. The statute does not create new civil remedies, nor does it specify investigative resources, so the burden of implementation falls to existing local prosecutors and courts.

Finally, the bill includes a short fiscal clause.

The Legislative Counsel included language acknowledging a state-mandated local program, but Section 2 declares no state reimbursement is required under Article XIII B, pointing to statutory provisions that treat changes to crimes as not triggering reimbursement. In practice, this preserves the statutory change while leaving costs for training, forensic services, and platform requests to local budgets.

The Five Things You Need to Know

1

AB 355 adds subsection (f) to Penal Code §519 to list a threat to post, distribute, or create AI-generated images or videos of another as a threat that may induce fear sufficient for extortion.

2

The bill explicitly includes the word "create," so a threat to fabricate a deepfake (even if none yet exists) can support an extortion charge.

3

AB 355 does not alter the elements or penalties of extortion — it only expands the catalogue of threats courts may rely on when assessing whether fear was induced.

4

Section 2 of the bill states that no state reimbursement is required under the California Constitution, effectively leaving any local implementation costs with local agencies and prosecutors.

5

The statutory language uses the broad term "AI-generated images or videos" without defining technical thresholds (e.g., realism or provenance), creating potential evidentiary questions at trial.

Section-by-Section Breakdown


Section 1 — Amend Penal Code §519

Adds AI-related threats to the statute's list of fear-inducing threats

Section 1 revises the existing enumeration of threats that may constitute extortion by inserting a new item at the end of the list. It preserves the statute's existing structure and examples (injury, accusation, exposure of secrets, immigration status) but makes explicit that threats tied to AI-generated audiovisual content fall within the same legal category. For practitioners this is a textual expansion — prosecutors can now cite §519(f) as express statutory authority that this type of threat can supply the fear element.

Section 1(f) — AI-generated images or videos

Scope: post, distribute, or create — dissemination and fabrication both covered

Subsection (f) uses three verbs — "post, distribute, or create" — which cover both threats to circulate existing content and threats to fabricate content. That drafting choice means a defendant who threatens to fabricate a compromising deepfake can be treated the same as one who threatens to publish an already created image. The provision does not define "AI-generated," so proving the content's origin and the defendant's capacity or intent to produce it will be central practical issues in litigation and investigation.

Section 2 — Fiscal and reimbursement clause

Declares no state reimbursement required for local costs

Section 2 responds to the constitutional requirement about state-mandated local costs by declaring that no reimbursement is required under Article XIII B because the change alters the definition of a crime within the meaning of applicable law. The net effect is the legal recognition that local agencies may incur costs (training, forensics, caseload) but the state will not provide separate reimbursements for those costs under the cited constitutional provision.

Legislative Counsel's Digest (Digest Key)

Context and legal framing in the digest

The digest frames the amendment as an expansion of the extortion statute to include AI-enabled threats, and flags the bill as creating a local program for which the state typically would provide reimbursement. That context is procedural, but it underscores the legislative intent to target coercive uses of AI while relying on existing enforcement structures rather than creating a new enforcement body or funding stream.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Victims of deepfake or AI-enabled blackmail: The statutory change gives prosecutors a clearer, expressly enumerated basis to charge extortion when perpetrators threaten to fabricate or publish AI-generated images or videos, which may improve chances of criminal remedy and provide leverage in getting content removed.
  • Prosecutors and law enforcement units focused on cyber-enabled crimes: The bill supplies a specific statutory theory to attach to AI-based coercion, making charging decisions more straightforward and allowing prosecutors to point to §519(f) in pleadings and briefs.
  • Platforms and content hosts (indirectly): A clearer criminal statute can make platform compliance requests, subpoenas, and takedown actions more legally grounded and may speed removal of illicitly used content when tied to criminal investigations.
  • Organizations offering victim services and advocacy groups: The law expands the legal tools available to advocate for victims, potentially improving cross-agency cooperation (law enforcement, civil remedies, and platform support) in deepfake extortion cases.

Who Bears the Cost

  • Local law enforcement and prosecutors: They will absorb the operational costs of investigating AI-related extortion claims — digital forensics, subpoenas to platforms, training on AI evidence — without a dedicated state reimbursement.
  • Local courts and public defenders: An uptick in technical, contested prosecutions could increase caseload complexity and defense costs, requiring experts and longer proceedings.
  • Online platforms and intermediaries: Platforms may face more subpoenas, takedown requests, and urgent legal demands tied to extortion investigations; complying imposes moderation and legal-processing costs.
  • Civil liberties and legitimate AI creators: Broad, undefined language risks spillover where legitimate satire, parody, or lawful simulated content faces takedown or criminal scrutiny, imposing compliance and legal-review burdens on creators and platforms.

Key Issues

The Core Tension

The bill confronts a direct policy choice: protect people from a new, uniquely harmful form of coercion (fabricated audiovisual blackmail) by expanding criminal law, or avoid broad, vaguely worded criminalization that will shift heavy investigative burdens to under-resourced local actors and risk chilling legitimate speech and AI uses. There is no simple fix: narrow drafting would limit overreach but might leave victims unprotected; broad drafting protects more victims but creates evidentiary, fiscal, and constitutional headaches.

AB 355 is targeted — it does not change the elements or penalties of extortion — but its operational impact comes from breadth and evidentiary demands. The bill's failure to define "AI-generated" leaves courts and investigators to develop operational definitions through case law or forensic protocols.

That raises immediate questions: what technical standard will courts accept to prove a given image or video was AI-generated, and what chain-of-custody and provenance evidence will be sufficient to link a specific defendant to the threat? In practice, prosecutors will need digital forensics expertise and ready access to platform records, which can be costly and cross-jurisdictional.
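One concrete (and contestable) forensic signal is generator metadata embedded in the file itself: some text-to-image tools are known to write their generation parameters into PNG text chunks. The sketch below, using only Python's standard library, scans a PNG's chunks for such markers. The marker keys are illustrative assumptions, and real forensic work relies on far more than file metadata (provenance manifests, model artifacts, expert testimony), since metadata is trivially stripped or forged.

```python
import struct
import zlib

PNG_SIG = b'\x89PNG\r\n\x1a\n'

# Hypothetical marker keys; some generative tools write keys like these
# into tEXt/iTXt chunks, but this list is illustrative, not exhaustive.
GENERATOR_KEYS = (b'parameters', b'prompt', b'Software')

def find_generator_metadata(data: bytes) -> list:
    """Return tEXt/iTXt chunk payloads whose key matches a known marker."""
    if not data.startswith(PNG_SIG):
        return []
    hits, pos = [], len(PNG_SIG)
    while pos + 8 <= len(data):
        # Each PNG chunk: 4-byte length, 4-byte type, payload, 4-byte CRC.
        length = struct.unpack('>I', data[pos:pos + 4])[0]
        ctype = data[pos + 4:pos + 8]
        payload = data[pos + 8:pos + 8 + length]
        if ctype in (b'tEXt', b'iTXt'):
            key = payload.split(b'\x00', 1)[0]
            if key in GENERATOR_KEYS:
                hits.append(payload)
        if ctype == b'IEND':
            break
        pos += 12 + length  # advance past length + type + payload + CRC
    return hits

def make_chunk(ctype: bytes, payload: bytes) -> bytes:
    """Build a valid PNG chunk (used here to construct a synthetic sample)."""
    crc = zlib.crc32(ctype + payload) & 0xffffffff
    return struct.pack('>I', len(payload)) + ctype + payload + struct.pack('>I', crc)

# Synthetic sample: a minimal PNG-like byte string with one metadata chunk.
sample = (PNG_SIG
          + make_chunk(b'tEXt', b'parameters\x00portrait, photorealistic')
          + make_chunk(b'IEND', b''))
```

Even where such a marker is found, it shows only that the file claims a generative origin; linking a specific defendant to the threat still requires the chain-of-custody and platform-records evidence discussed above.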

The inclusion of the verb "create" is consequential but also legally awkward. It allows charges where the defendant threatens to fabricate a deepfake even if the content does not exist at the time of the threat; that helps victims threatened with future fabrication but also invites pretext arguments and contested proof about capability and intent.

The statute's broad phrasing risks chilling protected speech categories — hypothetical statements about making content, creative uses of generative models, and bona fide journalism — if platforms or courts treat the threat language expansively. Finally, the fiscal clause means localities shoulder implementation costs, concentrating resource pressure where sophisticated forensic capability may be least available, potentially producing uneven enforcement across the state.
