AB 621 makes it a civil wrong in California to create or disclose digitized sexually explicit material—commonly called deepfake pornography—portraying a person who did not consent, or portraying a minor. The bill defines key terms (consent, digitization, deepfake pornography service), presumes that operators of such services knew consent was absent unless they produce express written consent, and creates a mechanism for victims and public prosecutors to seek damages and injunctive relief.
The law also targets third-party providers that enable the ongoing operation of deepfake services: after a specified notice process, those enablers face liability if they do not stop providing services within a 30-day window (subject to limited judicial extension). AB 621 includes specific remedies and statutory penalties, preserves constitutional and law-enforcement exceptions, and carves out conduct protected by federal law, including Section 230 and a narrow internet-service-provider conduit safe harbor.
At a Glance
What It Does
Creates a private cause of action and public-enforcement remedy for creation, disclosure, or facilitation of nonconsensual digitized sexually explicit material; defines consent and digitization; presumes operators of deepfake pornography services knew consent was absent unless express written consent is produced. It requires enabling service providers to stop supporting offending services within 30 days after a specified notice and evidence process or face liability.
Who It Affects
Operators of websites and apps whose primary purpose is creating explicit digitized imagery, cloud hosts, registrars, payment processors, CDNs and other service providers that enable those sites, and attorneys or public prosecutors bringing enforcement actions on behalf of depicted individuals.
Why It Matters
The bill establishes a civil enforcement framework that shifts evidentiary burdens and creates direct financial incentives (statutory damages, disgorgement, civil penalties) to remove and deter nonconsensual deepfake pornography—potentially forcing infrastructure providers to act on reports or face litigation exposure.
What This Bill Actually Does
AB 621 starts with definitions: “consent” must be a plain-language, written agreement that describes the digitized sexually explicit material and the visual or audiovisual work where it will appear; the bill also allows a depicted individual to rescind consent within three business days unless the individual had at least 72 hours to review the agreement before signing or an authorized representative approved it. “Digitization” and “digitized sexually explicit material” are defined broadly to cover realistic depictions of nude body parts, computer-generated nudity, or portrayals of sexual conduct the person did not actually engage in.
The statute creates civil liability in three main ways: for persons who create and intentionally disclose nonconsensual digitized explicit material; for people who intentionally disclose such material they did not create; and for those who knowingly facilitate or recklessly aid or abet the prohibited conduct. Importantly, the law presumes that anyone who owns, operates, or controls a deepfake pornography service engaged in creation and disclosure and knew the depicted person did not consent—unless the operator produces evidence of express written consent.

AB 621 also targets upstream “enablers.” If a depicted individual or a public prosecutor provides evidence to an entity that is providing services enabling a deepfake pornography service—through a prominently displayed customer-service channel and containing specified information—the enabler must take all necessary steps to stop providing those services within 30 days.
A court may extend that period for ongoing law-enforcement activity. The bill specifies what evidence and contact information must be included in the notice and ties liability to failure to act after receiving it.

On remedies, a prevailing individual plaintiff can recover the defendant’s monetary gain, economic and noneconomic damages or statutory damages (ranging by work from $1,500 to $50,000, and up to $250,000 if committed with malice), punitive damages where appropriate, attorney’s fees, and injunctive relief.
Public prosecutors may bring civil actions without proving actual harm and can obtain injunctive relief plus civil penalties of $25,000 per violation (or $50,000 if malicious) and fees. The statute sets a three-year discovery statute of limitations, preserves constitutional and law-enforcement exceptions, bars certain defenses (disclaimers and internal policies), and explicitly does not apply to conduct protected by federal law, including Section 230.
Finally, the text clarifies that traditional internet service providers are not required to engage in activities that would violate California’s Title 15 consumer-protection provisions and are not liable for mere transmission, routing, or access provision.
The Five Things You Need to Know
Consent must be written, plain language, and include a general description of the digitized sexually explicit material and the work it will appear in; a depicted individual can rescind within three business days unless they had at least 72 hours to review before signing or an authorized representative approved it.
Operators of a “deepfake pornography service” are presumed to have known the depicted individual did not consent unless they produce evidence of the depicted individual’s express written consent—shifting the burden onto service operators.
A service provider that enables the ongoing operation of a deepfake site faces liability if a depicted individual or public prosecutor submits evidence through a prominently displayed reporting channel (naming the deepfake service, describing the enabling services, and providing contact information) and the provider fails to stop within 30 days (court extension allowed for law enforcement).
Statutory remedies are significant: victims may recover disgorgement, actual damages or statutory damages of $1,500–$50,000 per work (up to $250,000 for malice), punitive damages, and fees; public prosecutors may seek $25,000 per violation or $50,000 if malicious plus injunctive relief and fees.
The bill disallows common defenses: a disclaimer on the material or a stated in-platform prohibition against nonconsensual deepfakes does not shield a defendant; at the same time, the statute preserves federal preemption under Section 230 and a narrow ISP conduit safe harbor for mere transmission, routing, or access.
Section-by-Section Breakdown
Definitions and consent mechanics
This section supplies detailed definitions: what counts as digitization, digitized sexually explicit material, deepfake pornography service, nude, sexual conduct, malice, and who qualifies as an authorized representative. It also defines consent narrowly—written, plain language, describing the material and work—and creates a short rescission window (three business days) with two exceptions (72-hour review or rep approval). Compliance officers and counsel will need to rework any consent forms to match this exact statutory language to avoid disputes about form and timing.
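To make the rescission mechanics concrete, the timing rules above can be sketched as a small predicate. This is an illustrative simplification, not statutory text: the function name is ours, and it treats "business days" as plain weekdays, ignoring holidays and any additional nuances the statute or courts may supply.

```python
from datetime import datetime, timedelta

def may_rescind(signed_at: datetime,
                now: datetime,
                review_hours_before_signing: float,
                approved_by_authorized_rep: bool) -> bool:
    """Whether a depicted individual may still rescind consent.

    Illustrative sketch of the bill's rule: rescission is available for
    three business days after signing, unless the individual had at least
    72 hours to review the agreement before signing or an authorized
    representative approved it.
    """
    # Exceptions that cut off the rescission window entirely
    if review_hours_before_signing >= 72 or approved_by_authorized_rep:
        return False
    # Count business days (Mon-Fri) elapsed since signing; simplification
    # that ignores holidays
    business_days = 0
    d = signed_at.date()
    while d < now.date():
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            business_days += 1
    return business_days <= 3

# Example: consent signed Monday morning with only a brief review period
signed = datetime(2026, 3, 2, 10, 0)   # Monday
print(may_rescind(signed, datetime(2026, 3, 5, 10, 0), 2.0, False))  # Thursday: True
print(may_rescind(signed, datetime(2026, 3, 9, 10, 0), 2.0, False))  # next Monday: False
```

The point of the sketch is that both the window length and the exceptions matter: 72 hours of pre-signing review or representative approval removes the rescission right regardless of how soon the individual acts.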
Civil causes of action
This provision creates three distinct bases for liability: (1) creation plus intentional disclosure of nonconsensual digitized material, (2) intentional disclosure of such material the defendant did not create, and (3) knowing facilitation or reckless aiding/abetting of the prohibited conduct. The language separates creation from disclosure and explicitly captures facilitators, giving plaintiffs multiple theories to target a wide set of actors from content creators to distributors.
Presumptions against operators and notice-triggered enabler liability
Operators of services whose primary purpose is creating digitized explicit content are presumed to have known consent was absent unless they produce express written consent—an evidentiary shift that reduces plaintiffs’ proof burdens. For enablers (hosts, registrars, cloud providers, payment processors), the statute imposes conditional liability: after receiving evidence via a prominently displayed customer-service channel with required contents, the provider must cease enabling operations within 30 days or risk being treated as having facilitated the violation. Courts may extend the 30 days for law-enforcement operations, creating a procedural route for takedowns while accommodating investigations.
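The notice-triggered deadline described above can be expressed as simple date arithmetic. The 30-day figure comes from the bill; the helper function and its handling of a court-ordered extension are illustrative assumptions, not statutory language.

```python
from datetime import date, timedelta

def compliance_deadline(notice_received: date,
                        court_extension_days: int = 0) -> date:
    """Date by which an enabler must stop providing services after
    receiving a statutorily sufficient notice.

    Illustrative sketch: 30 days from receipt, plus any court-ordered
    extension granted for ongoing law-enforcement activity.
    """
    return notice_received + timedelta(days=30 + court_extension_days)

# Example: notice received March 1; court grants a 15-day extension
print(compliance_deadline(date(2026, 3, 1)))                          # 2026-03-31
print(compliance_deadline(date(2026, 3, 1), court_extension_days=15)) # 2026-04-15
```

Counting from receipt of a sufficient notice, rather than from the underlying violation, is what makes this a conditional-liability regime: an enabler's exposure begins only once the statutory notice lands.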
Defenses, protected speech exceptions, and forbidden defenses
The statute protects disclosures made in lawful contexts—reporting unlawful activity, law-enforcement duties, or in legal proceedings—and shields works of legitimate public concern, political or newsworthy value, and constitutionally protected commentary. However, it bars two commonly asserted defenses: neither a disclaimer stating the subject did not participate nor a platform policy banning nonconsensual deepfakes excuses liability. The bill thus draws a line between protected speech contexts and commercial or malicious distribution.
Remedies for private plaintiffs and public prosecutors
Private plaintiffs can pursue disgorgement, actual economic and noneconomic damages, or statutory damages per work ($1,500–$50,000; up to $250,000 for malice), punitive damages, attorney’s fees, and injunctive relief. Public prosecutors can sue without proving actual harm and may obtain injunctive relief plus per-violation civil penalties ($25,000 or $50,000 if malicious), fees, and other equitable relief courts deem appropriate. The remedies are cumulative and deliberately broad to create both compensatory and deterrent effects.
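As a rough illustration of how the statutory figures combine, the per-work and per-violation amounts can be sketched as below. The dollar figures come from the bill; the function names and the simple per-work summation model are our assumptions, and real awards depend on judicial findings (including whether malice is established).

```python
def private_statutory_range(works: int, malicious: bool) -> tuple[int, int]:
    """Return (low, high) statutory-damages range across `works` works.

    Illustrative: $1,500-$50,000 per work under the bill, with the
    ceiling rising to $250,000 per work if committed with malice.
    """
    per_work_low = 1_500
    per_work_high = 250_000 if malicious else 50_000
    return works * per_work_low, works * per_work_high

def prosecutor_penalty(violations: int, malicious: bool) -> int:
    """Civil penalty available to a public prosecutor: $25,000 per
    violation, or $50,000 per violation if malicious (sketch only)."""
    per_violation = 50_000 if malicious else 25_000
    return violations * per_violation

# Example: three works disclosed without malice
print(private_statutory_range(3, malicious=False))  # (4500, 150000)
print(prosecutor_penalty(3, malicious=True))        # 150000
```

Even this simplified arithmetic shows why the remedies are characterized as deterrent: statutory damages accrue per work and public penalties per violation, so exposure scales with the volume of material rather than with proven individual harm.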
Procedural limits, severability, and federal carveouts
The statute sets a three-year statute of limitations measured from discovery, makes provisions severable, and explicitly states it does not apply to conduct protected by federal law, including Section 230 of the Communications Decency Act. It also clarifies that traditional internet service providers are not required to take actions that would contravene California Title 15's rules and are not liable for mere transmission, routing, or provision of access—narrowing exposure for plain-carrier ISPs while preserving liability for active enablers.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Depicted individuals (victims of nonconsensual deepfakes): The bill gives them an affirmative civil route to recover disgorgement, damages, statutory awards, and injunctive relief, plus a short rescission window to withdraw consent in certain scenarios.
- Public prosecutors and local governments: The law authorizes civil enforcement by public prosecutors without proof of actual individual harm and provides per-violation civil penalties, creating a public enforcement tool to pursue operators and enablers.
- Advocacy and victim-support organizations: Organizations that assist victims gain a clearer legal framework to advise clients, submit notices to enablers, and coordinate with prosecutors on evidence and takedown requests.
Who Bears the Cost
- Operators of deepfake pornography services: The presumption of knowledge and the availability of statutory damages, disgorgement, and civil penalties expose operators to high financial risk and potential shutdowns.
- Service enablers (hosts, registrars, payment processors, CDNs, cloud providers): These entities face compliance costs and litigation risk if they fail to act after receiving the statutorily specified notice; they may need to build intake and moderation processes, hire legal teams, or suspend services to limit exposure.
- Platforms and legitimate AI toolmakers: Producers of general-purpose image or video synthesis tools could face reputational and operational costs if their services are used to create prohibited material and they are targeted as facilitators, even where the tool has many lawful uses.
Key Issues
The Core Tension
The central dilemma pits an effective, victim-centered remedy that removes harmful nonconsensual deepfakes quickly against the risk of imposing policing duties on infrastructure and platform actors, with attendant overblocking, costly compliance burdens, and potential conflicts with federal immunities and free-speech protections. The bill solves for swift redress at the risk of shifting difficult verification and moderation duties onto third parties.
The bill intentionally shifts evidentiary burdens onto service operators by presuming knowledge absent express written consent. That change accelerates relief for victims but raises questions about proof and forgery: what counts as adequate evidence of consent, how courts evaluate competing documents, and whether operators will face frivolous claims.
The notice-for-enablers path tries to limit liability to entities that continue to support an offending site after receiving specific information. But the statute requires that the evidence be submitted via a prominently displayed channel and include the submitter's contact information; these standards may produce disputes over adequacy of notice, impersonation, or jurisdictional reach when servers, registrars, or payment processors are outside California.
There is also a constitutional and practical tension between redress and free expression. The law preserves newsworthy and protected speech exceptions, but its broad definitions (for example, “primary purpose” of a service or what is “digitized” in a realistic way) and the bar on disclaimers as a defense could encourage overbroad takedowns by private actors seeking to avoid litigation.
Enablers receiving reports will face difficult operational choices: verifying claims risks delaying relief, while immediate suspension risks wrongful censorship and business disruption. Finally, the Section 230 carveout and the ISP conduit safe harbor narrow federal preemption but do not eliminate the prospect of federal challenges—particularly around the degree to which state law can require platforms or infrastructure providers to act on third-party claims without running afoul of federal immunities or interstate commerce principles.