Codify — Article

Australia bill creates deepfake takedown regime and new tort for wrongful disclosure

Establishes an Online Safety removal-and-penalty regime for nonconsensual deepfakes and a standalone cause of action in the Privacy Act with injunctive and damages remedies.

The Brief

The Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 adds a bespoke framework for “deepfake material” to the Online Safety Act 2021 and creates a separate cause of action in the Privacy Act 1988 for wrongful use or disclosure of deepfakes. It gives the eSafety Commissioner new complaint, investigatory and notice powers to require removal of nonconsensual deepfakes from social media, electronic and hosting services, and attaches civil penalties and remedial directions for non‑compliance.

Separately, the Bill inserts Schedule 3 into the Privacy Act to create a tort that allows an affected individual to sue for wrongful use or disclosure of deepfake material where the defendant knew or was reckless about the material’s artificial origin, and where the plaintiff suffered detriment or the defendant profited. The statutory cause of action is actionable without proof of damage, permits injunctions and a range of remedies, and contains layered exemptions (journalists, agencies, law enforcement, under‑18s) and time limits for bringing proceedings.

At a Glance

What It Does

The Bill defines "deepfake material" and requires complainants first to complain to the service provider; if the material remains after 48 hours, the eSafety Commissioner may issue removal notices to platforms, end-users or hosting providers requiring removal within 24 hours. Posting nonconsensual deepfakes by end-users ordinarily resident in Australia attracts a civil penalty, and the Commissioner can issue remedial directions and formal warnings. Separately, the Privacy Act gains a statutory tort of wrongful use or disclosure of deepfakes, with injunctive relief, damages (including for emotional distress) and accounts of profit.

Who It Affects

Social media and electronic service providers, hosting providers, end-users ordinarily resident in Australia, publishers and journalists (who have specified exemptions), the eSafety Commissioner (expanded functions), and courts that will hear new Privacy Act causes of action. Law enforcement and intelligence agencies are carved out in many contexts.

Why It Matters

The Bill creates the first dedicated statutory takedown pathway for AI‑generated impersonations in Australia and a standalone civil remedy for victims — shifting liability and moderation expectations from ad hoc platform policy to a legislated standard. It also embeds constitutional scaffolding to broaden federal reach, which matters for cross‑jurisdictional enforcement and litigability.


What This Bill Actually Does

The Bill introduces a tailored definition of "deepfake material" into the Online Safety Act: visuals, audio or combinations that portray an individual's face or voice realistically but falsely because they were created or substantially altered using technology. The Online Safety amendments make the eSafety Commissioner responsible for administering complaints about deepfakes and empower the Commissioner to investigate and act.

A complaint pathway requires a complainant to have first raised the issue with the service provider and to provide evidence of that complaint; the Commissioner can then investigate and, if satisfied that provision of the material is nonconsensual and not otherwise exempt, issue a removal notice.

Removal notices can be directed at the platform provider, the end-user who posted the material, or a hosting service provider. A notice requires "all reasonable steps" to remove or cease hosting the material within 24 hours (or a longer period the Commissioner allows).

The statute sets a high‑value civil penalty (500 penalty units) for posting contravening material and for failing to comply with a removal notice or remedial direction; the Commissioner may also issue formal warnings and remedial directions aimed at preventing future contraventions. The Act builds in ministerial rule‑making power to tailor exemptions or conditions and requires the Commissioner to report specified statistics about notices and complaints each year.

Schedule 3 (inserted into the Privacy Act) establishes a separate tort — "wrongful use or disclosure of deepfake material." To succeed, a plaintiff must be a subject of the deepfake, show the defendant used or disclosed it knowing it was created or altered with technology (or being reckless as to that fact), and show detriment to the plaintiff or a profit to the defendant; importantly, the cause of action is actionable without proof of damage.

Remedies include injunctions (available at any stage), damages (including for emotional distress), orders for destruction or delivery up of material, apologies and, in exceptional cases, punitive damages or an account of profits. The Schedule contains multiple exemptions — journalists, certain agencies, law enforcement and intelligence bodies, and children under 18 — and provides procedural rules including a summary‑judgment power for courts and a single publication rule that fixes the date of actionable publication.

Practically, the Bill sets divergent thresholds and territorial hooks across the two Acts: the Online Safety regime focuses on material provided on regulated services and requires complainant engagement with the service before Commissioner intervention; the Privacy Act tort relies on the subject's ordinary residence in Australia and provides a private litigation route that does not depend on an eSafety complaint.

Both tracks create new obligations for platforms and new pathways for victims — one administrative and rapid removal‑focused, the other judicial and compensatory — and will interact with existing cyberbullying, intimate image, and defamation laws.

The Five Things You Need to Know

1

The Online Safety amendments allow the eSafety Commissioner to give removal notices requiring platforms, individual end-users or hosting providers to remove or cease hosting nonconsensual deepfakes within 24 hours (or a longer period allowed by the Commissioner).

2

A person who posts nonconsensual deepfake material as an end-user and is ordinarily resident in Australia faces a civil penalty of 500 penalty units under section 93B; failure to comply with a removal notice or remedial direction also attracts a 500‑unit penalty.

3

Complainants must first lodge a complaint with the service provider and give the provider 48 hours to remove the material before the Commissioner may issue a removal notice under sections 93D and 93F (unless other statutory conditions apply).

4

Schedule 3 to the Privacy Act creates a new tort of wrongful use or disclosure of deepfake material that is actionable without proof of damage; a plaintiff must show the defendant knew or was reckless that the material was created or altered using technology and that the plaintiff suffered detriment or the defendant profited.

5

Special time limits apply: victims under 18 must commence proceedings before their 21st birthday; other plaintiffs must sue by the earlier of one year from awareness or three years from the use/disclosure, subject to court discretion (maximum extension up to six years).

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Schedule 1 — Section 21A

Core definition and consent standard for deepfake material

Section 21A defines "deepfake material" as stills, moving images, audio or combinations that depict an individual's face or voice realistically but falsely because of creation or alteration using technology. It also defines "subject" and sets a consent standard: express, voluntary and informed consent is required (or consent by a parent/guardian for those under 16). The practical effect is to target AI‑generated impersonations while excluding existing categories (cyberbullying, intimate images, abhorrent violent conduct) and to require a high bar for lawful posting.

Schedule 1 — Division 4A (Sections 37A–37B)

Complaint intake and investigative gate for the Commissioner

Division 4A sets out who may complain (subjects aged 16+ or authorised "responsible persons" for younger subjects) and requires complainants to provide evidence that they previously complained to the service provider before seeking a removal notice from the Commissioner. The Commissioner has broad investigatory discretion under section 37B but can terminate investigations; evidentiary forms (receipt numbers, screenshots, statutory declarations) and non‑legislative procedural rules give the Commissioner flexibility but also create discretionary thresholds that will determine how readily complaints progress to notices.

Schedule 1 — Part 7A (Sections 93B–93C)

Posting prohibition and civil penalty for end-users

Section 93B makes it a civil‑penalty contravention for an end-user ordinarily resident in Australia to post deepfake material when the resulting provision on a regulated service is nonconsensual and the poster is aware of that fact; the statute ties liability to residence and awareness. The Commissioner may instead issue a formal warning under section 93C, creating an administrative enforcement ladder before or alongside penalty action.

Schedule 1 — Part 7A (Sections 93D–93G and 93L)

Removal notices to platforms, users and hosts; compliance obligation

Sections 93D–93F authorise removal notices to platform providers, end-users and hosting service providers once statutory conditions are met (including the 48‑hour window after a service complaint). Section 93G makes compliance mandatory to the extent possible and attaches the same 500‑unit penalty for noncompliance; section 93L allows the Commissioner to notify providers when repeated contraventions indicate systemic noncompliance. These provisions import a fast takedown rhythm (48 hours to act, then 24 hours from notice) that will press platform moderation workflows and cross‑border content handling.

Schedule 1 — Part 7A (Section 93J and reporting changes)

Remedial directions, formal warnings and transparency reporting

Section 93J lets the Commissioner issue remedial directions to persons who contravene or are contravening the posting prohibition, backed by a civil penalty. The Bill also expands annual reporting requirements to capture counts of removal notices, directions and informal notices specific to deepfakes, increasing public visibility of enforcement activity and giving the Commissioner data to identify repeat offenders or problematic services.

Schedule 2 (Schedule 3 to Privacy Act) — Part 2 (Clauses 7–14)

New tort: elements, remedies and procedural rules

Schedule 3 establishes the tort of wrongful use or disclosure: a plaintiff must be a subject of a deepfake, the defendant must have used or disclosed it knowing of, or being recklessly indifferent to, its artificial origin, and the plaintiff must have suffered detriment or the defendant profited (the cause of action is actionable without proof of damage). Remedies span injunctions, damages (including for emotional distress), correction orders, destruction/delivery up, apologies and, in exceptional cases, exemplary damages or an account of profits. The Schedule also provides for summary dismissal where claims have no reasonable prospects and allows courts to determine exemptions early in proceedings.

Schedule 2 (Schedule 3 to Privacy Act) — Part 3 (Clauses 16–21)

Exemptions for journalists, agencies and law enforcement

Clauses 16–21 create layered exemptions: professional journalists (as defined) and journalistic material are excluded where the use/disclosure is part of collecting or publishing news; agencies and state/territory authorities (other than intelligence or law enforcement bodies) have good‑faith carve‑outs; law enforcement and intelligence agencies are broadly exempt in their operational roles. These carve‑outs aim to safeguard public‑interest reporting and official functions but will require courts to delineate their scope in close cases.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Subjects of deepfakes (victims): the Bill provides both a rapid administrative takedown path via the eSafety Commissioner and a private cause of action with injunctive and compensatory remedies, increasing practical avenues to remove material and obtain redress.
  • Privacy lawyers and civil‑society advocates: the statutory tort and explicit remedies create clearer legal claims and precedents to advance privacy‑based relief and strategic litigation.
  • Regulators and researchers tracking online harm: expanded reporting obligations give the eSafety Commissioner structured data on removal notices and repeat offenders, improving oversight and policy development.

Who Bears the Cost

  • Social media and electronic service providers: they must build or adapt notice‑and‑takedown workflows to meet the 48‑hour and 24‑hour windows, respond to Commissioner investigations, and face potential 500‑unit penalties and public statements about repeat contraventions.
  • Hosting service providers and content platforms: hosting providers face specific cease‑hosting notices and practical burdens in identifying, locating and removing cross‑host copies of material, potentially across jurisdictions.
  • End-users and content creators: Australian end-users who post deepfakes risk significant civil penalties and remedial directions; creators who rely on synthetic media for legitimate expression will need to document consent to avoid liability.
  • Courts and the justice system: private causes of action and injunctive claims will increase case volumes (including evidentiary disputes about origin and the "knowing or reckless" mental element), imposing costs on judiciary resources.

Key Issues

The Core Tension

The central trade‑off is between giving individuals fast, practical protection against humiliating and harmful synthetic impersonations and preserving freedom of expression, journalistic inquiry and official functions. Stronger, faster takedown and private remedies reduce harm to subjects, but they increase the risk of over‑removal, contested exemptions, and lengthy, technically complex litigation over authorship, consent and mens rea.

The Bill mixes two enforcement pathways — an administrative takedown regime under the Online Safety Act and a private tort under the Privacy Act — that overlap but use different jurisdictional hooks, definitions and standards of proof. The Online Safety path is behaviourally targeted at regulated services and emphasises rapid removal through procedural prerequisites (a prior complaint, a 48‑hour window) and short statutory compliance windows (24 hours after notice).

The Privacy Act tort, by contrast, relies on ordinary residence and a mental element (knowledge or recklessness) and gives plaintiffs a full suite of judicial remedies. This bifurcation may produce forum shopping, duplicate proceedings, or different outcomes for the same material depending on which route a victim pursues.

Operationally, enforcement turns on the ability to identify deepfakes reliably and to prove nonconsent or a poster’s state of mind. The Commissioner’s tests are administrative and discretionary, but courts will need to decide in litigation whether material was "created or altered" using technology and whether the defendant knew or was reckless — fact patterns that can be technically complex and costly to litigate.

The Bill’s exemptions for journalism and official functions aim to protect legitimate uses, but the definitions (for example, who qualifies as a "journalist" or what counts as "journalistic material") will trigger litigated boundary disputes. Finally, cross‑border hosting and the global nature of platforms mean that removal notices and compliance obligations will encounter jurisdictional limits; platforms may respond by broad takedowns or geoblocking to limit exposure, creating potential over‑removal and speech‑chilling effects.
