This amendment to the Online Safety Act 2021 removes the statutory definitions of “serious harm” and “serious harm to a person's mental health” and replaces the existing test for abusive material with a two-part ordinary-reasonable-person standard: whether the material is aimed at a particular Australian adult, and whether it would be menacing, harassing or seriously offensive to a reasonable person in that adult's position. It also sets out factors for deciding when material is “seriously offensive,” including community standards, artistic or educational merit, and the general character of the material.
Critically, the bill adds a mandatory consent consideration when private sexual material appears on specified online services: if private sexual material is posted to a social media service, a relevant electronic service or a designated internet service, decision-makers must take into account whether the subject consented to its being posted. The practical effect is a lower evidentiary threshold for removing adult-targeted abusive content and a clearer statutory footing for non-consensual intimate imagery complaints, with consequential compliance and moderation impacts for online services and clearer statutory tests for victims and regulators.
At a Glance
What It Does
The bill repeals the Act’s definitions of ‘serious harm’ and ‘serious harm to a person’s mental health’ and replaces parts of the abusive-material test to require that an ordinary reasonable person conclude the material was likely intended to affect a particular Australian adult and that such a person, in that adult’s position, would find it menacing, harassing or seriously offensive. It adds a non-exhaustive list of factors for deciding ‘seriously offensive’ and requires consent to be considered when private sexual material is posted on covered services.
Who It Affects
The changes directly affect operators of social media services, relevant electronic services and designated internet services, users who post or host potentially abusive material, the eSafety Commissioner (and any decision‑makers under the Act), and adults who are the target or subject of online abuse or non‑consensual intimate imagery.
Why It Matters
By removing the ‘serious harm’ language and sharpening the intent and offensiveness tests, the bill lowers the bar for content to be caught by the Act and pushes platform moderation toward a consent‑sensitive approach for intimate material. That will reshape notice-and-takedown dynamics, increase regulatory and compliance work for platforms, and change the legal calculus for victims and advisers pursuing removals.
What This Bill Actually Does
The bill makes three focused changes to the Online Safety Act 2021. First, it repeals the Act’s statutory definitions of ‘serious harm’ and ‘serious harm to a person’s mental health.’ Those definitions have been a gating concept for some adult protections; their removal signals a shift away from a harm-based threshold toward a test centred on offensiveness and targeting.
Second, the bill rewrites the core test for when material is actionable. The new test requires that an ordinary reasonable person conclude the material was likely intended to affect a particular Australian adult, and that an ordinary reasonable person in the target’s position would regard the material as menacing, harassing or seriously offensive.
That two-part framing introduces intent-to-target and a target-specific assessment of offensiveness into the statutory standard.

Third, the bill replaces the Act’s guidance on how to decide ‘seriously offensive’ with a short list of factors: whether the material significantly departs from community standards of decency, any literary, artistic or educational merit, and the general character of the material (for example, medical or legal content). Importantly, when private sexual material appears on a social media service, relevant electronic service or designated internet service, decision-makers must take into account whether the subject(s) consented to its posting.
The consent question is mandatory in that limited context.

Taken together, these changes make the regime more directed at adult targets of online abuse and give decision-makers an explicit statutory basis to treat non-consensual intimate imagery differently when it appears on covered services. The practical consequences will be felt in platform policies, notice procedures, evidentiary expectations for complainants, and the volume and type of matters the eSafety Commissioner and compliance teams will need to triage and decide.
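For platform trust-and-safety and compliance teams, the amended standard is effectively a two-limb conjunctive check. The sketch below is a minimal, hypothetical illustration of how a complaint-triage pipeline might encode it; the `Complaint` record and its field names are our own assumptions for illustration, not terms defined by the bill, and the hard judgment calls remain human ones.

```python
from dataclasses import dataclass

@dataclass
class Complaint:
    """Hypothetical triage record for material reported by an Australian adult."""
    targets_specific_adult: bool   # Limb 1: would an ordinary reasonable person
                                   # conclude the material was likely intended to
                                   # affect this particular adult?
    menacing_or_harassing: bool    # Limb 2 input, assessed from the position
                                   # of the targeted adult
    seriously_offensive: bool      # Limb 2 input, assessed under the s 8 factors

def material_is_actionable(c: Complaint) -> bool:
    """Sketch of the amended two-part test: both limbs must be satisfied."""
    limb_one = c.targets_specific_adult
    limb_two = c.menacing_or_harassing or c.seriously_offensive
    return limb_one and limb_two

# Example: targeted material that a reasonable person in the target's
# position would find harassing is caught even if not 'seriously offensive'.
print(material_is_actionable(
    Complaint(targets_specific_adult=True,
              menacing_or_harassing=True,
              seriously_offensive=False)))  # True
```

The design point worth noting is the conjunction: untargeted but offensive material fails limb one, and targeted but innocuous material fails limb two.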
The Five Things You Need to Know
The bill repeals the Act’s definitions of “serious harm” and “serious harm to a person’s mental health,” eliminating those statutory gates.
It replaces paragraphs 7(1)(b)–(c) with a two‑part test: (1) whether material was likely intended to have an effect on a particular Australian adult; and (2) whether an ordinary reasonable person in that adult’s position would find it menacing, harassing or seriously offensive.
Section 8 is rewritten to list factors for determining ‘seriously offensive’ material, including departure from community standards, literary/artistic/educational merit, and the general character of the material.
When private sexual material is posted on a social media service, relevant electronic service or designated internet service, decision‑makers must take into account whether the subject(s) consented to the material being posted.
The amending Act commences the day after Royal Assent, making the new tests and the mandatory consent consideration applicable immediately upon commencement.
Section-by-Section Breakdown
Remove the 'serious harm' definitions
This provision repeals both the definition of 'serious harm' and 'serious harm to a person's mental health' from the Online Safety Act. Practically, that removes a statutory threshold that courts, the eSafety Commissioner and platforms have used to weigh whether content met the Act’s protections for adults. Without those definitions, decision‑makers must apply the amended tests elsewhere in the Act rather than rely on a harms‑centric definition.
Recast the core test for abusive material to focus on intent and target-specific offensiveness
This change substitutes new wording in the Act’s main test: material will be caught if an ordinary reasonable person would conclude it was likely intended to affect a particular Australian adult, and if an ordinary reasonable person in that adult’s position would regard it as menacing, harassing or seriously offensive. The two elements split the analysis between the content’s likely purpose (targeting) and its likely impact on a reasonable person in the target’s position, shifting the emphasis from a community-wide harms test to a person-focused offensiveness test.
Remove the explanatory note to the core test
This item deletes an earlier note that accompanied subsection 7(1). Notes often provide interpretive guidance; removing it signals Parliament prefers the statutory text to stand alone. That places a greater interpretive burden on decision‑makers and courts to define how the new two‑part test operates in practice without the previously appended clarification.
Set factors for deciding when material is 'seriously offensive' and require consent check for private sexual material
Section 8 is replaced with a concise list of factors for the ‘seriously offensive’ inquiry: whether the material departs significantly from community standards of morality and decency, any literary, artistic or educational merit, and the general character of the material (including medical, legal or scientific character). Subsection (2) adds a mandatory consent consideration where private sexual material is posted on a social media service, relevant electronic service or designated internet service: the decision-maker must take into account whether the subject(s) consented. This provision narrows the evaluative factors but elevates consent to a mandatory consideration in intimate-image cases on covered platforms.
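As a rough illustration of how a moderation workflow might surface these factors and the consent flag, here is a short Python sketch. The `Material` record, `ServiceType` enum and field names are assumptions made for illustration; the statute lists factors to weigh, not a scoring formula, so the functions assemble inputs to a human decision rather than produce a verdict.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ServiceType(Enum):
    SOCIAL_MEDIA = auto()
    RELEVANT_ELECTRONIC = auto()
    DESIGNATED_INTERNET = auto()
    OTHER = auto()

# The three service categories the mandatory consent consideration covers.
COVERED_SERVICES = {
    ServiceType.SOCIAL_MEDIA,
    ServiceType.RELEVANT_ELECTRONIC,
    ServiceType.DESIGNATED_INTERNET,
}

@dataclass
class Material:
    """Hypothetical assessment record for a piece of posted material."""
    departs_from_community_standards: bool
    merit: str              # e.g. "literary", "artistic", "educational", "none"
    general_character: str  # e.g. "medical", "legal", "scientific", "other"
    is_private_sexual: bool

def seriously_offensive_factors(m: Material) -> dict:
    """Assemble the replacement s 8 factors for the decision record."""
    return {
        "departs_from_community_standards": m.departs_from_community_standards,
        "literary_artistic_educational_merit": m.merit,
        "general_character": m.general_character,
    }

def consent_must_be_considered(m: Material, service: ServiceType) -> bool:
    """Flag when the decision-maker must weigh the subject's consent:
    private sexual material posted on a covered service."""
    return m.is_private_sexual and service in COVERED_SERVICES
```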
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Adults targeted by harassment or menacing online material — they gain a statutory test focused on whether material was intended to affect them and whether a reasonable person in their position would find it menacing, harassing or seriously offensive, which can make removals and complaints easier to bring.
- People whose private sexual material is shared without consent — the amendment requires decision‑makers to consider consent when such material appears on covered services, strengthening grounds for takedown requests.
- Advocacy groups and victim service organisations — a clearer statutory emphasis on targeting and consent gives them a more direct legal framework to support complainants and to press platforms and regulators for action.
- Regulators and adjudicators (e.g., the eSafety Commissioner) — the new list of factors and the mandatory consent consideration provide a sharper statutory lens for decisions, reducing reliance on the prior, broader 'serious harm' concept.
Who Bears the Cost
- Social media platforms, relevant electronic services and designated internet services — they will face more and earlier removal requests and need to update policies, content‑assessment workflows and staffing to apply the new intent and consent tests.
- Small platforms and startups — the increased moderation and compliance burden will be proportionately heavier for smaller operators without established moderation infrastructure.
- Publishers and journalists — the broadened offensiveness test and the community‑standards metric may increase takedown pressure on content with legitimate public‑interest, educational or artistic value, forcing tougher editorial choices.
- The eSafety Commissioner and administrative decision‑makers — a likely rise in complaints and more fact‑intensive consent inquiries will increase case volumes and may require additional resourcing to avoid backlog and ensure consistent determinations.
Key Issues
The Core Tension
The central dilemma is balancing easier removal of adult-targeted abusive and non-consensual intimate material against the risk that subjective, context-heavy judgments suppress legitimate expression and increase administrative burdens. The amendment improves accessibility for victims but shifts the problem to operationalising a context-dependent, person-specific standard.
The bill trades a harms‑based threshold for a person‑centred offensiveness and targeting test. That lowers the legal bar for capturing abusive adult‑targeted content but introduces subjectivity: what an 'ordinary reasonable person in the position of a particular Australian adult' would find menacing or seriously offensive depends heavily on context (age, cultural background, profession, prior relationship between parties).
Regulators and platforms will face fact‑intensive inquiries that are harder to automate without producing both under‑ and over‑removal.
The consent consideration for private sexual material narrows the inquiry in one respect (consent must always be weighed when intimate content is posted on covered services) but raises practical evidentiary problems: how do platforms and regulators reliably determine consent, particularly for older content or cross-border uploads? The amendment does not prescribe a standard of proof or an evidence checklist, so decision-makers must develop their own operational guidance.
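Because the amendment leaves the evidentiary side open, platforms will likely need their own intake structure for the consent question. The sketch below is purely illustrative of what such operational guidance might record; every field name is an assumption, not anything the bill specifies.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentInquiry:
    """Hypothetical intake record for the mandatory consent consideration."""
    subject_statement: Optional[str] = None    # complainant's account of consent
    uploader_claim: Optional[str] = None       # any consent asserted by the poster
    prior_publication: bool = False            # was the material already public?
    upload_jurisdiction: Optional[str] = None  # cross-border uploads complicate proof
    content_age_known: bool = False            # older material is harder to evidence

    def needs_escalation(self) -> bool:
        # With no statutory standard of proof, flag thin records for human
        # escalation rather than forcing a binary consent finding.
        return self.subject_statement is None and self.uploader_claim is None
```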
Finally, removing the statutory 'serious harm' definitions may create interpretive gaps where other parts of the Act previously relied on those terms; courts and the Commissioner will need to reconcile those references, risking litigation over transitional interpretation and scope.