HB 908 proposes to amend Section 230 of the Communications Act with the stated aim of stopping censorship: it narrows platform immunity to the moderation of “unlawful material” and adds a new protection for actions that give users the option to restrict access to any other material, including content that is constitutionally protected. The bill’s core maneuver is to replace the current phrasing about content that is obscene, lewd, or otherwise objectionable with a standard keyed to unlawfulness, and to create an explicit shield for user-access control features.
Practically, this means platforms could face liability for moderation decisions involving content that is not clearly unlawful but is deemed objectionable under existing norms. The measure also extends statutory protection to platforms that offer user-restriction options, which could shape how services design filters, parental controls, and related user-control tools.
The bill does not define “unlawful material” within the text, leaving interpretation to governing law and potentially raising questions about cross-jurisdictional content.
At a Glance
What It Does
It changes the heading of Section 230(c) from addressing screening of offensive material to screening of unlawful or objectionable material. It replaces the list of specific content categories with a single “unlawful material” standard in subparagraph (A) and adds a new subparagraph (C) to shield platform actions that provide users with restriction options for any material.
Who It Affects
Online platforms hosting user-generated content, including social media, video-sharing sites, and other digital services that rely on Section 230 immunity to host third-party content.
Why It Matters
It narrows the scope of protected moderation and introduces a new protected action for offering user-based content restrictions, altering the risk calculus for moderation decisions and product features.
What This Bill Actually Does
HB 908 targets the scope of immunity under Section 230 of the Communications Act. The bill would strike the current reference to material that platforms or users find objectionable and replace it with a standard tied to unlawful material.
It also adds a new provision that protects platform actions designed to give users the option to restrict access to any material, even if that material is constitutionally protected. In short, immunity would be tethered to unlawfulness, while user-facing content-restriction tools would receive statutory protection.
For platforms, the practical upshot is a potential liability shift for moderating content that isn’t clearly unlawful but is deemed objectionable, depending on how the law is interpreted in practice. For users, the bill could enhance controls over what they see by codifying protections for restriction features, but it may also complicate how platforms decide what to remove or demote.
The text does not define what constitutes “unlawful material,” which leaves open questions about enforcement and cross-border content. Because the bill narrowly targets the immunity language in Section 230 and adds a specific new protection for restriction options, it creates a policy space where moderation practices and user-control features could expand or contract, depending on how courts and regulators interpret the new standard.
The Five Things You Need to Know
- The heading of Section 230(c) would change to address “unlawful or objectionable material.”
- Subparagraph (A) would replace the old list describing objectionable content with a standard keyed to “unlawful material.”
- A new subparagraph (C) would shield actions that give users the option to restrict access to any material, including constitutionally protected content.
- The text narrows immunity for moderating content that is not clearly unlawful, potentially increasing platform exposure to liability for certain moderation decisions.
- The bill does not define “unlawful material” or the scope of “restrict access,” leaving important interpretive questions unresolved.
Section-by-Section Breakdown
Short Title
The act is named the Stop the Censorship Act. This section makes no substantive policy changes beyond establishing the short title.
Amendment to Section 230(c) Heading
The heading would be amended by striking “AND SCREENING OF OFFENSIVE MATERIAL” and inserting “OF UNLAWFUL OR OBJECTIONABLE MATERIAL”, signaling a shift in the scope of protected content at the core of Section 230 immunity.
Amendment to 230(c)(2)(A)
Subparagraph (A) would change the scope of immunized moderation: instead of covering actions against content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,” immunity would cover only actions against material that is unlawful. This narrows the set of moderation decisions that enjoy immunity and reduces protection for restricting broadly defined objectionable content.
New Subparagraph (C) – User Restriction Options
A new subparagraph (C) would add immunity for any action taken to provide users with the option to restrict access to any other material, including content that is constitutionally protected. This creates statutory protection for a product feature, the ability to apply access restrictions, even where the underlying content is constitutionally protected.
Who Benefits and Who Bears the Cost
Who Benefits
- Users who want tangible controls over what they see (e.g., parents or guardians using restriction tools) and the advocates who push for greater user control over content exposure.
- Online platforms that offer built‑in restriction or parental‑control features could leverage explicit immunity for those tools, encouraging broader deployment of such controls.
- Digital rights or transparency groups that favor clearer limits on moderation without compromising access to lawful content may view the new clause as clarifying user empowerment.
Who Bears the Cost
- Platforms face higher liability risk for moderation choices involving content that is objectionable but not unlawful, potentially increasing compliance burdens and chilling moderation practices.
- Smaller platforms with limited legal resources may struggle to navigate the changed immunity landscape and related enforcement risks.
- Content creators and publishers who rely on broad immunity for user‑generated content may see increased exposure to liability if content is deemed objectionable but not unlawful.
Key Issues
The Core Tension
The central dilemma is whether to shield platforms broadly from liability for significant moderation decisions or to define narrowly which moderation actions remain immunized while granting users explicit control tools. Each approach can curtail or invite different forms of speech and market behavior.
The core tension in this bill arises from balancing free expression with the need to curb harmful or unlawful content. By narrowing immunity to unlawful material only, moderation decisions about content that stakeholders deem objectionable—but not clearly unlawful—could face greater scrutiny or liability exposure.
At the same time, the new protection for user-access restriction features could encourage more robust user controls, potentially reducing individual users’ exposure to unwanted content. However, without a precise definition of “unlawful material” and without limits on what counts as a permissible restriction option, the bill risks inconsistent enforcement and compliance gaps across platforms.
The practical implementation would require careful alignment with existing case law and regulatory interpretation to avoid a patchwork of platform‑specific standards.