Codify — Article

Stop the Scroll Act requires FTC-backed mental-health warning on covered platforms

Mandates conspicuous, in-app mental-health warning labels with Surgeon General concurrence, hourly redisplay rules, and FTC/state enforcement—effective one year after enactment.

The Brief

The Stop the Scroll Act directs the Federal Trade Commission, with concurrence from the Secretary of HHS acting through the Surgeon General, to require covered platform providers to display a conspicuous mental-health warning each time a user in the U.S. accesses the platform. Platforms must not bury the label in terms or hyperlinks, must keep it visible until the user exits or acknowledges the risk, and must re-display it after each hour of continuous use; the FTC must issue implementing regulations within 180 days.

Enforcement sits primarily with the FTC under its unfair-or-deceptive-practices authority, but the bill also creates a state parens patriae enforcement path with civil penalties calculated by days out of compliance or by the number of affected end users. The statute applies to platforms defined by the Trafficking Victims Prevention and Protection Act’s social-media definition and to anonymous content-sharing apps, and it takes effect one year after enactment—creating immediate design, compliance, and enforcement questions for platform operators and regulators.

At a Glance

What It Does

The bill requires covered platforms to show a prominent mental-health warning to U.S.-based users every time they access the service, prevents hiding the label in links or terms, and forces an hourly redisplay after users acknowledge and continue. The FTC must write regulations within 180 days and review them every five years with the Surgeon General’s concurrence.

Who It Affects

Covered platform providers as defined by the bill (social media per 42 U.S.C. 1862w and anonymous content-sharing apps) must change onboarding and session UX, link to federal mental-health resources such as 988, and stand up compliance, monitoring, and reporting processes. Regulators—FTC and HHS—and state attorneys general gain enforcement roles.

Why It Matters

This is a federal compelled-disclosure regime tailored to digital platforms that treats mental-health risk communication like traditional product warnings. It sets a technical and legal precedent for UX-level regulation of engagement mechanics and creates a new litigation vector and penalty structure for noncompliant platforms.


What This Bill Actually Does

The Stop the Scroll Act creates a federally required mental-health warning that must appear to U.S.-based users of certain online platforms. The law draws on an existing statutory definition of “social media platform” and separately covers anonymous content-sharing apps; together these are “covered platforms.” Each access must trigger a clear, conspicuous label warning of potential negative mental-health impacts and offering links to federal resources such as the 988 Lifeline.

The label must remain visible until the user leaves the platform or affirmatively acknowledges the risk and chooses to proceed.

If a user acknowledges and continues, the platform must redisplay the label after every hour of continuous use. The bill forbids implementing the warning only via hyperlinks or burying it in terms and conditions, prevents extraneous content that dilutes the label’s prominence, and limits user ability to disable the label other than the specified acknowledgment flow.

Covered platform providers therefore must instrument session tracking to detect “continuous use” and implement an interruptive UI that cannot be trivially bypassed.

Procedurally, the Commission must issue implementing regulations within 180 days of enactment and must review and update those regulations at least every five years with the Surgeon General’s concurrence. Enforcement treats violations as unfair or deceptive acts under the FTC Act, giving the FTC its usual investigatory and remedial tools; the statute also expressly allows state attorneys general to sue on behalf of residents (parens patriae) and to recover civil penalties for knowing or repeated violations.
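To make the session-tracking requirement concrete, here is a minimal sketch of the timing logic in Python. The one-hour threshold comes from the bill; the idle-gap cutoff and the class and method names are assumptions for illustration only, since the statute leaves the definition of “continuous use” to FTC rulemaking.

```python
HOUR = 3600     # redisplay after each hour of continuous use (per the bill)
IDLE_GAP = 300  # assumed: inactivity gap (seconds) that ends "continuous use"

class LabelScheduler:
    """Decides when the mental-health warning label must be shown.

    Timestamps are seconds (e.g. from time.time()). The IDLE_GAP
    threshold is hypothetical; the bill delegates the meaning of
    "continuous use" to FTC regulations.
    """

    def __init__(self) -> None:
        self.segment_start = None   # start of the current continuous segment
        self.last_seen = None       # last recorded user activity
        self.acknowledged = False   # user has clicked through the label

    def record_activity(self, now: float) -> bool:
        """Record user activity; return True if the label is due now."""
        if self.last_seen is None or now - self.last_seen > IDLE_GAP:
            # First access, or an idle gap broke continuity: treat this as
            # a fresh access, so the label must be shown again.
            self.segment_start = now
            self.acknowledged = False
        self.last_seen = now
        if not self.acknowledged:
            return True
        if now - self.segment_start >= HOUR:
            # An hour of continuous use has elapsed: redisplay the label.
            self.segment_start = now
            return True
        return False

    def acknowledge(self, now: float) -> None:
        """User acknowledged the risk and chose to continue."""
        self.acknowledged = True
        self.segment_start = now
```

Even this toy version surfaces the open rulemaking questions: what idle gap ends a session, and how cross-device or multi-tab activity feeds into `record_activity`.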

The civil-penalty calculation ties the fine to either days out of compliance or affected end users multiplied by the FTC Act’s maximum civil penalty, a formula that can scale quickly depending on the facts.

The Act becomes effective one year after enactment, creating a compressed clock for platform teams to change UX flows, for the FTC and HHS to craft joint regulations, and for legal teams to evaluate exposure and First Amendment or administrative-law challenges. The bill also extends the FTC’s enforcement reach to nonprofit organizations and common carriers for purposes of this law and includes extraterritorial jurisdiction where the violation touches U.S. users or acts in furtherance occur in the U.S.
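The formula’s scaling is easy to see with a quick sketch in Python. The per-violation ceiling is the FTC Act’s maximum civil penalty, adjusted annually for inflation; the dollar figure used below is a placeholder, not the current adjusted amount.

```python
def penalty_exposure(days_noncompliant: int, affected_users: int,
                     max_penalty: int) -> tuple[int, int]:
    """Return both statutory penalty bases under the bill's formula:
    days out of compliance, and affected end users, each multiplied
    by the FTC Act's maximum per-violation civil penalty."""
    return (days_noncompliant * max_penalty,
            affected_users * max_penalty)

# A three-day misconfiguration on a platform with one million affected
# users, using a placeholder $50,000 ceiling:
days_basis, user_basis = penalty_exposure(3, 1_000_000, 50_000)
# days basis: $150,000
# user basis: $50,000,000,000 -- the user-count basis dominates at scale
```

The gap between the two bases is why even short outages or misconfigurations could, on the user-count theory, generate enormous exposure for large platforms.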

The Five Things You Need to Know

1. The warning label must remain visible on-screen until the user exits the platform or actively acknowledges the potential for harm and chooses to continue.

2. If a user acknowledges and proceeds, the platform must redisplay the warning after each hour of continuous use.

3. The Commission must promulgate implementing regulations within 180 days of enactment and review them at least every five years with the Surgeon General’s concurrence.

4. State attorneys general can sue parens patriae; courts may impose civil penalties calculated by multiplying either the number of noncompliance days or the number of affected end users by the FTC Act’s maximum civil penalty (adjusted for inflation).

5. The Act takes effect one year after enactment, giving platforms a single-year compliance window from enactment to live deployment under the new rules.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1

Short title

Provides the Act’s name, the 'Stop the Scroll Act.' This is purely a caption but signals the public-health framing that runs through the findings and regulatory approach.

Section 2

Findings supporting federal action

Lists Congress’ factual predicates: associations between social media and mental-health risks, the Surgeon General’s 2023 advisory, and the analogy to tobacco/alcohol warnings. While nonbinding, these findings justify using the FTC’s consumer-protection authority and frame label conspicuousness and public-health rationale for courts and agencies.

Section 3

Key definitions (covered platforms, users, providers)

Defines 'covered platform' by reference to 42 U.S.C. 1862w’s social-media platform definition and adds 'anonymous content sharing platform' that does not require registration. The definition set matters because it determines which sites and apps must implement the label; operators whose services sit near the margins of these definitions will need legal analysis to assess coverage.

Section 4(a)–(c)

Label display rules and content restrictions

Mandates that covered-platform providers show a mental-health warning to U.S.-based users on every access, prevents hiding the label in hyperlinks or terms, prohibits extraneous content that dilutes prominence, and bars users from disabling the label other than through the two statutorily permitted paths (exit or acknowledgment). It also requires linking to federal resources like 988. Practically, this forces platforms to build session detection, interruptive UI, and controls that make the label reappear after one hour of continuous use.

Section 4(d)–(e)

Rulemaking and periodic review

Gives the FTC, with Surgeon General concurrence, 180 days to promulgate regulations under the Administrative Procedure Act and requires a review at least every five years. This creates a joint technical-health regulatory process: the Surgeon General’s concurrence injects medical authority but also the potential for policy friction; the five-year review ties regulation to evolving science and market changes but may lag fast-moving product innovations.

Section 5(a)

FTC enforcement and scope

Treats violations as unfair or deceptive acts under the FTC Act and imports the FTC’s enforcement tools, investigatory powers, and penalties. Notably, the section waives typical jurisdictional limits to cover nonprofits and common carriers for purposes of this law, meaning entities normally outside FTC reach for some matters may be swept in here.

Section 5(b)–(c)

State enforcement, civil penalties, and extraterritoriality

Authorizes state attorneys general to sue as parens patriae, requires notice to the FTC and allows FTC intervention, and sets a civil-penalty formula that multiplies days noncompliant or the number of affected end users by the FTC Act maximum penalty. The Act asserts extraterritorial jurisdiction where the violation involves a U.S. individual or acts in furtherance occur in the U.S., expanding reach over offshore platforms serving U.S. users.

Section 6

Effective date

Makes the statute effective one year after enactment, giving platforms a fixed transition period to implement UI/UX changes and for the FTC/HHS to draft and finalize regulations.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • U.S.-based platform users who receive standardized warnings and direct links to federal mental-health resources (e.g., 988), increasing the visibility of crisis assistance at moments of online engagement.
  • Public-health agencies and mental-health service providers that may see improved referrals and measurable touchpoints from platform interactions because the label must link to federal resources.
  • Regulators and public-interest advocates seeking a concrete, enforceable tool to reduce harms from excessive or harmful online engagement—this statute formalizes a disclosure approach they can measure and litigate around.

Who Bears the Cost

  • Covered platform providers (including major social platforms and anonymous-sharing apps) that must redesign onboarding and session flows, implement session-tracking to detect 'continuous use,' and maintain compliance programs and reporting—an operational and engineering cost with potential ad- and engagement-revenue impacts.
  • Startups and smaller platforms that will face disproportionate UX and engineering burdens to implement interruptive, non-bypassable labels and to track continuous-use metrics, potentially reducing innovation or shifting user acquisition strategies.
  • The FTC and HHS (Surgeon General) which must craft joint regulations, perform reviews every five years, and carry enforcement workloads; the statute also expands FTC jurisdictional reach, which may require additional enforcement resources and interagency coordination.
  • State attorneys general who may litigate under the parens patriae authority and manage complex proof-of-harm and large-scale compliance accounting, increasing state-level enforcement costs and docket pressure.

Key Issues

The Core Tension

The central dilemma is public health versus practicability. Policymakers want a visible, enforceable warning to reduce online harm, but forcing interruptive, non-bypassable labels at scale imposes technical, constitutional, and commercial costs, and may produce habituation or drive users to less-regulated corners of the internet. Choosing where to draw that line, and how prescriptive the remedy should be, involves real trade-offs with no risk-free option.

The bill presumes conspicuous warnings will influence behavior the way tobacco or alcohol labels have, but digital attention dynamics differ: frequent (hourly) redisplay risks habituation or interface workarounds by users and platforms, potentially blunting the intended effect. Measuring the statutory standard of “continuous use” raises technical questions: does background audio or video count? How are multiple tabs or cross-device sessions treated? And how will platforms reliably log U.S. presence without creating new privacy or geolocation burdens?

Those implementation details fall to FTC rulemaking, but they are central to whether the label is effective or merely cosmetic.

The enforcement construct imports the FTC’s unfair-or-deceptive-practices framework and augments it with a heavy state-litigation tool and a penalty formula that scales with days or end-user counts. That formula can create enormous exposure for large platforms during short outages or misconfigurations; it also invites strategic litigation by states.

The requirement that the Surgeon General concur in regulations adds medical legitimacy but risks politicization of what is framed as a technical disclosure (e.g., disputes over label language or linkage). Finally, extraterritorial reach and the definition choices (borrowing 42 U.S.C. 1862w and adding 'anonymous' apps) create coverage uncertainty that will fuel pre-enforcement testing or narrow compliance postures from platforms.
