
California AB56 mandates black-box warning labels on social media platforms

Requires covered platforms to show daily and repeated large-format warnings about youth mental-health risks, with an age exemption and no private right of action.

The Brief

AB56 (Bauer-Kahan) adds a Social Media Warning Law to California's Health and Safety Code that forces covered social platforms to display prominent “black box” warnings about mental-health risks to users each day they access the service and again after extended use. The statute prescribes formatting, minimum display durations, and a short, state-approved warning text; it exempts users reasonably determined to be over 17 and bars a private right of action.

This is a regulatory nudge aimed at reducing youth exposure to potentially harmful social-media features by inserting repeated, hard-to-ignore warnings into the user experience. Practically, the bill creates immediate compliance requirements for platforms that meet the statute’s definition of a covered (addictive) service and raises operational questions about age verification, cross-device usage tracking, and enforcement mechanisms starting January 1, 2027.

At a Glance

What It Does

The bill requires covered platforms to display a black-box warning when a user first accesses the platform each calendar day and again after three hours of cumulative active use, then at least once per hour of cumulative active use thereafter. The initial daily warning must occupy at least 25% of the screen for at least 10 seconds and may be dismissed; subsequent warnings must occupy at least 75% of the screen for at least 30 seconds and cannot be bypassed.

Who It Affects

The rule applies to platforms defined as an 'addictive internet-based service or application' under state law, excluding services whose primary function is commerce, cloud storage, email, one-to-one direct messaging without public dissemination, or internal organizational tools. Major social networks, user-facing apps with engagement-driven feeds, and operators that employ algorithmic content recommendation will be directly affected.

Why It Matters

The measure imports the black-box warning model used for other public-health risks into the platform environment and forces platforms to change their user interfaces and, potentially, their age-verification practices. For compliance officers, the bill imposes precise UX and timing specifications; for product teams, it introduces operational challenges around measuring 'cumulative active use' and managing cross-device sessions.


What This Bill Actually Does

AB56 inserts a new Chapter 25, the Social Media Warning Law, into California’s Health and Safety Code. It ties the definition of covered platform to the existing statutory concept of an 'addictive internet-based service or application' and then carves out ordinary enterprise and utility services — for example, e-commerce sites, cloud storage, email, single-recipient direct messaging, and internal collaboration tools — from coverage.

If your product is a public-facing, engagement-driven social platform that fits the state’s 'addictive' definition, the law applies.

On any calendar day a covered-platform user accesses the service, the platform must show a black-box warning the first time the user opens the platform that day. That initial warning must be visible for at least 10 seconds, occupy at least 25 percent of the screen, and may be dismissed by clicking a conspicuous “X.” Separately, once a user hits three hours of cumulative active use, the platform must display a second-tier warning that lasts at least 30 seconds, occupies at least 75 percent of the screen, and cannot be bypassed; the statute then requires at least one such 30-second, non-bypass display for every additional hour of cumulative active use.

The bill prescribes the exact text and basic formatting of the notice (black text on white background) — the notice cites the Surgeon General and warns that social media is associated with significant mental health harms and has not been proven safe for young users.
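To make that cadence concrete, here is a minimal sketch of the tier-decision logic in TypeScript. The statute supplies only the thresholds; the type names, the hour granularity, and the running tally of extended-use warnings are our assumptions, not the bill's.

```typescript
// Minimal sketch of the warning-tier decision. The thresholds come from the
// bill; everything else (names, tally, hour granularity) is assumed.

type WarningTier = "none" | "daily" | "extendedUse";

interface UsageState {
  firstAccessToday: boolean;      // first open of the platform this calendar day
  cumulativeActiveHours: number;  // however the operator measures "active use"
  extendedWarningsShown: number;  // non-bypass warnings already displayed
}

function warningTier(state: UsageState): WarningTier {
  // Tier 1: dismissible notice on the first access of the day
  // (at least 25% of the screen, at least 10 seconds).
  if (state.firstAccessToday) return "daily";

  // Tier 2: one non-bypass warning at three hours of cumulative active use,
  // plus one per additional full hour (at least 75% of the screen, 30 seconds).
  const owed =
    state.cumulativeActiveHours >= 3
      ? 1 + Math.floor(state.cumulativeActiveHours - 3)
      : 0;
  return owed > state.extendedWarningsShown ? "extendedUse" : "none";
}
```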

Platforms do not have to display either warning if they have 'reasonably determined' the user is over 17. The statute also states that providing the notice, or a user’s affirmative dismissal of that notice, does not create a waiver of other claims except claims premised on violating this section, and it explicitly bars a private right of action under the chapter.

The chapter is severable and becomes operative January 1, 2027.

Several operational details are left to implementers: the statute refers to 'cumulative active use' but does not define how to count activity across devices or sessions, and it does not prescribe methods for the 'reasonable determination' that a user is over 17. That leaves platform operators to choose how to measure active time and how to handle age assessment while balancing privacy, accuracy, and regulatory risk.
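One defensible convention for the measurement gap, sketched below, treats 'cumulative active use' as the union of timestamped activity intervals reported by each device, so overlapping sessions on two devices are not double-counted. This is purely an implementation choice; nothing in the statute requires it.

```typescript
// Hypothetical server-side aggregation of per-device activity reports.
// Summing the union of intervals avoids double-counting overlapping sessions.

interface ActiveInterval {
  startMs: number; // epoch milliseconds
  endMs: number;
}

function cumulativeActiveMs(intervals: ActiveInterval[]): number {
  const sorted = [...intervals].sort((a, b) => a.startMs - b.startMs);
  let total = 0;
  let curStart = Number.NEGATIVE_INFINITY;
  let curEnd = Number.NEGATIVE_INFINITY;
  for (const iv of sorted) {
    if (iv.startMs > curEnd) {
      // Disjoint from the current run: bank it and start a new one.
      if (Number.isFinite(curEnd)) total += curEnd - curStart;
      curStart = iv.startMs;
      curEnd = iv.endMs;
    } else {
      // Overlapping or adjacent: extend the current run.
      curEnd = Math.max(curEnd, iv.endMs);
    }
  }
  if (Number.isFinite(curEnd)) total += curEnd - curStart;
  return total;
}
```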

The Five Things You Need to Know

1

Initial daily warning: the first access each calendar day must show a black-box notice for at least 10 seconds, occupying at least 25% of the screen; the user may dismiss it by clicking a conspicuous “X.”

2

Extended-use warning: after three hours of cumulative active use the platform must show a 30-second, non-bypass warning that occupies at least 75% of the screen, and must show at least one such non-bypass warning per additional hour of cumulative active use.

3

Exact message and format: the law requires the text in black on white: “The Surgeon General has warned that while social media may have benefits for some young users, social media is associated with significant mental health harms and has not been proven safe for young users.”

4

Age exemption: a platform is not required to show either warning if it has 'reasonably determined' the user is over 17 — the statute does not define what methods satisfy that standard.

5

Enforcement and timing: the chapter bars a private right of action, is declared severable, and becomes operative January 1, 2027.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 28001

Covered platform definition and carve-outs

This section makes the statute apply to the same class of services California already labels 'addictive internet-based service or application' (a reference to Section 27000.5). It then lists six explicit exclusions: platforms whose primary function is selling goods/services, cloud storage, email, one-to-one direct messaging without public dissemination, communications internal to an organization, and internal collaboration tools not offered to the public. Practically, operators must map their product to that definition to determine coverage; borderline products that combine commerce with public feeds will need careful legal analysis.

Section 28002(a)(1)

Daily initial warning: timing, size, and dismissal

Requires a prominent warning the first time a user accesses a covered platform each calendar day. The warning must be displayed clearly and continuously for at least 10 seconds and occupy at least 25% of the screen, but the user may dismiss it by clicking a conspicuous 'X.' A platform can avoid showing the notice only if it reasonably determines the user is older than 17. This provision imposes a strict UX specification that product and design teams will need to implement across apps, mobile browsers, and desktop.
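Because the parameters are few and fixed, one practical approach is to encode them once and check against them on every surface. A hypothetical single source of truth (the names are ours, not the statute's):

```typescript
// Hypothetical constants encoding Section 28002(a)(1); the names are ours.
const DAILY_WARNING_SPEC = {
  minScreenCoverage: 0.25, // at least 25% of the screen
  minDisplaySeconds: 10,   // displayed clearly and continuously
  dismissible: true,       // a conspicuous "X" lets the user close it
} as const;

// A rendering surface can check its planned layout against the statutory floor.
function meetsDailySpec(coverage: number, visibleSeconds: number): boolean {
  return (
    coverage >= DAILY_WARNING_SPEC.minScreenCoverage &&
    visibleSeconds >= DAILY_WARNING_SPEC.minDisplaySeconds
  );
}
```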

Section 28002(a)(2)

Cumulative-active-use warnings after three hours and hourly thereafter

Mandates a more intrusive warning after three hours of cumulative active use and requires at least one such 30-second, non-bypass display for every hour of additional cumulative active use. Those displays must occupy at least 75% of the screen and cannot be bypassed or clicked through. Because the statute counts 'cumulative active use,' platforms are responsible for tracking user activity across sessions and possibly across devices to determine when the threshold is met.
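The cadence arithmetic is simple but worth pinning down, since an off-by-one reading changes when the hourly warnings fire. Worked values under the same reading as the sketch earlier:

```typescript
// Warnings owed under one reading of Section 28002(a)(2): none before three
// hours, then one at three hours plus one per additional full hour.
const warningsOwed = (hours: number): number =>
  hours >= 3 ? 1 + Math.floor(hours - 3) : 0;

warningsOwed(2.5); // 0 (below the three-hour threshold)
warningsOwed(3.0); // 1 (the first non-bypass warning)
warningsOwed(5.0); // 3 (the 3h warning, plus one each at 4h and 5h)
```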

Section 28002(b)

Required text and basic formatting

Specifies the exact black-box wording — black text on a white background referencing the Surgeon General’s warning that social media is associated with significant mental health harms and has not been proven safe for young users. The law fixes both content and minimal legibility standards, which limits platforms’ ability to substitute alternative wording or interactive educational content in place of the mandated phrasing.
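Because the wording is fixed verbatim, a cautious implementation treats it as an immutable constant rather than copy that localization or marketing pipelines can rewrite. A sketch, with the black-on-white minimum expressed as an illustrative style object:

```typescript
// The statutory wording, fixed by Section 28002(b). Do not paraphrase,
// template, or localize this string.
const WARNING_TEXT =
  "The Surgeon General has warned that while social media may have benefits " +
  "for some young users, social media is associated with significant mental " +
  "health harms and has not been proven safe for young users.";

// Black text on a white background is the statute's formatting minimum.
const WARNING_STYLE = { color: "#000000", backgroundColor: "#ffffff" } as const;
```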

Sections 28002(c)–(f)

Legal effects, no private right of action, severability, and operative date

States that providing the notice or a user dismissing it does not waive other claims (except claims premised on violating this section), expressly prohibits creating a private right of action under this chapter, declares the provisions severable, and sets an operative date of January 1, 2027. The absence of an express private enforcement mechanism means compliance and enforcement will rest with public authorities or administrative remedies, not private plaintiffs.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Parents and guardians — receive a repeated, standardized prompt they can use to open conversations with teens about platform risks and to monitor time spent on apps.
  • Public-health advocates and clinicians — gain a uniform, state-mandated prompt that supports outreach and education efforts about social-media–related harms.
  • Regulators and policymakers — obtain a visible compliance lever that signals state action on youth mental health without immediately altering platform mechanics such as recommendation algorithms.
  • Researchers and evaluators — gain a predictable intervention point (fixed timing and wording) whose effectiveness at reducing time spent or changing behavior can be studied.

Who Bears the Cost

  • Covered platforms and operators — must redesign UX to meet size/duration rules, implement cross-session and cross-device 'cumulative active use' tracking, and build or enhance age-assessment methods, all of which carry engineering and product costs and potential revenue impacts.
  • Smaller platform operators — any app that meets the 'addictive' definition faces the same compliance complexity, which can be disproportionately costly relative to a small operator's resources.
  • Users' privacy and data-minimization goals — platforms may collect additional signals or require age-verification steps to 'reasonably determine' age, creating privacy trade-offs and potential regulatory exposure under privacy laws.
  • State enforcement agencies — because the chapter bars private suits, enforcement will fall to public bodies, which implies resource needs for monitoring, investigations, and potential rulemaking or guidance that the statute does not fund.

Key Issues

The Core Tension

The bill pits a public-health goal (alerting and protecting young users through prominent, repeated warnings) against the countervailing costs of intrusive UX requirements and age-assessment burdens on platforms: measures that may create privacy trade-offs, implementation complexity, and user workarounds that blunt or merely shift, rather than reduce, harms.

Two practical gaps are likely to drive implementation disputes. First, 'cumulative active use' is central to when the intrusive 30-second warnings trigger, but the bill does not define 'active' or explain how to aggregate time across devices, logged-out sessions, or concurrent tabs.

Platforms will need to pick technical conventions (e.g., session heartbeats, foreground focus) that meet the statutory intent but may be second-guessed by regulators. Second, the 'reasonably determined' age exemption gives platforms wide discretion but no safe-harbor methods; doing nothing risks over-warning minors, while aggressive age checks create privacy and onboarding friction—and potential conflicts with laws restricting collection of minors’ data.
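A minimal sketch of one such convention for the first gap: count time only while the page has foreground visibility and report fixed-size heartbeats for server-side aggregation. Everything here (the event wiring, the 15-second cadence) is an assumption, not statutory guidance.

```typescript
// Hypothetical client-side activity heartbeat: count time only while the
// page is visible, and report fixed-size intervals for server aggregation.
const HEARTBEAT_MS = 15_000; // illustrative cadence, not from the statute

function startActivityHeartbeat(
  report: (interval: { startMs: number; endMs: number }) => void
): void {
  let timer: ReturnType<typeof setInterval> | undefined;

  const start = () => {
    if (timer !== undefined) return;
    timer = setInterval(() => {
      const now = Date.now();
      report({ startMs: now - HEARTBEAT_MS, endMs: now });
    }, HEARTBEAT_MS);
  };

  const stop = () => {
    if (timer !== undefined) {
      clearInterval(timer);
      timer = undefined;
    }
  };

  // Foreground focus as the proxy for "active": pause when hidden.
  document.addEventListener("visibilitychange", () =>
    document.visibilityState === "visible" ? start() : stop()
  );
  if (document.visibilityState === "visible") start();
}
```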

Enforcement is another ambiguity. The law deliberately bars a private right of action, which reduces immediate litigation risk for platforms but places the burden of monitoring and enforcement on public actors.

The statute does not specify penalties, notification processes, or administrative remedies, so agencies will have latitude in their enforcement approach—yet that latitude can create unpredictability for compliance planning. Finally, the statute fixes the wording and minimal display parameters but does not require accompanying educational resources, nor does it provide for evaluating whether repeated, forcibly displayed warnings actually change behavior; mandatory large-format interruptions may reduce engagement, but they could also drive users to circumvention tactics (multiple accounts, off-platform messaging), with unclear net effects on youth well-being.
