Codify — Article

STOP HATE Act (H.R.5681): Mandatory TOS disclosures and reporting for major social platforms

Requires large social media companies to publish platform terms linked to designated terrorist actors and to submit detailed triannual enforcement data to the Attorney General, with civil penalties for noncompliance.

The Brief

The STOP HATE Act of 2025 obliges large social media companies to publish the specific terms of service that apply to foreign terrorist organizations and to individuals or entities designated under Executive Order 13224, and to disclose how those terms are enforced. It creates a triannual reporting regime that requires platforms to submit precise metrics to the Attorney General on flagged content, actions taken, appeals, reach (views and shares), and how content was flagged and actioned, and it requires DOJ to post the reports in a searchable public repository.

This bill targets major platforms (those with at least 25 million unique U.S. monthly users) and ties noncompliance to steep civil penalties — up to $5 million per violation per day. It also forces interagency analysis: the Director of National Intelligence must produce an intelligence estimate on how designated terrorist actors use platforms, and the Comptroller General must report on implementation.

The bill sunsets after five years. For compliance officers, platform operators, and policy teams, it changes what transparency looks like and creates new operational and legal exposures tied to content-moderation practices.

At a Glance

What It Does

The bill requires covered social media companies to publish platform-specific terms of service related to designated terrorist actors and to provide detailed, disaggregated reports to the Attorney General on enforcement actions, flags, appeals, and content reach. The Attorney General must make reports publicly searchable.

Who It Affects

Social media companies with at least 25 million unique U.S. monthly users, their moderation contractors and AI tooling, DOJ and intelligence community analysts, civil-society monitors, and researchers who rely on platform transparency to assess terrorist activity online.

Why It Matters

It creates a statutory transparency regime tied to counterterrorism that standardizes what moderation data platforms must produce and exposes companies to large per-day civil penalties for omissions or misrepresentations. The bill forces platforms to operationalize new reporting pipelines and changes the evidentiary base for oversight and research.


What This Bill Actually Does

The STOP HATE Act compels major platforms to make explicit which portions of their terms of service apply to organizations designated as foreign terrorist organizations under the Immigration and Nationality Act and to individuals or entities sanctioned under Executive Order 13224. Platforms must publish those policies (or state they lack them) and provide accessible contact and flagging information for users, including a description of how to report content or accounts and the platform’s stated response and resolution timelines.

Beyond publication, the Act establishes a recurring reporting duty to the Attorney General. Covered platforms must submit, starting within a year of enactment, triannual reports that include the active version of the platform’s applicable terms and a detailed set of enforcement metrics: counts of flagged items, counts of actioned items (removed, demonetized, deprioritized), actions taken against accounts, views and shares of actioned content, appeal volumes and reversal rates, and contextual disaggregation (content category, type, media, how flagged, and how actioned).
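
To make the reporting duty concrete, here is a minimal sketch of how a compliance team might model one reporting-period record. The bill specifies the metrics and disaggregation axes but not a schema or file format, so every field name below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class EnforcementRecord:
    # One disaggregated row of a hypothetical triannual report.
    content_category: str         # platform's terrorism-related policy category
    content_type: str             # post, comment, message, profile, or group
    media_type: str               # text, image, or video
    flagged_by: str               # employees, AI, community moderators, civil society, or users
    actioned_by: str              # employees, AI, community moderators, civil society, or users
    action: str                   # removed, demonetized, deprioritized, or account action
    flagged_count: int = 0
    actioned_count: int = 0
    views_before_action: int = 0
    shares_before_action: int = 0
    appeals_filed: int = 0
    appeals_reversed: int = 0

@dataclass
class TriannualReport:
    # Hypothetical container for one filing to the Attorney General.
    terms_of_service_version: str
    period_start: str
    period_end: str
    records: list[EnforcementRecord] = field(default_factory=list)
```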

Each report must analyze changes since the prior filing and identify trends. The Department of Justice must publish submitted reports in a searchable repository, making the enforcement data available to the public. The bill also brings in outside reviewers: the Director of National Intelligence must deliver a National Intelligence Estimate on how the identified actors use platforms, and the Government Accountability Office must review implementation on a roughly 18-month cadence.

The Act includes a five-year sunset for its authority and preserves First Amendment protections as well as existing confidentiality and privacy obligations, including those under the Privacy Act of 1974.

Operationally, the bill sets a user-threshold definition for covered platforms (25 million unique U.S. monthly users for a majority of months in the prior year) and defines key terms such as “actioned” and “content.” Noncompliance triggers an aggressive enforcement tool: the Attorney General may sue for civil penalties capped at $5 million per violation per day for failing to publish terms, failing to file timely reports, or materially omitting or misrepresenting required information. The combination of public reporting, oversight studies, and significant financial exposure seeks to make moderation practice both more observable and more accountable.
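
For the coverage threshold, a toy check of the “majority of months” test might look like the sketch below; it assumes, for illustration only, that a majority of the trailing 12 months means more than six, and the user counts are invented.

```python
# Hypothetical monthly counts of unique U.S. users over the trailing 12 months.
monthly_us_users = [
    26_000_000, 24_500_000, 27_100_000, 25_300_000, 25_000_000, 26_800_000,
    23_900_000, 25_600_000, 27_400_000, 24_900_000, 26_200_000, 25_100_000,
]

THRESHOLD = 25_000_000  # "at least 25 million unique U.S. monthly users"

# Covered if the threshold is met in a majority of the most recent 12 months,
# read here as more than 6 of 12 (an assumption for illustration).
months_over = sum(1 for users in monthly_us_users if users >= THRESHOLD)
is_covered = months_over > 6

print(f"{months_over} of 12 months at or above the threshold; covered: {is_covered}")
```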

The Five Things You Need to Know

1

Covered platforms are those under FTC jurisdiction that have at least 25,000,000 unique U.S. monthly users for a majority of months in the most recent 12‑month period.

2

Platforms must publish platform-specific terms of service (or state they lack them) related to foreign terrorist organizations (FTOs) and specially designated global terrorists (SDGTs) within 180 days of enactment, along with contact and flagging procedures and promised response timelines.

3

Platforms must submit the first comprehensive report to the Attorney General within 360 days, then file triannual reports (no later than January 31, April 30, and October 31 each year) with disaggregated metrics and trend analysis.

4

The Attorney General may seek civil penalties up to $5,000,000 per violation per day for failures to publish terms, late filing, or materially omitting or misrepresenting required information.

5

The Act requires the Director of National Intelligence to produce a National Intelligence Estimate on how designated actors use platforms and mandates periodic Comptroller General reviews; the authority sunsets five years after enactment.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1

Short title

Provides the Act’s short title as the "Stopping Terrorists Online Presence and Holding Accountable Tech Entities Act of 2025" (STOP HATE Act of 2025). This is a formal naming provision that signals the bill’s counterterrorism and platform-accountability focus but carries no operational mandates.

Section 2(a)

Publication of terms of service and user-facing procedures

Requires each covered social media company to publish, within 180 days, the terms of service that apply to foreign terrorist organizations and SDGTs for each platform it owns or operates. Companies must also publish contact details for inquiries, a plain-language description of how users flag content or accounts, the company’s response and resolution commitments, and a menu of actions the company may take (removal, demonetization, deprioritization, bans, etc.). Practically, platforms must map internal moderation rules to the specified designations and make that mapping user-facing and searchable.

Section 2(b)

Triannual reporting to the Attorney General with disaggregated enforcement data

Establishes a recurring reporting schedule: the first report is due within 360 days, then reports are due triannually on set calendar dates. Each report must include the active terms version and granular enforcement metrics: counts of flagged items, counts of actioned items and account actions, removals/demonetizations/deprioritizations, view and share metrics prior to action, appeal volumes and reversal rates, and an analysis of trends since the previous report. Importantly, the statute requires disaggregation by content category, content type (post, comment, message, profile, group), media type (text, image, video), how the content was flagged (employees, AI, community moderators, civil society, or users), and how it was actioned (employees, AI, community moderators, civil society partners, or users).
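
A minimal sketch of that disaggregation step, assuming internal moderation logs already carry the needed attributes (the log entries and field names below are invented for illustration, not taken from the bill):

```python
from collections import Counter

# Hypothetical moderation-log entries; a real pipeline would read these
# from internal data stores rather than a literal list.
moderation_log = [
    {"category": "terrorism", "content_type": "post", "media": "video",
     "flagged_by": "AI", "actioned_by": "employee", "action": "removed"},
    {"category": "terrorism", "content_type": "comment", "media": "text",
     "flagged_by": "user", "actioned_by": "AI", "action": "deprioritized"},
]

# Count actioned items along the axes the bill prescribes: content category,
# content type, media type, how the content was flagged, and how it was actioned.
disaggregated = Counter(
    (e["category"], e["content_type"], e["media"], e["flagged_by"], e["actioned_by"])
    for e in moderation_log
)

for key, count in disaggregated.items():
    print(key, count)
```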

Section 2(c)

Civil enforcement: penalties for noncompliance, omissions, or delays

Confers authority on the Attorney General to bring civil actions against companies that fail to post required terms, fail to timely submit reports, or materially omit or misrepresent required information. Penalties may reach $5,000,000 for each violation per day. The provision converts procedural reporting duties into enforceable obligations with high-dollar exposure, meaning compliance failures can rapidly become significant financial and reputational liabilities.

Section 2(d)-(e)

Intelligence and oversight reporting; sunset

Directs the Director of National Intelligence to produce a National Intelligence Estimate on platform use by the identified actors within 360 days and requires the DNI to publish an unclassified version. It also tasks the Comptroller General with reporting on implementation roughly every 540 days. The section places a five-year sunset on the authority created, meaning reporting and enforcement obligations automatically expire unless renewed by statute.

Section 2(f)-(g)

Definitions and rule of construction

Defines critical terms used throughout the Act—“actioned,” “content,” “social media platform,” “social media company,” and “terms of service”—and sets the coverage threshold (FTC jurisdiction plus 25 million U.S. monthly users). It also clarifies that the Act is not intended to infringe First Amendment rights and requires compliance with federal, state, and local confidentiality and privacy laws when publishing reports, flagging the intersection with the Privacy Act of 1974.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Department of Justice and law enforcement — gains structured, machine-readable reporting and public datasets that can improve detection, investigations, and measurement of platform enforcement against designated terrorist actors.
  • Researchers and civil-society monitors — receive standardized, disaggregated, and publicly available enforcement data that supports independent analysis of platform moderation effectiveness and trends.
  • Advertisers and brand-safety teams — get greater visibility into how platforms handle content tied to terrorist designations, helping them assess exposure and make placement decisions.
  • Platform users concerned about terrorist content — gain clearer flagging pathways, published response commitments, and public accountability when platforms fail to act.

Who Bears the Cost

  • Large social media companies meeting the 25 million U.S. monthly user threshold — must build or expand compliance, data‑collection, and reporting pipelines, absorb operational costs, and face significant financial exposure for violations.
  • Platform moderation vendors and AI-tooling suppliers — will need to produce audit trails and integrate with platform reporting systems, increasing contractual and technical obligations.
  • Department of Justice and the intelligence community — must ingest, store, and analyze large volumes of operational data and respond to public requests, imposing staffing and technical burdens.
  • Users and privacy advocates — may bear indirect costs if platforms over-collect or retain user metadata to satisfy reporting requirements, increasing privacy risk and administrative complexity for redaction and legal compliance.

Key Issues

The Core Tension

The central dilemma is whether mandating public, standardized transparency will improve accountability and reduce terrorist harm online. The same mandate imposes heavy operational burdens, privacy trade-offs, and legal exposure that may push platforms toward defensive behavior (over-removal, excessive aggregation, or withholding data), which in turn could undermine the goal of clear, comparable oversight.

The bill creates multiple tensions that will drive implementation decisions. First, it mandates public, granular enforcement data about potentially sensitive accounts and content while simultaneously requiring compliance with the Privacy Act and other confidentiality obligations; platforms and DOJ will have to reconcile transparency with the need to protect personal data, investigatory materials, and classified information in the DNI report.

Second, the statute treats reporting and accuracy as legally material — exposing firms to draconian per-day penalties for omissions or misstatements — which will incentivize conservative reporting (over-redaction, aggregation, or delayed publication) and could chill willingness to provide granular data.

A third practical challenge is data quality and comparability. The Act requires disaggregation by how content was flagged and actioned (e.g., AI vs. employee), but platforms use different internal taxonomies and AI thresholds.

Translating operational signals into the prescribed categories will require substantial definitional work, risk inconsistent cross-platform comparisons, and invite disputes over what constitutes a "material" misrepresentation. Finally, the Act links transparency to counterterrorism designations that shift over time; platforms will need live mapping between legal designations (FTO lists, SDGT lists) and internal moderation labels.

That dynamism increases compliance costs and litigation risk when a platform’s public policy choices lag or diverge from government lists.
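
As a sketch of that mapping problem (the designation entries and internal labels below are invented; real lists would be refreshed from official sources as designations change):

```python
# Hypothetical snapshot of government designation lists.
designated_entities = {"Example FTO A", "Example SDGT B", "Example SDGT C"}

# Hypothetical mapping from internal moderation policy labels to the
# designated entities each label is meant to cover.
internal_label_map = {
    "policy/terror-org-a": {"Example FTO A"},
    "policy/global-terrorist-b": {"Example SDGT B"},
}

# Designated entities with no internal label covering them are a compliance gap:
# the published terms and the enforcement reports would lag the current lists.
covered = set().union(*internal_label_map.values())
unmapped = designated_entities - covered

if unmapped:
    print("Designated entities without an internal policy label:", sorted(unmapped))
else:
    print("All designated entities are mapped to internal labels.")
```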
