The bill amends section 230 of the Communications Act to remove immunity for large social media platforms that intentionally or knowingly host “false election administration information” — defined as objectively incorrect, publicly accessible content about the time, place, manner, or voter eligibility for federal elections. It excludes political advocacy directed at candidates, officeholders, or parties.
The bill imposes a notice-and-takedown regime: platforms that receive a written complaint must assess the content's accuracy and, if it is objectively incorrect, remove it within 48 hours (24 hours on an election day). The Attorney General, state officials, and aggrieved federal candidates can sue for injunctive relief and statutory damages of $50,000 per offending item.
A safe harbor preserves immunity if the platform removes the content within the specified windows.
At a Glance
What It Does
The bill creates a targeted exception to §230 for large platforms that intentionally or knowingly host objectively incorrect information about election administration, and it mandates a written notice process that triggers short removal deadlines (48 hours ordinarily; 24 hours on an election day). Enforcement is civil, with statutory damages and injunctive relief.
Who It Affects
Interactive online services meeting the bill’s 25 million unique U.S. monthly user threshold, state election officials, federal candidates, and platforms’ content-moderation operations. Smaller services under the threshold are not covered.
Why It Matters
This is one of the first federal bills to tie Section 230 liability specifically to operational election misinformation, coupling tight removal deadlines with monetary penalties and shifting legal and operational risk onto large platforms during critical pre-election and post-election periods.
What This Bill Actually Does
The bill targets a narrow class of online content: objectively false statements about how elections are run — for example, incorrect claims about when polls open, who is eligible to vote, how and where to submit ballots, or false statements about penalties for voting. It does not sweep in opinionated political advocacy about candidates, officeholders, or parties.
The statutory scope uses the Federal Election Campaign Act's definition of covered elections, so the focus is on federal contests, though the definitions reach broadly into the administration of those elections.
A platform that qualifies as a "social media platform" under the bill (the text cross‑references an existing statutory definition and adds a 25 million monthly U.S. user threshold) loses Section 230 protection for that content when the operator "intentionally or knowingly" hosts it. Practically, the bill couples that loss of immunity to a concrete notice-and-takedown process: someone submits a written complaint with contact information and a locatable description; the operator must determine whether the content is objectively incorrect and, if so, remove it within 48 hours (or 24 hours on an election day), then notify the complainant within 12 hours of removal. If a platform fails to meet those obligations, federal and state officials, along with federal candidates who give notice to the relevant state chief election official, can sue in federal court for injunctive relief and statutory damages set at $50,000 per item of nonremoved false information.
The bill also creates a safe harbor: if the operator removes the content within the stated timelines, whether in response to a complaint or after discovering it on its own, the §230 exception will not apply. The act takes effect for alleged false election-administration information posted on or after enactment.
The Five Things You Need to Know
The bill amends 47 U.S.C. §230 to deprive qualifying platforms of immunity when they intentionally or knowingly host “false election administration information.”
A covered "social media platform" must have had at least 25,000,000 unique monthly U.S. users for a majority of months in the most recent 12‑month period.
Notice-and-takedown timelines are strict: platforms must remove objectively incorrect election‑administration content within 48 hours after notice (24 hours on an election day) and notify the complainant within 12 hours of removal.
Enforcement is civil: the U.S. Attorney General, a State attorney general or secretary of state, and aggrieved federal candidates can sue for injunctive relief and statutory damages of $50,000 per item not removed.
The safe harbor preserves §230 immunity only if the operator removes the challenged content within the statutory windows after notice, or within 48/24 hours after becoming aware of it independently.
Section-by-Section Breakdown
Creates a §230 exception for knowingly hosting false election‑administration content
This provision inserts an explicit exception into §230(c)(1): the general bar on treating providers as speakers or publishers does not apply where a social media operator intentionally or knowingly hosts false election administration information. The language ties the exception to the operator’s state of mind — "intentionally or knowingly" — which narrows the liability trigger compared with a strict‑liability regime but still exposes operators to civil actions where a court finds that mens rea. Practically, the provision brings what has been largely a policy debate about platform moderation into statutory liability.
Defines the key terms: ‘false election administration information’ and ‘social media platform’
The bill defines false election administration information as objectively incorrect, publicly accessible information about the time, place, manner of holding an election or voter qualifications, including registration status and penalties. It expressly excludes political advocacy about candidates, officeholders, or parties. The social media platform definition borrows a statutory cross‑reference and grafts on a 25 million U.S. monthly user eligibility threshold, which limits application to the largest services.
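To make the eligibility arithmetic concrete, here is a minimal sketch of the size test as described above, assuming a trailing twelve months of unique-user counts is available; the function name and data shape are illustrative, not drawn from the bill's text.

```python
from typing import Sequence

# Threshold described in the bill: 25 million unique monthly U.S. users.
COVERAGE_THRESHOLD = 25_000_000

def is_covered_platform(monthly_us_users: Sequence[int]) -> bool:
    """Return True if the service met the threshold for a majority of
    months in the most recent 12-month period (i.e., 7 or more of 12)."""
    if len(monthly_us_users) != 12:
        raise ValueError("expected the most recent 12 months of counts")
    months_over = sum(1 for n in monthly_us_users if n >= COVERAGE_THRESHOLD)
    return months_over > 6
```

On this reading, a service hovering near the threshold could drift in and out of coverage from year to year, which is one reason the definitional mechanics matter operationally.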
Requires written complaints and fast on‑platform review and takedown
Section 3 establishes the operational mechanics: complaints must be written, include contact details, and describe the content sufficiently for the operator to locate it. Once notified, the operator must determine whether the content is objectively incorrect and, if so, remove it within 48 hours (24 hours on an election day). The operator must also send the complainant a written notice within 12 hours after removal. These timelines compress typical moderation workflows and force rapid factual adjudication.
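To illustrate how compressed these windows are, a minimal sketch of the deadline arithmetic follows; the constants mirror the timelines described above, while the function names and datetime handling are assumptions for illustration only.

```python
from datetime import datetime, timedelta

REMOVAL_WINDOW = timedelta(hours=48)             # standard window after written notice
ELECTION_DAY_WINDOW = timedelta(hours=24)        # shortened window on an election day
NOTIFY_COMPLAINANT_WINDOW = timedelta(hours=12)  # written notice after removal

def removal_deadline(notice_received: datetime, election_day: bool) -> datetime:
    """Latest time to remove content judged objectively incorrect."""
    window = ELECTION_DAY_WINDOW if election_day else REMOVAL_WINDOW
    return notice_received + window

def complainant_notice_deadline(removed_at: datetime) -> datetime:
    """Latest time to send the complainant written notice of removal."""
    return removed_at + NOTIFY_COMPLAINANT_WINDOW
```

Even in this toy form, the regime implies a complaint-intake system that timestamps every notice, routes it to reviewers with jurisdiction-specific sources, and tracks two separate clocks per item.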
Civil enforcement by federal and state officials and a private right for candidates with statutory damages
Enforcement channels are civil litigation: the Attorney General, a State attorney general or secretary of state where the covered election is held, and aggrieved federal candidates (after notifying the state chief election official) may sue. Remedies include injunctive relief and damages set at $50,000 per item of nonremoved false information. The damages figure and per‑item framing create a strong financial deterrent and a likely litigation incentive for plaintiffs.
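The per-item framing makes aggregate exposure easy to estimate. A back-of-the-envelope sketch using the bill's $50,000 figure (the function itself is hypothetical):

```python
STATUTORY_DAMAGES_PER_ITEM = 50_000  # dollars per item of nonremoved false information

def statutory_exposure(items_not_removed: int) -> int:
    """Aggregate statutory damages across all nonremoved items in one action."""
    return items_not_removed * STATUTORY_DAMAGES_PER_ITEM

# Example: a coordinated campaign that leaves 200 reposts up past the
# deadline would imply 200 * $50,000 = $10,000,000 in statutory damages,
# before injunctive relief or litigation costs.
print(statutory_exposure(200))  # 10000000
```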
Preserves §230 immunity if an operator removes content within the statutory windows
This subsection reconciles the §230 exception with platform incentives: if the operator removes the challenged content within the 48/24‑hour windows after notice — or within the same windows after learning of it independently — the §230 exception does not apply. The safe harbor therefore channels behavior toward speedy removal to retain immunity, rather than encouraging platforms to litigate first.
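Read together with the takedown timelines, the safe-harbor test reduces to a simple comparison, sketched below; whether the clock starts at a written complaint or at the operator's own discovery, removal inside the window preserves immunity. Names and types here are illustrative assumptions, not statutory language.

```python
from datetime import datetime, timedelta
from typing import Optional

def retains_immunity(trigger_time: datetime,
                     removed_at: Optional[datetime],
                     election_day: bool) -> bool:
    """True if content was removed within the statutory window, measured
    from either receipt of a written complaint or the operator's own
    discovery of the content (whichever event starts the clock)."""
    if removed_at is None:
        return False  # still hosted: the section 230 exception can apply
    window = timedelta(hours=24 if election_day else 48)
    return removed_at - trigger_time <= window
```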
Applies to alleged conduct on or after enactment
The effective date provision makes the act prospective: only allegations about content hosted on or after enactment fall within the new rules. That confines liability risk to forward‑looking conduct and informs platform compliance planning tied to implementation timelines.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- State election officials and secretaries of state — they get a statutory pathway to force removal of demonstrably false administrative instructions that could suppress turnout or cause voter confusion.
- Federal candidates — the bill gives candidates a private right to sue (after notice to state election officials), offering a direct remedy for content that undermines the administration of elections they contest.
- Voters seeking accurate voting information — the statutory focus on objectively incorrect administrative details aims to reduce the circulation of actionable false instructions (for example, wrong polling locations or bogus rules about absentee ballots).
- The Department of Justice and state attorneys general — the bill creates clear standing and statutory damages that make enforcement a practicable avenue to deter platform inaction.
Who Bears the Cost
- Large social media platforms meeting the 25 million monthly U.S. user threshold — they face expedited review obligations, potential multi‑million dollar damages exposure per incident, and the operational costs of expanding moderation and legal teams.
- Platform content‑moderation staff and third‑party contractors — compressed timelines and the need to assess “objective” accuracy of complex election‑administration claims will increase workloads and reliance on legal and local election expertise.
- Users and publishers whose content is removed — swift takedowns and the strong financial penalties for noncompliance raise the risk of over‑removal and disputes over whether content was merely disputed or objectively incorrect.
- Platform insurers and legal budgets — the statutory per‑item damages metric and private suits by candidates create new insurance and litigation costs that platforms will bear directly and may ultimately build into their business models.
Key Issues
The Core Tension
The core dilemma is straightforward: the bill seeks to prevent concrete, operational lies that can suppress or misdirect voters by making platforms legally accountable. But doing so requires rapid factual adjudication and hands private actors potent financial tools, a combination that risks over‑removal, inconsistent application across jurisdictions, and heavy operational burdens on platforms, all while leaving open how courts will interpret “objectively incorrect” and the operator's mental state.
The bill rests on several operational and legal fault lines. First, the “objectively incorrect” standard requires platforms to make factual judgments about election‑administration claims that are often rooted in state law, local procedures, or evolving guidance. Platforms will need rapid access to authoritative, jurisdiction‑specific sources to comply, and disagreements over competing official guidance could themselves produce litigation.
Second, the mens rea phrase "intentionally or knowingly" narrows the §230 exception, but enforcement is tied to a failure to remove after notice rather than requiring plaintiffs to prove mens rea in every case, so courts will need to parse whether failure to remove evidences knowledge or intent.
Third, the $50,000 per‑item damages metric incentivizes plaintiffs to identify discrete “items” (individual posts, threads, or shares) and could expose platforms to enormous aggregate liability from coordinated misinformation campaigns or mass reposting. The safe harbor reduces that exposure but forces a practical choice: remove quickly to secure immunity, or litigate and risk massive damages.
Finally, the exclusion for political speech about candidates and parties leaves a sizable grey area at the intersection of operational election claims and politically framed allegations (for example, a claim that a candidate’s campaign told voters to use an incorrect absentee process). The bill does not prescribe an administrative review process or standards for evidentiary proof of falsity, which pushes those hard questions into litigation and real‑time content‑moderation operations.