CHAT Act (S.2714) requires age verification and parental protections for AI chatbots

Creates federal obligations for operators of companion AI chatbots: mandatory user accounts, verifiable age checks, parental accounts, suicidal-ideation monitoring, and FTC enforcement.

The Brief

The Children Harmed by AI Technology (CHAT) Act requires any entity that offers a "companion AI chatbot" in the United States to gate access behind user accounts, verify ages with a commercially available, reasonably accurate method, and classify users as minors or adults. For minors, the bill requires a verified parental account and verifiable parental consent, blocks sexually explicit chatbot output, mandates monitoring for suicidal ideation with notification to parents, and requires periodic on-screen disclosure that the interlocutor is an AI.

The bill matters because it creates a single federal compliance framework for a fast-growing class of consumer AI services and channels enforcement through the Federal Trade Commission (with parallel state authority). Implementation will reshape product design, vendor relationships (age-verification providers, identity services), data-retention practices, and liability exposure for developers and platforms that serve minors or run general-audience services minors can access.

At a Glance

What It Does

The bill requires covered operators to gate access behind user accounts, verify age on new and existing accounts using commercially available verification, and attach minor accounts to verified parental accounts with parental consent. It also mandates monitoring for suicidal ideation, parental notification of such incidents, blocking of sexually explicit communications with minors, and a visible popup, at the start of each interaction and every 60 minutes thereafter, reminding users they are chatting with an AI.

Who It Affects

Any person that owns, operates, or makes available a companion AI chatbot to individuals in the United States, plus third-party age-verification and identity vendors that operators will rely on. Compliance, legal, trust & safety, and product teams at consumer AI companies will be squarely responsible for implementing the requirements.

Why It Matters

The Act sets a federal baseline for how AI chatbots must treat minors and ties violations to the FTC’s unfair or deceptive practices authority while preserving state enforcement. That combination creates regulatory risk and a commercial market for age-verification services, and forces platforms to balance child safety, privacy, and product usability.

What This Bill Actually Does

The bill defines the regulated universe tightly around "companion AI chatbots"—software whose primary purpose is simulating interpersonal, emotional, or therapeutic interactions. Any operator making such a chatbot available in the U.S. must require user accounts for access.

For accounts that exist when the law takes effect, the operator must freeze them on the effective date and restore functionality only after a verifiable age check. New accounts must be verified at creation with a commercially available method that the bill says must be reasonably designed to ensure accuracy.
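
A minimal sketch of that transition, assuming a hypothetical account-status field (the names are illustrative, not drawn from the bill):

```typescript
// Hypothetical account states for the effective-date transition.
type AccountStatus = "frozen" | "active_adult" | "active_minor_pending_consent";

interface Account {
  id: string;
  status: AccountStatus;
}

// On the Act's effective date, every pre-existing account is frozen.
function freezeExistingAccounts(accounts: Account[]): void {
  for (const account of accounts) {
    account.status = "frozen";
  }
}

// Functionality is restored only after a verifiable age check classifies the
// user; minors remain gated until parental consent is obtained (Section 3(c)).
function restoreAfterVerification(account: Account, isMinor: boolean): void {
  account.status = isMinor ? "active_minor_pending_consent" : "active_adult";
}
```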

When verification identifies a user as a minor, the operator must link the minor’s account to a verified parental account and obtain verifiable parental consent before allowing access. The law also builds in specific safety controls: operators must monitor conversations for suicidal ideation and, if such ideation appears, immediately provide resources and notify the parental account.

Operators must block sexually explicit chatbot communications with minors. To reduce confusion, the operator must display a clear on-screen popup at the start of any interaction and at least once every 60 minutes stating that the user is talking to an AI.

The bill also limits how age-verification data can be handled: operators may collect, process, use, and store only the minimum information strictly necessary to verify age, secure parental consent, or maintain compliance records.
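
One way to read that data-minimization duty in code is to retain only the classification outcome and a thin audit trail while deliberately discarding the raw identity evidence. A sketch with illustrative field names:

```typescript
// Sketch of a minimal verification record under Section 3(d); the field
// names are illustrative, not statutory.
interface VerificationRecord {
  accountId: string;
  classification: "minor" | "adult";
  method: string;   // which commercially available method was used
  verifiedAt: Date; // when the check was performed
  // Deliberately absent: date of birth, ID numbers, document scans, selfies,
  // and raw vendor payloads; none of these are strictly necessary to keep.
}
```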

For enforcement, the FTC gets a central role: it must issue guidance within 180 days of enactment and will enforce the statute under its unfair-or-deceptive-acts authority; states may sue in parens patriae as well. The bill creates a safe harbor if a covered entity can show it relied in good faith on user-provided age information, complied with FTC guidance, and reasonably conformed to accepted industry standards or Commission-identified practices.

Operationally, the law imposes a one-year runway from enactment to the effective date.

That timeline, plus the requirement to freeze and reverify existing accounts on the effective date, forces companies to decide now how to integrate third-party verification, how long to retain verification artifacts, and how to design parental-account workflows that are robust against fraud while preserving minors’ privacy where appropriate.

The Five Things You Need to Know

1. Section 3(b)(1) requires covered entities to freeze all existing companion chatbot accounts on the Act’s effective date and restore them only after verifiable age information is provided and used to classify the user as minor or adult.

2. Section 3(c) mandates that minor accounts be attached to a separately verified parental account, that operators obtain verifiable parental consent before access, and that parents be immediately informed if the minor expresses suicidal ideation.

3. Section 3(f) compels a visible popup at the start of any chatbot interaction, and not less frequently than every 60 minutes thereafter, stating that the user is interacting with an AI.

4. Section 4(a) directs the FTC to issue compliance guidance within 180 days of enactment, and Section 6 creates a safe harbor when an operator relied in good faith on user-supplied age data, followed the FTC guidance, and conformed to industry standards.

5. Section 5 treats violations as unfair or deceptive acts under the FTC Act, giving the FTC full enforcement powers while preserving the authority of state attorneys general to sue in parens patriae (with procedural notice and intervention rules).

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 2

Key definitions that narrow scope

Section 2 defines critical terms: "companion AI chatbot" (systems built primarily for emotional/therapeutic or interpersonal interaction), "covered entity" (anyone who makes those chatbots available in the U.S.), "minor" (under 18), plus technical terms like "popup" and substantive terms like "sexually explicit communication" and "suicidal ideation." Those definitions determine which services the Act applies to and also frame later obligations—if a product’s primary purpose isn’t companionship or emotional interaction it may fall outside the statute.

Section 3(a)

Account requirement for all users

Section 3(a) requires operators to gate companion chatbots behind user accounts. Requiring accounts changes how services handle authentication, session management, and identity-proofing: anonymous or purely ephemeral sessions are disallowed for covered services, pushing operators toward persistent identifiers and associated compliance workflows.
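
A minimal sketch of that gating, assuming a hypothetical session shape:

```typescript
interface Session {
  accountId: string | null; // null for anonymous or ephemeral sessions
}

// Covered chatbot endpoints reject any session that is not tied to a
// persistent user account (Section 3(a)).
function requireAccount(session: Session): string {
  if (!session.accountId) {
    throw new Error("Companion chatbot access requires a user account");
  }
  return session.accountId;
}
```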

Section 3(b)

Age verification for new and existing accounts

Section 3(b) separates treatment of existing accounts and new accounts. Existing accounts must be frozen on the Act’s effective date until the user supplies age information verified by a commercially available method reasonably designed for accuracy. New accounts must request age data at creation and verify it similarly. The provision leaves the choice of verification vendors and specific methods to operators but requires a commercially available process that can be defended as reasonably accurate.
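
Because the bill leaves vendor and method choice open, one plausible design is a pluggable verifier interface so operators can swap commercially available providers. A sketch, where the interface and names are assumptions rather than anything the bill prescribes:

```typescript
// Pluggable age-verification interface; "evidence" might be an ID document,
// a payment-card check, or an age estimate, depending on the vendor.
interface AgeVerifier {
  vendorName: string;
  verify(evidence: unknown): Promise<{ classification: "minor" | "adult"; verifiedAt: Date }>;
}

async function classifyUser(verifier: AgeVerifier, evidence: unknown) {
  const result = await verifier.verify(evidence);
  // Recording the vendor and timestamp documents which commercially available
  // method was used, which matters later for the Section 6 safe harbor.
  return { vendor: verifier.vendorName, ...result };
}
```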

Section 3(c)

Protections and parental controls for minors

If verification designates a user as a minor, operators must (1) affiliate the minor’s account with a verified parental account, (2) obtain verifiable parental consent before permitting access, (3) notify the parental account immediately upon any expression of suicidal ideation by the minor, and (4) block sexually explicit chatbot communications. These are operationally heavy requirements: the law mandates both verification of parents’ identities and near-real-time notification of sensitive safety events.
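
A sketch of the notification path those duties imply; `sendCrisisResources` and `notifyParent` stand in for integrations the bill does not specify:

```typescript
interface MinorAccount {
  accountId: string;
  parentalAccountId: string; // verified parental account (Section 3(c))
  consentObtained: boolean;  // verifiable parental consent on file
}

// Hypothetical handler for a conversation flagged for suicidal ideation.
async function handleSuicidalIdeationFlag(
  minor: MinorAccount,
  sendCrisisResources: (accountId: string) => Promise<void>,
  notifyParent: (parentalAccountId: string) => Promise<void>,
): Promise<void> {
  // Section 3(e): provide crisis resources to the user...
  await sendCrisisResources(minor.accountId);
  // ...and Section 3(c): immediately notify the verified parental account.
  await notifyParent(minor.parentalAccountId);
}
```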

Sections 3(d)–3(f)

Data minimization, suicide monitoring, and AI disclosure

Section 3(d) limits collection, processing, use, and storage of age-verification data to what is strictly necessary for verification, parental consent, or compliance records. Section 3(e) compels monitoring of interactions for suicidal ideation and delivery of National Suicide Prevention Lifeline information to both the user and the parental account. Section 3(f) requires a visible popup at the start of an interaction and every 60 minutes thereafter informing the user they are talking to an AI—an explicit transparency duty that affects UX and session management.
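
The 60-minute disclosure cadence itself is straightforward to enforce client-side; a sketch assuming a hypothetical `showAiDisclosurePopup` render function:

```typescript
const DISCLOSURE_INTERVAL_MS = 60 * 60 * 1000; // at least once every 60 minutes

// Shows the "you are talking to an AI" popup at session start and on an
// interval; returns a cleanup function to call when the session ends.
function scheduleAiDisclosure(showAiDisclosurePopup: () => void): () => void {
  showAiDisclosurePopup(); // Section 3(f): disclose at the start of interaction
  const timer = setInterval(showAiDisclosurePopup, DISCLOSURE_INTERVAL_MS);
  return () => clearInterval(timer);
}
```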

Section 4

FTC guidance and constrained reliance on guidance in enforcement

Section 4(a) requires the Federal Trade Commission to issue guidance within 180 days after enactment to help covered entities comply. Section 4(b) bars the Commission from treating a practice as a violation solely because it is inconsistent with that guidance; in enforcement actions the FTC must allege a specific statutory violation. The bill also allows entities to use the guidance as evidence of compliance, creating an informal safe-harbor dynamic even before Section 6’s explicit safe harbor.

Section 5

Enforcement framework: FTC plus state parens patriae actions

Section 5 designates violations as unfair or deceptive practices under the FTC Act and incorporates the FTC’s enforcement powers and remedies. It also preserves the right of state attorneys general to sue on behalf of residents, subject to procedural notice to the FTC and intervention/removal mechanics that coordinate federal and state enforcement. That dual track increases exposure for defendants but also sets boundary rules for concurrent actions.

Section 6

Safe harbor for good-faith compliance

Section 6 creates a safe harbor where operators are deemed not liable if they show good-faith reliance on user-supplied age information, compliance with the FTC guidance from Section 4, and reasonable conformity to industry standards or Commission-identified practices. The safe harbor turns recordkeeping and demonstrable process into a primary defensive strategy in enforcement.
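
Since the safe harbor turns on demonstrable process, the practical artifact is a retained evidence record. A sketch of the three elements with illustrative fields:

```typescript
// Illustrative record of the three safe-harbor elements in Section 6.
interface SafeHarborEvidence {
  accountId: string;
  userSuppliedAge: { value: number; collectedAt: Date }; // good-faith reliance
  ftcGuidanceVersion: string;                            // guidance followed
  industryStandardsApplied: string[];                    // standards conformed to
  reviewedAt: Date;                                      // last compliance review
}
```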

Section 7

Effective date and transition

The Act takes effect one year after enactment. That delay gives operators a finite compliance window but, importantly, requires freezing and reverification of existing accounts on the effective date—forcing a coordinated operational cutoff that will shape product roadmaps and vendor procurement timelines.

Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Minors interacting with companion AI chatbots — the law blocks sexually explicit chatbot content and requires suicide-risk referrals, reducing certain direct harms from chatbot interactions.
  • Parents and guardians — the statute mandates verified parental accounts, verifiable parental consent, and immediate notification for suicidal ideation, giving parents legal control and visibility into minors' chatbot use.
  • Mental-health crisis services (e.g., National Suicide Prevention Lifeline) — the law institutionalizes referral pathways that will likely increase the volume and timeliness of crisis contacts.
  • Age-verification and identity vendors — operators must adopt commercially available verification processes, creating near-term market demand for verification-as-a-service products and fraud-prevention tools.
  • Compliance and legal advisers — companies will need procedural documentation, risk assessments, and vendor contracts to rely on the bill’s safe harbor, creating demand for advisory services.

Who Bears the Cost

  • Operators of companion AI chatbots — compliance will require account gating, integration with verification vendors, monitoring tools, parental-account workflows, and data-minimization controls, all of which increase development and operating costs.
  • Small developers and startups — the account and verification requirements disadvantage low-resource entrants who cannot absorb verification costs or complex parental-consent flows, potentially increasing market concentration.
  • Identity-verification vendors and data processors — while they gain business, they also face reputational and legal risk because operators will rely on their methods to meet the "reasonably designed to ensure accuracy" standard.
  • Privacy and security teams — collecting and storing verifiable age or parental identity data raises new privacy and breach-risk obligations and will require tighter data governance and potentially higher cybersecurity investment.
  • State and federal regulators — the FTC and state attorneys general gain new enforcement responsibilities that require guidance drafting, oversight, and coordination, imposing resource burdens on agencies.

Key Issues

The Core Tension

The central dilemma: protecting children through rigorous age verification, parental control, and safety monitoring requires collecting and retaining sensitive identity data, introduces risks from imperfect automated monitoring and mandatory parental notification, and imposes compliance costs that may limit innovation and put covered services out of reach for smaller providers.

The Act trades a strong, prescriptive child-safety approach for several practical and policy ambiguities. It requires "commercially available" and "reasonably designed" age-verification without specifying acceptable technologies or accuracy thresholds; that vagueness pushes operators to rely on private vendors and the FTC’s forthcoming guidance for safe harbor, but it also risks uneven implementation and variable privacy trade-offs depending on vendor practices.

Monitoring for suicidal ideation places a heavy operational burden on automated classifiers that are imperfect: false positives can trigger unnecessary parental alerts, while false negatives could fail to protect vulnerable minors. The mandatory parental-notification rule also raises safety questions for minors who seek confidential help or who may be endangered by parental disclosure.

The Act’s definition of "companion AI chatbot" narrows the scope but leaves edge cases—hybrid chatbots, general-purpose assistants with "companion" modes, and international services—open to interpretation. Cross-border availability complicates enforcement: operators outside the U.S. could still make services available in the U.S., and the requirement to verify and retain age data interacts unpredictably with other privacy laws.

Finally, the safe harbor’s reliance on FTC guidance and industry standards means compliance is as much about process and documentation as substance; firms that cannot demonstrate consistent application of standards may face enforcement despite following good-faith industry practices.
