CHAT Act: Age verification for AI chatbots to protect minors

Requires user accounts, parental controls, and crisis resources for companion AI chatbots.

The Brief

The CHAT Act would require covered entities that offer companion AI chatbots to create user accounts and implement age verification for both existing users and new signups. If a user is identified as a minor, the operator must affiliate the account with a verified parental account, obtain verifiable parental consent, and implement safeguards that block the minor's access to sexually explicit content and monitor interactions for suicidal ideation.

The act also requires a user-facing popup at the start of each interaction and at least every 60 minutes thereafter, clarifying that the chatbot is not human. Enforcement is led by the FTC, with state enforcement possible, and a compliance pathway built around FTC guidance to be issued within 180 days of enactment.

The act takes effect one year after enactment.

At a Glance

What It Does

The bill requires each companion AI chatbot to operate under a structured user account system, mandates age verification for both existing and new users, and imposes protective measures for minors (parental affiliation, consent, and explicit content blocks). It also requires ongoing safety monitoring for suicidal ideation and a persistent non-human interaction notice.

Who It Affects

Covered entities that own or operate companion AI chatbots in the United States, individual users (including minors), and parents/guardians who manage parental accounts.

Why It Matters

It establishes a federal baseline for age verification and child-safety protections in AI chatbots, guiding industry practices and enabling targeted enforcement while emphasizing user privacy and data minimization.

What This Bill Actually Does

The CHAT Act targets the design and operation of companion AI chatbots (software-based chat systems built to simulate emotional interaction) by mandating user accounts and age verification. Under Section 3, a covered entity must require a user account for chatbot access, and it must verify the user's age both for existing accounts (as of a date set later by regulation) and for new accounts created after enactment.
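
As a rough illustration of the account-gating requirement, the sketch below shows one way an operator might refuse chatbot access until an account exists and its age is verified. The bill does not prescribe an implementation; names such as UserAccount, AgeClass, and can_access_chatbot are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class AgeClass(Enum):
    UNVERIFIED = "unverified"
    MINOR = "minor"
    ADULT = "adult"

@dataclass
class UserAccount:
    user_id: str
    age_class: AgeClass = AgeClass.UNVERIFIED

def can_access_chatbot(account: UserAccount | None) -> bool:
    # No account at all: Section 3(a) requires one before any chatbot use.
    if account is None:
        return False
    # Account exists but age is unverified: Section 3(b) blocks access.
    return account.age_class is not AgeClass.UNVERIFIED
```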

If verification places a user in the minor category, the operator must affiliate that user with a verified parental account and obtain verifiable parental consent before allowing continued access. The act also requires the operator to monitor interactions for suicidal ideation and to provide resources (including the National Suicide Prevention Lifeline) to the user and the linked parental account when such ideation is detected.
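
In code terms, the minor-account gate might reduce to two conditions, as in this minimal sketch (field and function names are illustrative; the bill specifies outcomes, not data models):

```python
from dataclasses import dataclass

@dataclass
class MinorAccount:
    user_id: str
    parent_account_id: str | None = None     # link to a verified parental account
    parental_consent_on_file: bool = False   # verifiable parental consent

def minor_may_continue(account: MinorAccount) -> bool:
    # Both requirements must hold before a minor's access continues.
    return account.parent_account_id is not None and account.parental_consent_on_file
```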

To ensure user awareness, a clear popup stating that the user is speaking with a chatbot (not a human) must appear at the start of every interaction and at least every 60 minutes during the session. Data handling is restricted to what is necessary to verify age, obtain parental consent, and maintain compliance.
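
A minimal sketch of the disclosure cadence, assuming a per-session timer (the bill fixes the 60-minute floor, not the mechanism; DisclosureScheduler is a hypothetical name):

```python
import time

NOTICE_INTERVAL_SECONDS = 60 * 60  # "at least every 60 minutes"

class DisclosureScheduler:
    """Tracks when the 'you are chatting with a bot, not a human' popup is due."""

    def __init__(self) -> None:
        self._last_shown: float | None = None

    def should_show_notice(self, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        # Show at the start of an interaction, then at least hourly.
        if self._last_shown is None or now - self._last_shown >= NOTICE_INTERVAL_SECONDS:
            self._last_shown = now
            return True
        return False
```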

The FTC must issue implementing guidance within 180 days, and enforcement proceeds under the FTC Act, with state attorneys general permitted to bring civil actions. The act also includes a safe harbor for entities that comply with the Commission's guidance, and the statute takes effect one year after enactment.

The Five Things You Need to Know

1. The bill requires every companion AI chatbot to operate under a user account model.

2. Existing user accounts must be frozen, age-verified, and classified as minor or adult as of a date set by regulation.

3. Minor users must be linked to a verified parental account, and the operator must obtain verifiable parental consent before use continues.

4. The chatbot must display a non-human disclosure popup at the start of each interaction and at least every 60 minutes thereafter; detected suicidal ideation triggers crisis resources.

5. FTC guidance is due within 180 days; enforcement proceeds under the FTC Act, states may also enforce, and a safe harbor protects entities that follow the guidance.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 3(a)

Creation of user accounts

A covered entity must require each individual seeking to use a companion AI chatbot to establish a user account. This establishes the basic governance framework for age verification, consent, and ongoing interaction controls, ensuring there is an auditable linkage between a user and their account activity.

Section 3(b)

Age verification of existing accounts

For accounts that exist as of the regulatory date described in Section 7, entities must freeze those accounts, require verifiable age information, and classify users as minors or adults before restoring functionality. This creates a transitional path to bring legacy users into the new safety regime.
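
One way to picture that transition is as a small account state machine; the states and functions below are hypothetical, sketched from the section's freeze-verify-classify sequence:

```python
from enum import Enum

class AccountState(Enum):
    ACTIVE_LEGACY = "active_legacy"   # pre-existing account, age unverified
    FROZEN = "frozen"                 # locked pending age verification
    VERIFIED_MINOR = "verified_minor"
    VERIFIED_ADULT = "verified_adult"

def freeze_legacy_accounts(states: dict[str, AccountState]) -> None:
    # On the regulatory date, every unverified legacy account is locked.
    for user_id, state in states.items():
        if state is AccountState.ACTIVE_LEGACY:
            states[user_id] = AccountState.FROZEN

def restore_after_verification(is_minor: bool) -> AccountState:
    # Functionality returns only once a minor/adult classification exists.
    return AccountState.VERIFIED_MINOR if is_minor else AccountState.VERIFIED_ADULT
```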

Section 3(c)

Minor account protections

If a user is determined to be a minor, the entity must affiliate that account with a verified parental account, obtain verifiable parental consent to access and use the chatbot, inform the parental account about any interaction involving suicidal ideation, and block access to sexually explicit communications. These steps are designed to preserve parental oversight and limit exposure to explicit content.
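
An output filter for minor accounts might be wired in roughly as follows; the classifier here is a deliberate placeholder, since the bill does not specify detection technology:

```python
def is_sexually_explicit(text: str) -> bool:
    # Placeholder: a real deployment would call a trained content classifier.
    blocked_markers = {"explicit-example-marker"}
    return any(marker in text.lower() for marker in blocked_markers)

def deliver_reply_to_minor(reply: str) -> str:
    # Section 3(c): sexually explicit communications are blocked for minors.
    if is_sexually_explicit(reply):
        return "This response is unavailable on a minor account."
    return reply
```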

Section 3(d)

Confidentiality of age verification data

Age verification data must be limited to what is strictly necessary to verify age, obtain parental consent, or maintain compliance records. This constrains data collection and use to protect privacy while enabling enforcement and traceability.
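
In practice, the rule suggests discarding verification artifacts and keeping only the compliance record, along these lines (field names are illustrative, not from the bill):

```python
# Everything gathered during an age check (illustrative field names).
raw_verification = {
    "date_of_birth": "2010-04-01",
    "id_document_scan": b"\x00\x01",  # sensitive artifact from the check
    "ip_address": "203.0.113.7",
    "verification_result": "minor",
    "consent_record_id": "c-123",
}

# Retain only what is strictly necessary to show that age was verified,
# consent was obtained, and compliance records exist -- nothing more.
RETAINABLE_FIELDS = {"verification_result", "consent_record_id"}

compliance_record = {
    key: value for key, value in raw_verification.items()
    if key in RETAINABLE_FIELDS
}
```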

Section 3(e)

Monitoring for suicidal ideation

Covered entities must monitor interactions for suicidal ideation and provide appropriate resources (including the National Suicide Prevention Lifeline) to the user and the affiliated parental account when ideation is detected, enabling timely support.
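
A sketch of the monitoring hook, with a deliberately naive stand-in for detection (real systems would use a safety classifier, which the bill leaves unspecified):

```python
from typing import Callable

CRISIS_RESOURCE = "National Suicide Prevention Lifeline: call or text 988"

def detect_suicidal_ideation(message: str) -> bool:
    # Naive keyword stand-in; not production detection logic.
    return "suicid" in message.lower()

def handle_user_message(message: str, notify_parent: Callable[[str], None]) -> list[str]:
    # On detection, surface resources to the user and forward the same
    # resources to the affiliated parental account.
    alerts: list[str] = []
    if detect_suicidal_ideation(message):
        alerts.append(CRISIS_RESOURCE)
        notify_parent(CRISIS_RESOURCE)
    return alerts
```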

Section 3(f)

Pop-up non-human notification

At the start of any interaction and at least every 60 minutes thereafter, a clear popup must inform the user that the chatbot is not a human. This reinforces user awareness of the artificial nature of the interaction and reduces misperception.

Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Minors who interact with companion AI chatbots enjoy stronger safeguards and restricted exposure to risky content.
  • Parents or guardians gain verifiable oversight through parental accounts and consent requirements.
  • Covered entities that operate chatbots receive a clearer compliance framework and predictable enforcement expectations.
  • Child-safety advocacy groups obtain a higher baseline of safety standards and accountability in AI-enabled interactions.

Who Bears the Cost

  • Covered entities incur development, integration, and ongoing monitoring costs to implement accounts, age verification, parental linking, and content controls.
  • Small chatbot developers or startups face additional compliance and technical integration burdens that may affect time-to-market.
  • Compliance teams within entities will require training and process changes to handle age verification, consent workflows, and data minimization protocols.
  • Users (and in particular some parents) may experience onboarding friction due to account creation and consent processes.

Key Issues

The Core Tension

The central dilemma is balancing robust protection for minors against the potential for increased friction, privacy concerns, and exclusion from beneficial AI tools. Age verification can improve safety but risks privacy harms and accessibility barriers, and the reliance on external verification methods introduces questions about accuracy and consent mechanisms.

The bill introduces tensions around privacy, usability, and the practicality of widespread age verification. Requiring commercially available age checks raises questions about data collection, retention, and potential data breach exposure.

While the act prioritizes minor safety by limiting certain content and mandating parental involvement, there is risk of excluding legitimate, beneficial uses for younger users or creating false negatives/positives in age classification. The safe harbor tied to Commission guidance provides a potential relief path for entities that align with best practices, but it also depends on the quality and durability of the guidance.

Operationally, the duty to monitor for suicidal ideation and to surface crisis resources imposes a continuous obligation on operators, along with privacy and accuracy considerations in how interaction data is handled and reported to parents.
