The SAFE BOTs Act (H.R. 6489) establishes a uniform federal standard for how consumer chatbots must interact with minors. It requires chatbot providers to clearly disclose that the system is an artificial intelligence rather than a human and to provide suicide/crisis hotline resources when minors raise suicidal ideation; it also bars chatbots from claiming to be licensed professionals unless that claim is true.
The bill requires providers to adopt policies addressing prolonged interactions (a mandatory advisory to take a break after 3 continuous hours) and to address sexual material harmful to minors, gambling, and illegal‑substance content.
Enforcement is assigned to the Federal Trade Commission (treated as unfair or deceptive acts or practices), while states retain parens patriae authority subject to notification rules and limitations when the federal government sues. The Act also commissions a 4‑year longitudinal NIH study on chatbots’ effects on youth mental health and preempts state laws that cover the same matters in subsections (a)–(c).
At a Glance
What It Does
The bill requires providers who knowingly serve users under 17 to disclose at the first interaction (and when asked) that the chatbot is an AI, and to supply crisis hotline resources if the user raises suicide or suicidal ideation. Providers must implement reasonable policies to limit continuous interactions (advising a break after 3 hours) and to address sexual material harmful to minors, gambling, and illicit‑substance content. Violations are treated as unfair or deceptive acts under the FTC Act.
Who It Affects
Applies to any person who provides a chatbot directly to consumers (websites, mobile apps, or other online means) where the provider has actual knowledge a user is a minor or would know but for willful disregard; incidental chat functions are excluded. It also directs HHS/NIH to run a 4‑year study, affecting researchers and public health agencies.
Why It Matters
The Act establishes specific operational obligations (timing and plain‑language disclosures, a discrete break rule, content categories) and makes noncompliance an FTC enforcement matter—shaping product design, moderation, and labeling for consumer chatbots while preempting state regulation on the same items.
What This Bill Actually Does
The SAFE BOTs Act focuses narrowly on chatbots marketed to and used by minors. It defines a chatbot as an AI system that engages in natural‑language conversation and is provided directly to consumers; a ‘covered user’ is someone under 17 where the provider has actual knowledge of the user’s age or would have known absent willful disregard.
The bill bars chatbots from telling a covered user they are a licensed professional unless that statement is true, and it requires clear, age‑appropriate disclosures that the system is an AI and not a human.
Disclosures have specific timing rules: the AI disclosure must appear at the start of the first interaction with a covered user and again if the minor asks whether the system is an AI. The crisis resources disclosure must appear whenever the user prompts the chatbot about suicide or suicidal ideation.
All disclosures must be clear, plain, and age appropriate—language the bill leaves to providers to operationalize but which the FTC will ultimately evaluate in enforcement actions.

On engagement limits and content controls, the bill requires providers to maintain reasonable policies that (1) advise a covered user to take a break after 3 continuous, uninterrupted hours of interaction and (2) address three content categories for covered users: sexual material harmful to minors (defined with a Miller‑style test plus an explicit child‑pornography carve‑out), gambling, and the distribution, sale, or use of illegal drugs, tobacco, or alcohol. The statute does not prescribe specific technical measures (like age‑verification methods or filtering algorithms); it requires 'reasonable' policies, leaving implementation choices to providers.

Enforcement is assigned to the FTC: violations are treated as unfair or deceptive acts or practices, giving the Commission its usual investigatory and remedial tools and penalties.
States can bring parens patriae actions on behalf of their residents but must notify the FTC and cannot pursue defendants already named in a federal action while that action is pending. The Act also includes a one‑year delayed effective date for these obligations and a rule stating that the Act does not require affirmative collection of age information beyond what providers already collect in the normal course of business.

Finally, HHS (via NIH) must run a 4‑year longitudinal study on chatbots’ risks and benefits for minors’ mental health, consulting mental‑health experts, technologists, ethicists, and educators, and report findings and recommendations to congressional committees.
The law preempts state laws that regulate the same matters covered in the key operational subsections, while including a severability clause and other standard provisions.
The Five Things You Need to Know
Effective date: the subsections imposing provider obligations (the prohibition on false professional claims, disclosure rules, and policy requirements) take effect 1 year after enactment.
Age trigger and coverage: a 'covered user' is any user under age 17 where the provider has actual knowledge or would have known absent willful disregard.
Disclosure timing: the AI disclosure must appear at the first interaction with a covered user and again whenever the minor asks whether the chatbot is an AI; suicide/crisis resources must be provided when the user prompts about suicide or suicidal ideation.
Engagement limit: providers must adopt policies that advise a covered user to take a break after 3 continuous, uninterrupted hours of interaction.
Enforcement and remedies: violations are treated as unfair or deceptive acts under the FTC Act; the FTC enforces with full FTC powers, and states may sue as parens patriae but must notify the FTC and are limited if a federal action is pending.
Section-by-Section Breakdown
Short title
Names the statute the 'Safeguarding Adolescents From Exploitative BOTs Act' or 'SAFE BOTs Act.' This is purely stylistic but signals the bill’s targeted purpose: interactions between chatbots and adolescents.
Prohibition on false professional claims
Bars any chatbot provided to a covered user from stating it is a licensed professional unless that statement is true. Practically, providers must audit prompts and system messages to remove or restrict any claim of being a therapist, doctor, lawyer, or other licensed role unless there is a verifiable human professional behind the response. The provision is categorical—liability attaches regardless of intent—so providers will likely adopt conservative stances on role‑play or persona features when minors are present.
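As a purely illustrative sketch of that conservative stance (the bill mandates no particular screening technique, and every name and pattern below is hypothetical), a provider might scan draft replies for licensed‑professional claims before they reach a covered user:

```python
import re

# Hypothetical patterns a provider might flag; the Act prescribes no
# screening method, only that false professional claims not be made.
LICENSED_CLAIM_PATTERNS = [
    re.compile(r"\bI am (a|your) (licensed|certified) "
               r"(therapist|doctor|lawyer|counselor)\b", re.IGNORECASE),
    re.compile(r"\bas a licensed (professional|clinician)\b", re.IGNORECASE),
]

def makes_professional_claim(reply: str) -> bool:
    """Return True if the draft reply asserts licensed-professional status."""
    return any(p.search(reply) for p in LICENSED_CLAIM_PATTERNS)

def guard_reply(reply: str) -> str:
    """Replace a false professional claim rather than suppress the whole answer."""
    if makes_professional_claim(reply):
        return ("I'm an AI assistant, not a licensed professional. "
                "For professional advice, please consult a qualified human.")
    return reply
```

A more cautious provider might instead disable professional personas entirely for covered users, trading capability for lower liability risk.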
Disclosure requirements and timing
Requires two disclosure types: (A) the chatbot is an artificial intelligence system and not a natural person, and (B) resources for contacting suicide/crisis hotlines. AI disclosure must be given at the first interaction with a covered user and again if the user asks whether the system is an AI. Crisis resources must be disclosed whenever the user prompts the chatbot about suicide or suicidal ideation. The statute mandates clear, age‑appropriate, plain language but leaves form and modality (text banners, vocal disclaimers, onboarding screens) to providers—subject to later FTC assessment of adequacy.
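To make the timing rules concrete, here is a minimal sketch of session logic that tracks the two disclosure triggers. All names and the keyword heuristics are illustrative assumptions; the statute specifies when disclosures must appear, not how triggers are detected:

```python
from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an artificial intelligence, not a human."
CRISIS_RESOURCES = ("If you are thinking about suicide, help is available: "
                    "call or text 988 (US Suicide & Crisis Lifeline).")

def asks_if_ai(msg: str) -> bool:
    # Naive stand-in for a real intent classifier.
    return any(q in msg.lower()
               for q in ("are you an ai", "are you a robot", "are you human"))

def mentions_suicide(msg: str) -> bool:
    # A real system would use a vetted self-harm classifier, not keywords.
    return any(k in msg.lower()
               for k in ("suicide", "kill myself", "end my life"))

@dataclass
class CoveredUserSession:
    ai_disclosure_shown: bool = False

    def disclosures_for(self, user_message: str) -> list[str]:
        notes = []
        # (A) AI disclosure: at the first interaction, and again on request.
        if not self.ai_disclosure_shown or asks_if_ai(user_message):
            notes.append(AI_DISCLOSURE)
            self.ai_disclosure_shown = True
        # (B) Crisis resources: whenever the prompt concerns suicide
        # or suicidal ideation.
        if mentions_suicide(user_message):
            notes.append(CRISIS_RESOURCES)
        return notes
```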
Policy obligations: 3‑hour break and content categories
Obligates providers to 'establish, implement, and maintain reasonable policies' that: (1) advise a covered user to take a break when a continuous, uninterrupted interaction reaches 3 hours; and (2) address sexual material harmful to minors, gambling, and the distribution/sale/use of illegal drugs, tobacco, or alcohol. The bill sets the duty of process (policies, practices, procedures) rather than prescribing technical controls, so implementations can range from automated session timers and warnings to moderation and content filters—each with different operational costs and effectiveness.
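One plausible implementation of the break advisory, offered only as a sketch, is a session clock that resets after an idle gap. The 15‑minute idle threshold below is an assumption, since the bill does not define what counts as 'continuous, uninterrupted':

```python
import time
from typing import Optional

BREAK_AFTER_SECONDS = 3 * 60 * 60  # 3 continuous hours, per the bill
IDLE_RESET_SECONDS = 15 * 60       # assumption: 15 min of silence ends "continuous"

class BreakAdvisor:
    """Tracks one covered user's session and flags when a break advisory is due."""

    def __init__(self) -> None:
        self.session_start: Optional[float] = None
        self.last_activity: Optional[float] = None
        self.advised = False

    def on_message(self, now: Optional[float] = None) -> bool:
        """Record activity; return True once per continuous 3-hour session."""
        now = time.time() if now is None else now
        if self.last_activity is None or now - self.last_activity > IDLE_RESET_SECONDS:
            # Gap in interaction: start a fresh continuous session.
            self.session_start, self.advised = now, False
        self.last_activity = now
        if not self.advised and now - self.session_start >= BREAK_AFTER_SECONDS:
            self.advised = True
            return True  # caller surfaces the take-a-break advisory
        return False
```

The idle‑reset choice illustrates the trade‑off discussed under Key Issues: a looser reset lets trivial pauses defeat the policy goal, while a stricter one penalizes legitimately long sessions such as tutoring.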
Effective date and enforcement framework
Subsections (a)–(c) take effect one year after enactment. Enforcement is by the FTC: violations are treated as unfair or deceptive acts under section 18(a)(1)(B) of the FTC Act, giving the Commission subpoena, civil‑penalty, and injunctive tools. The FTC’s full powers and remedies apply, and the Act preserves any other FTC authority. This structure centralizes enforcement in a single federal agency rather than creating a new regulator or standard‑setting process.
State actions, notification, and preemption
States retain parens patriae authority to sue on behalf of residents for violations, but must notify the FTC and provide a copy of the complaint; if the FTC or U.S. Attorney General has already sued the same defendant on the same allegations, states cannot bring parallel suits while the federal action is pending. Separately, the Act preempts any state or local law that covers the same matters as the key operational subsections (a)–(c), creating a federal uniform standard and limiting state experimentation on these specific topics.
NIH study, definitions, and limiting provisions
Directs HHS/NIH to conduct a 4‑year longitudinal study on chatbots’ risks and benefits for minors’ mental health, consulting mental‑health experts, technologists, ethicists, and educators, and to report findings to congressional committees. Defines core terms—including 'chatbot,' 'chatbot provider,' 'covered user,' 'minor' (under 17), and 'sexual material harmful to minors'—and includes a rule of construction that the Act does not require affirmative collection of age information beyond what providers already collect in the normal course of business.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Minors under 17 and their families — The disclosure and crisis‑resource rules aim to reduce misperception that a chatbot is a human or licensed professional and provide a direct pathway to crisis help when suicidal ideation is raised.
- Mental‑health researchers and HHS/NIH — The mandated 4‑year longitudinal study creates dedicated federal funding and institutional attention to the empirical effects of chatbot interactions on youth mental health.
- Crisis hotlines and mental‑health service providers — The law’s requirement to surface suicide/crisis resources when triggered could increase referrals and early interventions.
Who Bears the Cost
- Chatbot providers (platforms and independent developers offering chatbots directly to consumers) — Must design and implement disclosure flows, content controls, session‑management systems, moderation rules, and staff or technical solutions to meet the 'reasonable policies' standard, creating development and compliance costs.
- Federal agencies (FTC, HHS/NIH) — FTC will absorb enforcement workload; NIH/HHS will manage and fund the 4‑year longitudinal study, both requiring budget and staffing resources.
- State attorneys general and consumer protection offices — While states retain parens patriae authority, preemption and the notification/limitation scheme constrain enforcement strategies and may shift enforcement burdens to the federal level.
Key Issues
The Core Tension
The central dilemma is how to protect minors from deceptive, exploitative, or harmful AI interactions without imposing unworkable obligations that push providers to collect intrusive age data, bluntly disable useful features, or chill innovation. The difficulty is sharpened because enforcement hinges on subjective standards (what is 'reasonable' or 'age‑appropriate') and because federal agencies must fill gaps that states can no longer regulate in the same domain.
The bill leaves several implementation questions unresolved that create real operational trade‑offs. First, the 'covered user' test depends on a provider’s actual knowledge or willful disregard of a user’s age, but the Act’s separate rule of construction says providers are not required to collect age information beyond normal business practices.
That combination creates a perverse incentive: providers may choose not to collect age data to avoid triggering obligations, while regulators will have to prove knowledge or willful disregard in enforcement actions. Second, the 3‑hour continuous interaction clock is a blunt instrument—systems that naturally produce long continuous sessions (interactive tutoring, structured therapy programs, or accessibility services) may need exception logic or complex session tracking, and simple workarounds (session resets, shorter interstitial prompts) could defeat the policy objective.
Third, the statutory definition of 'sexual material harmful to minors' uses a Miller‑style standard and an explicit child‑pornography carve‑out; applying those criteria to automatically generated text, images, or multimodal outputs will be technically and legally challenging. The FTC enforcement model centralizes oversight but depends on the agency’s bandwidth and its case‑by‑case assessment of what constitutes 'clear, age‑appropriate' language or 'reasonable' policies—potentially leading to uneven enforcement and litigation as standards develop.
Finally, the NIH study will produce evidence only after four years, meaning providers and regulators must operate under precautionary rules while the empirical basis for long‑term mental‑health effects remains incomplete.