Codify — Article

GUARD Act (SB3062) requires age verification and disclosures for AI chatbots

Mandates account-based age checks, recurring non-human and non‑professional disclosures, and bans minors from AI ‘companions’ while creating civil and criminal penalties.

The Brief

The GUARD Act requires any person who makes an artificial intelligence chatbot available in the U.S. to mandate account creation and to verify users' ages through "reasonable" measures before granting or restoring access. The bill requires clear disclosures that chatbots are non-human and not licensed professionals, imposes strict limits on how age-verification data may be stored or shared, and bars minors from using AI systems defined as "AI companions."

The statute creates new criminal prohibitions aimed at developers and operators—punishing designs that solicit minors into sexual content or that encourage self‑harm or imminent violence—and gives the Attorney General and state attorneys general civil enforcement authority, including penalties for covered‑entity violations. The measure centers child safety but raises immediate compliance, privacy, and operational questions for platform operators and verification vendors.

At a Glance

What It Does

Requires account creation and verified age for users of AI chatbots; freezes existing accounts until age is verified. Mandates recurring, conspicuous disclosure that chatbots are non‑human and do not provide licensed professional services. Creates criminal offenses for chatbot designs that solicit minors into sexual content or promote self‑harm or imminent violence.

Who It Affects

Any person or company that owns, operates, or makes available an AI chatbot in the U.S., age‑verification vendors, and providers of AI companion features designed to simulate interpersonal or emotional relationships.

Why It Matters

It imposes operational and data‑security duties on platforms (including limits on sharing age verification data), adds new liability exposures (criminal and civil penalties), and sets a federal baseline that permits states to enforce similar or stronger laws.


What This Bill Actually Does

The bill defines two core concepts: an "artificial intelligence chatbot" as an interactive service that generates adaptive responses to open-ended input, and an "AI companion" as a subset of chatbots designed to simulate friendship, emotional support, or therapeutic dialogue. Those definitions determine which systems trigger the Act's most intrusive restrictions.

The law distinguishes narrow, single‑purpose bots (excluded) from general‑purpose or conversational systems that must comply.

Covered entities must require user accounts for chatbot access and must verify ages for both new and preexisting accounts. For accounts existing at the Act's effective date, the bill requires an immediate freeze until the user supplies verifiable age data via a "reasonable age verification process." A covered entity may rely on third-party services to perform verification but remains liable for compliance.
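As an illustration of that account-gating duty, here is a minimal sketch of the state machine a covered entity might run: pre-existing accounts start frozen and regain chatbot access only after age verification. The class names, the under-18 cutoff, and the verifier callback are assumptions for illustration; the bill itself speaks only of "minors" and a "reasonable age verification process."

```python
from dataclasses import dataclass
from enum import Enum, auto


class AccountState(Enum):
    FROZEN = auto()    # pre-existing or new account, age not yet verified
    VERIFIED = auto()  # age verified, user is an adult
    MINOR = auto()     # age verified, user is a minor


@dataclass
class Account:
    user_id: str
    state: AccountState = AccountState.FROZEN  # frozen on day one

    def verify_age(self, age: int) -> None:
        # Hypothetical verification callback: a real system would rely on a
        # third-party proofing service, with the covered entity still liable.
        self.state = AccountState.MINOR if age < 18 else AccountState.VERIFIED

    def may_use_chatbot(self) -> bool:
        # Frozen accounts get no chatbot access until verification completes.
        return self.state is not AccountState.FROZEN
```

The bill also directs periodic re-verification, which a production system would add by timestamping each verification and expiring the `VERIFIED`/`MINOR` states back to `FROZEN`.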

The statute disallows trivial checks (for example, mere self-attestation or reliance on shared IP addresses or hardware) and directs periodic re-verification. It also prescribes minimum data-security obligations: collect only what is necessary, encrypt transmissions, limit retention to what is reasonably necessary, and prohibit the sharing or sale of verification data.

On content and labeling, the bill requires chatbots to disclose at the start of each conversation, and at regular intervals thereafter (the text specifies a 30-minute interval), that the system is not human.

It also requires statements that the chatbot is not a licensed professional and cannot deliver medical, legal, financial, or psychological services. The law bars chatbots from falsely claiming to be human or licensed professionals.

The Act creates two enforcement tracks.

First, it adds new criminal prohibitions aimed at designers, developers, and operators who, with knowledge or reckless disregard, create or make available chatbots that solicit minors to engage in sexually explicit acts or that encourage suicide, self‑harm, or imminent physical or sexual violence; the criminal text attaches a monetary fine up to $100,000 per offense. Second, the Attorney General may bring civil actions to enjoin violations of the account, age‑verification, and disclosure rules and to obtain civil penalties (also up to $100,000 per violation), restitution, and other relief; state attorneys general may sue as parens patriae.

The Act takes effect 180 days after enactment.
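The disclosure cadence described above (a notice at the start of each conversation, then every 30 minutes) could be implemented roughly as follows. The message text and function name are illustrative, not statutory language:

```python
DISCLOSURE_INTERVAL_SECONDS = 30 * 60  # the bill's 30-minute interval

# Hypothetical wording; the statute requires the substance, not this text.
DISCLOSURE = (
    "Reminder: you are talking to an AI system, not a human, "
    "and not a licensed professional."
)


def messages_with_disclosures(turns, timestamps):
    """Interleave the required non-human disclosure into a conversation.

    `turns` and `timestamps` are parallel lists: the bot's reply texts and
    the second (monotonic clock) at which each is sent. The disclosure is
    emitted before the first turn and again whenever 30 minutes have
    elapsed since the last disclosure.
    """
    out = []
    last_disclosed = None
    for turn, sent_at in zip(turns, timestamps):
        if last_disclosed is None or sent_at - last_disclosed >= DISCLOSURE_INTERVAL_SECONDS:
            out.append(DISCLOSURE)
            last_disclosed = sent_at
        out.append(turn)
    return out
```

A real deployment would hang this off the serving layer rather than post-process transcripts, but the timing logic is the same.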

The Four Things You Need to Know

1

The bill requires covered entities to freeze all existing chatbot user accounts on the effective date and to restore functionality only after users submit verifiable age data through a "reasonable age verification process." It also mandates that chatbots disclose they are non-human at the start of each conversation and at 30-minute intervals, and that they must not represent themselves as licensed professionals.

2

New criminal provisions make it unlawful, with knowledge or reckless disregard, to design or make available chatbots that solicit minors into sexually explicit conduct or that promote suicide, self‑harm, or imminent violence; fines can reach $100,000 per offense.

3

Covered entities must limit collection of age‑verification data, encrypt its transmission, retain it only as necessary, and may not share, transfer, or sell that verification data to third parties.

4

The Attorney General has investigatory tools, rulemaking authority, and may seek civil penalties (up to $100,000 per violation) for failures to implement account creation, age verification, or the minors‑ban for AI companions; states may bring parens patriae suits.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 3

Definitions: AI chatbot and AI companion

Section 3 draws the boundaries of coverage. "Artificial intelligence chatbot" is defined by functionality (open-ended natural language or multimodal input and adaptive output) and excludes narrow, contextualized reply systems. "AI companion" is a functional subcategory aimed at simulating interpersonal or therapeutic interaction. That split matters: general-purpose chatbots will trigger age verification and disclosure requirements, while single-purpose assistants may not.

Section 4

New criminal offenses for solicitation and promotion of harm

Section 4 inserts a new chapter into title 18 creating two offenses: knowingly or with reckless disregard designing or making available a chatbot that solicits minors into sexually explicit conduct, and similarly making available a chatbot that encourages suicide, non-suicidal self-injury, or imminent violence. Both attach per-offense fines (the statutory language caps fines at $100,000). The mens rea—knowledge or reckless disregard—targets designers and operators, not users, but the text leaves open how courts will evaluate "reckless disregard" in complex model-behavior cases.

Section 5

Covered entity operational duties: accounts, verification, and data security

Section 5 requires account creation for any chatbot interaction and sets a multi-stage age verification program: freeze existing accounts on day one, verify new and existing accounts with "reasonable" measures, and perform periodic re-verification. The bill permits third-party verifiers but keeps the covered entity legally responsible. It also lays out minimum data-security guardrails—minimal collection, encryption during transmission, time-limited retention, and an explicit ban on sharing or selling age-verification data—that will affect identity providers and data processors.

Section 6

Ban on minor access to AI companions

If verification establishes a user is a minor, Section 6 requires covered entities to prohibit that user from accessing AI companions. The prohibition is absolute as written; the statute does not provide for parental consent exceptions or graduated features for minors, so operators will need technical gating to distinguish companion features from other chatbot functionality.
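The technical gating this section anticipates could be as simple as a feature map keyed to verification status. The function and feature names below are hypothetical; the no-exceptions rule mirrors the bill's absolute ban on minor access to companion features:

```python
def allowed_features(is_verified: bool, is_minor: bool) -> set:
    """Hypothetical feature gate under Section 6.

    Unverified users get nothing (accounts stay frozen); verified minors
    get general chatbot access but never companion features (the bill
    provides no parental-consent exception); verified adults get both.
    """
    if not is_verified:
        return set()
    if is_minor:
        return {"chatbot"}
    return {"chatbot", "companion"}
```

The hard engineering problem is not this gate but the classification behind it: deciding which product surfaces count as "AI companion" features under the Section 3 definition.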

Section 7

Enforcement, investigatory powers, and remedies

Section 7 empowers the Attorney General to sue for injunctive relief and civil penalties for violations of the operational duties and the minors ban, and grants subpoena and rulemaking authority. Civil penalties are capped at $100,000 per violation, and states may sue on behalf of their residents. The dual criminal and civil machinery means operators face both monetary and potential criminal exposure under different provisions of the law.

Section 8

Effective date

The Act becomes operative 180 days after enactment. That delay creates a finite compliance window for platforms and verification vendors to design account flows, integrate verification, implement data‑security controls, and modify model behavior and disclosure systems.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Minors and parents — by forcing age‑gating and barring minors from AI companions, the bill aims to reduce exposure to sexually explicit content, grooming, and AI‑driven encouragement of self‑harm.
  • Consumers concerned about privacy of identity data — the statute restricts sharing, sale, and retention of age‑verification data and requires encryption and minimal collection.
  • Licensed professionals and professional boards — the explicit ban on chatbots representing themselves as licensed therapists, physicians, lawyers, or financial advisors cuts off a route for AI misrepresentation and potential consumer harm.

Who Bears the Cost

  • Platform operators and chatbot providers — must implement account systems, freeze and re‑verify existing users, integrate or contract verification services, and harden data‑security practices; small operators will face proportionally higher compliance costs.
  • Verification vendors — will face demand spikes and legal scrutiny; they may need to upgrade proofing standards and assume cybersecurity obligations while operating under tight non‑sharing rules.
  • Startups and research teams — the definitions and liability risk (criminal and civil) may chill deployment of conversational features, particularly emotional or companionship use‑cases, reshaping product roadmaps.

Key Issues

The Core Tension

The central dilemma pits child protection against user friction and privacy. Effective age gating requires reliable identity proofing, which can be privacy-invasive and costly; weaker checks are easier and cheaper but fail the statute's mandate and leave children exposed. The bill protects minors, but at the cost of user privacy, higher compliance burdens, and potential limits on legitimate uses of conversational AI.

The bill sets a federal floor that prioritizes child safety but creates hard operational trade-offs. Requiring "reasonable" age verification without a clear standard (beyond examples like government ID) forces platforms to choose between privacy-intrusive checks and weaker methods that may not satisfy regulators or courts.

The statute’s ban on sharing or selling verification data reduces downstream privacy risk but also limits legitimate verification workflows (for example, federated age‑assertion services) and could increase redundant data collection.

Liability language is another area of friction. Covered entities remain liable even when using third‑party verifiers, while the criminal offenses target design and availability with a mens rea of knowledge or reckless disregard.

In practice, attributing a chatbot’s harmful output to a developer or operator—especially when models self‑generate unexpected content—will require new evidentiary frameworks. Finally, the blanket prohibition on minor access to AI companions leaves no safe, supervised pathway for minors to use beneficial features and may push use toward unregulated off‑platform alternatives or workarounds (shared accounts, device‑level bypasses).
