SB 243 creates a targeted regulatory framework for “companion chatbots” — AI systems that give adaptive, human‑like responses and can sustain relationships across interactions. The bill requires operators to disclose when a chatbot could reasonably be mistaken for a human, to maintain and publish protocols to prevent the chatbot from producing suicidal ideation or self‑harm content, and to adopt specific protections for users the operator knows are minors.
The law also mandates annual, non‑identifying reporting to California’s Office of Suicide Prevention starting July 1, 2027, and gives injured persons a private right of action with injunctive relief, statutory or actual damages (greater of actual or $1,000 per violation), and attorneys’ fees. For compliance officers and legal teams, SB 243 imposes concrete design, monitoring, and documentation obligations that will affect product features, content moderation, age‑verification practices, and litigation risk.
At a Glance
What It Does
SB 243 requires operators to clearly notify users when a companion chatbot could be mistaken for a human, to keep and publish protocols that prevent the chatbot from producing suicidal ideation/self‑harm content, and to adopt special safeguards for known minors including periodic reminders and limits on sexual content. Operators must report defined data points annually to the Office of Suicide Prevention beginning July 1, 2027.
Who It Affects
Any person or company that makes a companion chatbot platform available to California users; exemptions cover narrowly scoped customer‑service bots, certain video game NPCs, and simple voice assistants that don’t sustain relationships. Within affected companies, the obligations fall mainly on product teams, compliance and safety engineers, and legal departments, while public‑health agencies receive and publish the required reports.
Why It Matters
This is one of the first state statutes to combine consumer‑protection labeling, mandated safety protocols for suicidal content, public reporting, and a private right of action specifically for companion chatbots. It sets operational requirements (protocol publication, evidence‑based measurement, and reporting) that will shape design choices and compliance programs.
What This Bill Actually Does
SB 243 defines a companion chatbot narrowly: an AI with a natural‑language interface that responds in adaptive, human‑like ways and can sustain relationships across multiple interactions. The law carves out three categories of excluded bots — simple customer‑service/operational bots, video‑game features limited to in‑game dialogue that cannot discuss mental health or self‑harm, and voice‑activated virtual assistants that do not sustain relationships or elicit emotional responses.
Those definitions determine whether the rest of the law applies.
If a reasonable person interacting with a chatbot would be misled into thinking they were speaking to a human, the operator must issue a clear, conspicuous notice that the chatbot is artificially generated. Separately, an operator may not permit a companion chatbot to engage users unless the operator has a protocol to prevent producing suicidal ideation, suicide, or self‑harm content.
That protocol must include, at minimum, a mechanism to provide referrals to crisis service providers (such as hotlines or text lines) when users express suicidal ideation; the operator must publish details of the protocol on its website.
The statute imposes additional duties when the operator knows a user is a minor. For known minors the operator must disclose that the exchange is with AI, present a default reminder at least every three hours during ongoing sessions that the user should take a break and that the chatbot is not human, and take steps to prevent the chatbot from producing sexually explicit visual material or instructing the minor to engage in sexually explicit conduct. Operators must also display a general suitability notice that companion chatbots may not be appropriate for some minors.
On the reporting side, beginning July 1, 2027, operators must file an annual report with the Office of Suicide Prevention listing three items: the number of times the operator issued a crisis referral notification in the previous calendar year, the protocols used to detect, remove, and respond to suicidal ideation, and the protocols used to prohibit chatbot responses about suicidal ideation or actions. Reports must exclude any personal identifiers; the Office will publish the aggregated data on its website.
The bill also requires operators to use evidence‑based methods to measure suicidal ideation.
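For engineering teams, the referral duty and the annual count interact: every referral notification issued is also a data point that must later be reported without identifiers. Below is a minimal sketch of that coupling; the classifier hook, referral wording, and field names are assumptions for illustration, not anything the statute prescribes.

```python
from collections.abc import Callable
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical referral wording; SB 243 requires a referral to crisis service
# providers but does not mandate specific providers or text.
CRISIS_REFERRAL_NOTICE = (
    "If you are thinking about harming yourself, help is available. "
    "In the US you can call or text 988 (Suicide & Crisis Lifeline)."
)

@dataclass
class ReferralLog:
    """Non-identifying tally of crisis-referral notifications.

    Only timestamps are kept, because the annual report to the Office of
    Suicide Prevention must exclude personal identifiers.
    """
    events: list[datetime] = field(default_factory=list)

    def record(self) -> None:
        self.events.append(datetime.now(timezone.utc))

    def count_for_year(self, year: int) -> int:
        return sum(1 for ts in self.events if ts.year == year)

def handle_user_turn(message: str,
                     flags_ideation: Callable[[str], bool],
                     log: ReferralLog) -> str | None:
    """Return a crisis-referral notification when the (assumed) upstream
    classifier flags the message as expressing suicidal ideation."""
    if flags_ideation(message):   # detection method left to the operator
        log.record()              # counted, never tied to the user
        return CRISIS_REFERRAL_NOTICE
    return None
```

The design point is the separation of concerns: the detection hook triggers the referral, while the log retains only a timestamped count suitable for non‑identifying annual reporting.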
The Five Things You Need to Know
The bill requires a clear, conspicuous label when a reasonable person could be misled into thinking a companion chatbot is human.
Operators may not allow companion chatbots to engage users unless they maintain and publish protocols to prevent the chatbot from producing suicidal ideation or self‑harm content, and they must provide crisis‑referral notifications when users express suicidal ideation.
For users the operator knows are minors, the operator must disclose the AI nature, display a default reminder at least every three hours during continued interactions, and prevent the chatbot from producing sexually explicit visuals or telling the minor to engage in sexual conduct.
Starting July 1, 2027, operators must annually report to the Office of Suicide Prevention: the number of crisis referrals issued, detection/removal/response protocols for suicidal ideation, and protocols that prohibit chatbot responses about suicidal ideation; reports may not include user identifiers and the Office will post the data publicly.
SB 243 creates a private right of action allowing injured persons to seek injunctive relief, damages equal to the greater of actual damages or $1,000 per violation, and reasonable attorneys’ fees and costs.
Section-by-Section Breakdown
Definitions and exclusions
This section sets the operational perimeter: it defines “artificial intelligence,” “companion chatbot,” “companion chatbot platform,” “operator,” and related terms. Critically, it excludes three categories from coverage — narrowly scoped customer‑service bots, certain video‑game bots limited to in‑game replies, and simple voice assistants that do not sustain relationships or produce emotional responses. These exclusions limit the statute’s reach but leave a broad middle ground where social, relationship‑oriented bots fall squarely under the law.
Transparency, suicide‑prevention protocols, and special rules for minors
This is the operative compliance section. It imposes a labeling duty when a reasonable person could be misled into thinking the chatbot is human, and it conditions chatbot engagement on the operator maintaining a protocol to prevent production of suicidal ideation/self‑harm content — including issuing crisis referrals. The operator must publish the protocol details on its website. For known minors the section requires explicit AI disclosure, automatic reminders at least every three hours during continuing interactions, and technical measures to block sexually explicit visuals or instructions to engage in sexually explicit conduct. Together these mechanics dictate product behavior, logging, content filtering, and public documentation practices.
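As a rough illustration of how the three‑hour reminder could be read into session logic (the statute sets only the minimum frequency; the session model, wording, and function names below are hypothetical):

```python
from datetime import datetime, timedelta, timezone

# Statutory floor: a reminder at least every three hours during ongoing sessions.
REMINDER_INTERVAL = timedelta(hours=3)

MINOR_BREAK_REMINDER = (
    "Reminder: you are chatting with an AI, not a person. "
    "Consider taking a break."
)

def maybe_remind(is_known_minor: bool,
                 session_started_at: datetime,
                 last_reminder_at: datetime | None = None,
                 now: datetime | None = None) -> str | None:
    """Return the break reminder if the user is a known minor and at least
    three hours have elapsed in the ongoing session since the last reminder
    (or since the session started, if none has been shown yet)."""
    if not is_known_minor:
        return None
    now = now or datetime.now(timezone.utc)
    anchor = last_reminder_at or session_started_at
    if now - anchor >= REMINDER_INTERVAL:
        return MINOR_BREAK_REMINDER
    return None
```

An operator could of course remind more often than every three hours; the bill only fixes the outer bound.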
Annual reporting to the Office of Suicide Prevention
This section mandates annual reporting beginning July 1, 2027. Operators must report three discrete items: how many crisis‑referral notifications they issued in the prior calendar year, the protocols they use to detect/remove/respond to suicidal ideation, and the protocols they use to prohibit chatbot responses concerning suicidal ideation or actions. The reports must not include user identifiers; the Office will publish the aggregated data. The statute also requires use of evidence‑based methods to measure suicidal ideation, pushing operators to choose or develop validated measurement tools and document their approach.
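The three reporting items map onto a small, non‑identifying record. The bill does not prescribe a filing format, so the shape, field names, and values below are purely illustrative.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AnnualOspReport:
    """Hypothetical shape for the annual filing: one count and two protocol
    descriptions, never user-level data or identifiers."""
    reporting_year: int
    crisis_referral_notifications: int   # item 1: notifications issued last year
    detection_response_protocols: str    # item 2: detect / remove / respond
    prohibition_protocols: str           # item 3: barring ideation responses

# Placeholder values for illustration only.
report = AnnualOspReport(
    reporting_year=2027,
    crisis_referral_notifications=0,
    detection_response_protocols="Classifier-gated escalation to crisis referral ...",
    prohibition_protocols="System-prompt and output-filter rules barring ideation content ...",
)

print(json.dumps(asdict(report), indent=2))
```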
Suitability notice for minors
Operators must disclose on any access point (app, browser, etc.) that companion chatbots may not be suitable for some minors. This is an information duty — it doesn’t prescribe a specific age‑verification technique but requires the operator to communicate suitability considerations to end users and guardians via the interfaces through which the chatbot is accessed.
Private right of action and remedies
The law creates a private cause of action for any person who suffers an injury in fact from a violation of the chapter. Remedies include injunctive relief, damages equal to the greater of actual damages or $1,000 per violation, and reasonable attorneys’ fees and costs. That combination of statutory damages and fees raises the specter of litigation-driven compliance costs and will incentivize defensive design and documentation practices by operators.
Cumulative obligations
This short provision makes clear the chapter adds to, rather than replaces, other duties under law. Operators remain subject to other state and federal obligations (consumer protection, privacy law, mandatory reporting, etc.), so compliance programs must be layered to meet multiple regimes simultaneously.
Severability clause
Standard severability language preserves the remainder of the statute if a court invalidates any particular provision. Practically, this reduces the risk that a successful challenge to one clause nullifies the entire chapter.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Known minors and their families — the law forces default reminders, explicit AI disclosure, and limits on sexual content for minors, which reduce the risk of manipulative or sexually explicit interactions.
- Crisis service providers and suicide‑prevention hotlines — statutory referrals and published protocols will increase visibility of these services and may channel more users in crisis to established resources.
- Office of Suicide Prevention and public‑health researchers — the required, non‑identifying annual reports create a new aggregated data source to monitor industry behavior and referral volumes across operators.
- Compliance‑focused operators — companies that already document safety protocols and invest in moderation will gain a clearer regulatory baseline they can use to market safety and to streamline audits.
Who Bears the Cost
- Operator companies (platforms and developers) — they must design and maintain suicide‑prevention protocols, implement detection and filtering systems, publish protocols, and prepare annual reports, all of which are engineering and operational costs.
- Startups and small developers producing relationship‑oriented bots — the burden of building evidence‑based measurement, age‑verification or detection tools, and legal defenses may be proportionally heavier for smaller teams.
- Crisis hotlines and referral services — increased referrals may require scaling capacity and coordination with platforms to handle surge volumes, potentially without new funding.
- Operators’ legal teams and insurers — the private right of action with statutory damages and fee shifting will increase litigation risk and compliance‑driven legal costs, including preemptive settlements and defensive product changes.
- Product UX and trust teams — meeting the three‑hour reminder rule and conspicuous labeling without degrading user experience will require additional design work and may reduce engagement metrics.
Key Issues
The Core Tension
SB 243 pits two legitimate goals against each other: protecting vulnerable users (especially minors) by forcing transparency and restrictive safety protocols, versus preserving conversational breadth, privacy, and innovation in companion AI — stricter safety rules reduce risk but can also overblock beneficial content, require intrusive verification, and raise litigation and operational costs that affect smaller developers disproportionately.
The statute leaves several implementation choices and ambiguities that will drive compliance complexity. First, the “reasonable person” test for when a label is required is subjective; platforms must choose conservative approaches (broad labeling) or risk litigation.
Second, the statute requires operators to act when they “know” a user is a minor but does not prescribe acceptable age‑verification methods; effective age checks tend to require collecting data that raises privacy and regulatory trade‑offs. Third, the mandate to use “evidence‑based methods” to measure suicidal ideation pushes operators toward validated instruments, but the bill provides no list or standard — creating potential disputes about which measures suffice and how to validate system performance.
Operationalizing a protocol that both prevents production of suicidal ideation content and preserves legitimate supportive or therapeutic dialogue is technically and ethically challenging. Aggressive filtering may suppress helpful discussions about mental health; permissive approaches risk allowing harmful prompts.
Publication of protocol details enhances transparency but could reveal weaknesses that bad actors exploit. Finally, the private right of action is a double‑edged sword: it creates enforcement opportunities for harmed individuals but also incentivizes litigation as a compliance lever, likely increasing defensive design, documentation demands, and settlement pressure for operators.