AB 1988 requires operators who make companion chatbots available in California to build a graduated crisis response into those products. When a chatbot detects a credible crisis expression from a user, it must acknowledge the distress, encourage immediate human support, present 988 contact options, and warn the user that the system may pause to allow time for de-escalation.
The bill defines key terms (including “companion chatbot” and “credible crisis expression”), prohibits chatbots from performing diagnoses or labeling the pause as punishment, mandates documentation of incidents, and requires annual reporting of that documentation to the Office of Suicide Prevention beginning in 2028. The law aims to reduce harm by steering users toward human help and imposing concrete safety and transparency obligations on operators.
At a Glance
What It Does
The bill requires operators to implement a graduated response that uses contextual analysis to detect crisis expressions, deliver supportive messaging and 988 contact options, and, if the user reaffirms or escalates, trigger a temporary interruption in chatbot output. It also forbids the chatbot from diagnosing users or framing the pause as punitive.
Who It Affects
Any person or company that makes a companion chatbot available in California — including platform operators, AI vendors that market anthropomorphic conversational agents, and consumer-facing mental‑health tech firms — must comply. The statute excludes narrow categories such as transactional customer‑service bots, limited video‑game NPCs, and simple voice assistants that do not sustain relationships.
Why It Matters
AB 1988 puts state-level operational guardrails at the intersection of AI and public‑health safety: it creates mandated in‑product interventions, standardized messaging (including prominence for 988), and a state reporting stream that will surface how chatbots are detecting and handling crises.
What This Bill Actually Does
The bill builds an operational safety layer for companion chatbots rather than creating clinical responsibilities. It begins by defining an AI-based “companion chatbot” as a natural‑language system that can sustain humanlike, relationship-oriented dialogue across interactions; the definition explicitly carves out narrow, transactional bots, videogame characters constrained to in‑game topics, and simple standalone voice assistants that do not foster emotional attachment.
Detection must rely on contextual analysis rather than simple keyword matches. Once the system determines a user has made a credible crisis expression — language that reasonably indicates intent to harm self or others given the conversational context — the chatbot must respond with nonjudgmental acknowledgement, encourage the user to seek immediate human support, present the 988 Suicide and Crisis Lifeline contact options (call, text, chat), and advise the user that a temporary pause may occur to allow space for de‑escalation and human connection.
The statute prohibits chatbots from diagnosing or labeling user risk and bars any representation that the pause is a punishment or enforcement action.
If the user reaffirms or escalates the crisis expression after receiving the initial support messages, the system must initiate a crisis interruption pause. During the pause the chatbot must stop generating conversational responses and display messaging that explains the pause's purpose, notes that brief conversations with trained crisis counselors often help, and encourages reaching out, while prominently offering 988 contact options (with immediate-access links where feasible). Operators must also implement a graduated response architecture so the system escalates only after initial supportive warnings.
The bill places transparency obligations on operators: they must document the presence of a graduated response, record every credible crisis expression detected, and log the duration and conditions surrounding any crisis interruption pause.
Those records become subject to annual reporting to the state Office of Suicide Prevention covering the prior calendar year, creating both an audit trail and an information stream for the public‑health agency. The statute repeatedly frames its requirements as applying “notwithstanding any law,” signaling that these duties hold even where other legal constraints might otherwise limit them.
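The statute specifies behavior, not implementation, but the graduated flow reads naturally as a small state machine: normal conversation, a support stage once a credible crisis expression is detected, and a pause if the user reaffirms or escalates. The Python sketch below is purely illustrative; the class, the message wording, and the `is_credible_crisis` signal are assumptions about how an operator might wire this up, not anything the bill prescribes.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class CrisisState(Enum):
    NORMAL = auto()           # ordinary conversation
    SUPPORT_OFFERED = auto()  # supportive message and 988 options already shown
    PAUSED = auto()           # crisis interruption pause in effect

# Illustrative wording only; the bill requires acknowledgement, encouragement
# toward human support, 988 options, and advance notice of a possible pause,
# but does not dictate exact copy.
SUPPORT_MESSAGE = (
    "I'm sorry you're going through this. You deserve support from a person "
    "who can help right now. You can call or text 988, or chat at "
    "988lifeline.org. If this continues, I may pause our conversation briefly "
    "so you can connect with someone."
)

PAUSE_MESSAGE = (
    "I've paused our conversation for a short time. This is not a punishment; "
    "it's meant to make space for you to reach a trained crisis counselor. "
    "Brief conversations with counselors often help. Call or text 988, or "
    "chat at 988lifeline.org."
)

@dataclass
class GraduatedResponder:
    """Hypothetical wrapper that layers the graduated response over a chat model."""
    state: CrisisState = CrisisState.NORMAL

    def handle_turn(self, is_credible_crisis: bool,
                    generate_reply: Callable[[], str]) -> str:
        # `is_credible_crisis` would come from contextual analysis of the
        # conversation, not keyword matching; `generate_reply` stands in for
        # the chatbot's normal output.
        if self.state is CrisisState.PAUSED:
            return PAUSE_MESSAGE  # no conversational output during the pause
        if is_credible_crisis:
            if self.state is CrisisState.NORMAL:
                self.state = CrisisState.SUPPORT_OFFERED
                return SUPPORT_MESSAGE
            # Reaffirmed or escalated after support was offered: trigger the pause.
            self.state = CrisisState.PAUSED
            return PAUSE_MESSAGE
        self.state = CrisisState.NORMAL
        return generate_reply()
```

Keeping the escalation decision in a thin wrapper like this, outside the generative model itself, is one way to make the required "support first, pause only on reaffirmation" ordering auditable.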
The Five Things You Need to Know
The bill defines “credible crisis expression” as statements that reasonably indicate intent to harm oneself or others based on contextual analysis rather than keyword matching.
Operators must present 988 Suicide and Crisis Lifeline contact options (call, text, chat) prominently and include immediate-access links when technically feasible.
The law forbids companion chatbots from diagnosing users or describing a pause as a punishment, enforcement action, or violation.
Operators must document three specific items: (1) whether a graduated response system exists, (2) all credible crisis expressions detected, and (3) the duration and conditions of any crisis interruption pause.
The statute includes multiple “notwithstanding any law” clauses to prioritize these operational safety duties over conflicting state law provisions.
Section-by-Section Breakdown
Rationale for time‑based interruption as a safety tool
The preamble summarizes evidence about rumination, crisis counseling, and the limits of automated systems, framing a temporary pause as a proportionate intervention to reduce impulsivity and encourage human connection. For implementers this section signals legislative intent: the pause is a clinical‑adjacent safety measure, not a treatment modality, and lawmakers expect it to be used as a de‑escalation tool rather than a diagnostic step.
Core definitions and exclusions
This section defines key terms—artificial intelligence system, companion chatbot, credible crisis expression, crisis interruption pause, graduated response, and operator—and draws specific exclusions for customer‑service bots, limited videogame bots, and stand‑alone voice assistants that don’t sustain relationships. Those carve-outs matter for scope: they limit compliance obligations to conversational systems that are likely to form ongoing emotional bonds with users, reducing compliance burden on purely transactional or embedded systems.
Required on‑interaction responses and pause mechanics
When a chatbot detects a credible crisis expression it must acknowledge distress, encourage human support, provide 988 contact options, and warn that a pause may occur; if the user reaffirms or escalates the expression the system must initiate a crisis interruption pause. During the pause the agent must stop generating conversational output and display a set of prescribed supportive messages and prominently show 988 resources. The section also bars diagnostic assertions and forbids framing the pause as punitive, constraining both UI wording and back‑end decision‑making.
Documentation and annual reporting
Operators must keep records demonstrating the existence of a graduated response system, catalog every credible crisis expression the chatbot detects, and log the duration and conditions around any interruptions. Those records are then the basis for an annual report to the Office of Suicide Prevention covering the prior calendar year. The reporting obligation creates a data pipeline to the state but does not itself spell out enforcement mechanics or confidentiality protections for reported data.
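One plausible way to satisfy the three documentation items is a structured incident log kept alongside the operator's graduated-response configuration. The record type below is a sketch; the field names and granularity are assumptions, since the statute says what must be recorded but not in what format.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class CrisisIncidentRecord:
    """Illustrative log entry for one detected credible crisis expression."""
    detected_at: datetime
    conversation_id: str             # operator-internal identifier
    support_message_shown: bool      # acknowledgement + 988 options delivered
    pause_started_at: Optional[datetime] = None
    pause_ended_at: Optional[datetime] = None
    pause_conditions: str = ""       # e.g. "user reaffirmed intent after support message"

    def pause_duration_seconds(self) -> Optional[float]:
        # Duration and surrounding conditions are the specific pause details
        # the bill requires operators to log.
        if self.pause_started_at and self.pause_ended_at:
            return (self.pause_ended_at - self.pause_started_at).total_seconds()
        return None
```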
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Users experiencing acute suicidal ideation or violent intent — the pause and the mandated 988 prominence steer them toward trained human support and interrupt rumination that can precipitate impulsive harm.
- Crisis services and 988 counselors — clearer in‑product pathways and standardized messaging should increase appropriate referrals and streamline public‑health triage.
- Regulatory and public‑health agencies (Office of Suicide Prevention) — the annual reporting requirement creates a dataset to assess how companion chatbots handle crises and to inform policy or resource allocation.
Who Bears the Cost
- Companion chatbot operators and platform providers — they must build contextual crisis detection, a graduated response system, UI/UX changes to present mandated messages and 988 links, logging infrastructure, and annual reporting processes.
- Startups and smaller developers producing relationship‑oriented chatbots — technical and compliance costs (contextual NLP models, secure logging, reporting pipelines) could be proportionally higher for firms with limited engineering or legal teams.
- Privacy officers and counsel — documenting and reporting identifiable or quasi‑identifiable crisis interactions raises data‑protection questions and will require policy decisions on retention, redaction, and lawful bases for transmission to the state.
Key Issues
The Core Tension
AB 1988 pits two legitimate goals against each other: the need to intervene quickly and reliably to reduce imminent harm versus the risk that automated interruptions and mandatory logging will misclassify normal speech, infringe on user autonomy, expose sensitive data, and impose heavy technical costs on developers. The law favors proactive interruption and reporting as a public‑safety priority, but that choice forces difficult tradeoffs around detection accuracy, privacy, and the practical burdens of compliance.
Operationalizing contextual crisis detection is technically hard and error‑prone. The bill requires contextual analysis rather than keyword matching, but it does not prescribe performance thresholds, specify acceptable false‑positive or false‑negative rates, or require third‑party validation.
That leaves implementers to choose detection models and thresholds that balance safety against overreach, with little statutory guidance on what counts as acceptable. High false‑positive rates risk needlessly interrupting benign conversations and eroding user trust; high false‑negative rates leave at‑risk users unsupported.
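To make that tradeoff concrete, the sketch below shows one way an operator might choose a detection threshold on a human-labeled validation set: cap the miss (false-negative) rate, then take the highest threshold that stays under the cap to limit unnecessary interruptions. The 5 percent cap, the candidate thresholds, and the function itself are assumptions for illustration; the statute supplies no such numbers.

```python
def choose_threshold(scores: list[float], labels: list[bool],
                     max_miss_rate: float = 0.05) -> float:
    """Pick the highest threshold whose false-negative rate stays under the cap.

    `scores` are classifier confidences that a message is a credible crisis
    expression; `labels` are human judgments on a validation set. A message
    would be flagged when its score meets or exceeds the chosen threshold.
    """
    positives = [s for s, y in zip(scores, labels) if y]
    negatives = [s for s, y in zip(scores, labels) if not y]
    best = 0.0
    for t in sorted(set(scores)):
        miss_rate = sum(s < t for s in positives) / max(len(positives), 1)
        if miss_rate <= max_miss_rate:
            best = t  # a higher threshold interrupts fewer benign conversations
        else:
            break     # miss rate only grows as the threshold rises
    false_alarm_rate = sum(s >= best for s in negatives) / max(len(negatives), 1)
    print(f"threshold={best:.2f}, miss rate cap={max_miss_rate}, "
          f"false alarm rate={false_alarm_rate:.2%}")
    return best
```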
The statute mandates documentation and state reporting but is silent on data format, privacy safeguards, and enforcement. Operators will need to decide what level of de‑identification or aggregation satisfies both reporting and privacy obligations; the repeated “notwithstanding any law” phrasing might be intended to preempt conflicting state constraints, but it does not resolve tensions with federal privacy rules or platform terms of service.
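One way to reconcile the reporting and privacy obligations is to keep full incident logs internally but transmit only de-identified aggregates to the state. The payload below is an assumption about what such a report could look like (neither the statute nor the Office of Suicide Prevention has specified a format), and it deliberately contains no transcripts or user identifiers.

```python
import json

def deidentified_annual_report(year: int,
                               graduated_response_in_place: bool,
                               expressions_detected: int,
                               pause_durations_seconds: list[float]) -> str:
    """Aggregate counts and durations only; no transcripts or user identifiers."""
    durations = sorted(pause_durations_seconds)
    median = durations[len(durations) // 2] if durations else None
    return json.dumps({
        "reporting_year": year,
        "graduated_response_in_place": graduated_response_in_place,
        "credible_crisis_expressions_detected": expressions_detected,
        "crisis_interruption_pauses": len(durations),
        "median_pause_seconds": median,
    }, indent=2)

# Example: a small operator summarizing the prior calendar year.
print(deidentified_annual_report(2027, True, 14, [300.0, 420.0, 600.0]))
```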
Finally, the law sets behavioral and UX constraints (forbidden language, required prominence for 988), but it lacks an enforcement mechanism or penalties in the text, creating uncertainty about compliance incentives and oversight practices.