SB 867 adds new definitions for “artificial intelligence” and “companion chatbot” to the Business and Professions Code and forbids, through two parallel provisions, the manufacture, sale, exchange, possession with intent to sell, or offer to a retailer of any toy that includes a companion chatbot. Both prohibitions are temporary and are written to expire on January 1, 2031.
The bill also spells out three narrow categories that are not companion chatbots (certain customer-service or business bots, tightly game‑limited bots, and stand‑alone voice assistants that do not sustain relationships). The statutory text contains a drafting inconsistency: one added provision treats “toy” as intended for children under 18, while the new chapter defines “toy” as intended for children 12 or younger — an ambiguity with practical consequences for manufacturers, retailers, and enforcement officials.
At a Glance
What It Does
SB 867 creates statutory definitions for AI and companion chatbots, excludes three categories of bots from that definition, and bars toys that include companion chatbots from manufacture, sale, possession with intent to sell, and offers to retailers. The prohibitions are temporary and automatically repeal on January 1, 2031.
Who It Affects
Toy manufacturers and designers that embed conversational AI, platform operators who supply chatbots for consumer products, retailers who buy or sell children’s products, in‑house counsel and compliance teams, and state regulators charged with consumer safety and enforcement.
Why It Matters
The bill is a targeted state-level response to child-facing conversational AI: it forces design and go‑to‑market choices, narrows the legal definition of companion chatbots via specific exclusions, and creates immediate compliance uncertainty because the text contains conflicting age definitions and limited enforcement language.
What This Bill Actually Does
SB 867 rewrites part of California’s Business and Professions Code to treat certain conversational AIs as a regulatory category and then bars those systems from being embedded in toys for children for a defined period. The bill first expands Section 22601 with targeted terms: it defines “artificial intelligence” at a functional level and then sets out what counts as a “companion chatbot” — an AI that uses natural language, produces adaptive, human‑like responses, and is capable of meeting a user’s social needs across multiple interactions.
The statute carves out three specific exceptions that will matter in practice: bots used solely for customer service or internal business purposes; video‑game NPCs that are limited to game topics and cannot sustain off‑topic dialogues; and stand‑alone voice assistants that do not sustain relationships across multiple sessions or generate outputs likely to elicit emotional responses. The bill also defines “operator” as a person who makes a companion chatbot platform available to users in California and references the state Office of Suicide Prevention (the text does not, however, add operational duties for that Office in the sections provided).

SB 867 places the substantive ban in two places.
Section 22604.5 prohibits toys that include companion chatbots and states that the provision lasts only until January 1, 2031. The bill separately creates Chapter 22.6.5 with Section 22608, which repeats the same prohibitions but contains a different, narrower age threshold for what constitutes a “toy” (product intended for children 12 and under).
Both the chapter and the standalone section include a sunset provision that repeals the added law on the same date. The statute does not attach new criminal penalties or a private‑right‑of‑action in the text of the chapter as presented; enforcement mechanisms and remedies are left unspecified in these additions.
The Five Things You Need to Know
Section 22601 newly defines “artificial intelligence” as a machine‑based system that infers from inputs how to generate outputs that can influence environments.
The companion‑chatbot definition excludes three categories: bots used only for customer service or internal business purposes; video game bots limited to game‑related replies that cannot discuss mental health, self‑harm, sexual content, or unrelated topics; and stand‑alone voice assistants that don’t sustain relationships or elicit emotional responses.
The bans use a consistent list of prohibited acts: manufacture, sell, exchange, possess with intent to sell or exchange, and expose or offer for sale or exchange to any retailer.
Every prohibition in the bill is expressly temporary: the added provisions are written to expire on January 1, 2031.
The bill contains a drafting conflict: one inserted provision defines a toy as intended for persons under 18, while the new chapter defines a toy as intended for children 12 or younger, creating an unresolved age‑threshold discrepancy.
Section-by-Section Breakdown
Definitions: AI, companion chatbot, operator, toy
This section supplies the definitional backbone the rest of the bill relies on. It defines “artificial intelligence” in operational terms and sets out a high‑level definition of “companion chatbot” — highlighting adaptive, anthropomorphic, and relationship‑sustaining capacities. It also lists three specific exclusions that narrow what counts as a companion chatbot for regulatory purposes, defines “operator,” references the Office of Suicide Prevention, and inserts a definition of “toy” as a product intended for use in play by children under 18. The combined effect is to create a targeted regulatory trigger while allowing a range of non‑social or narrowly scoped bots to remain outside the ban.
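The interplay of the base definition, the three exclusions, and the contested age threshold can be illustrated as a small decision sketch. Everything below — the `Product` type, its field names, and the decision order — is an illustrative assumption for reasoning about scope, not statutory language and not legal advice:

```python
from dataclasses import dataclass

# Hypothetical model of the bill's layered definitions. Field names and the
# decision logic are illustrative assumptions, not the statute's own terms.

@dataclass
class Product:
    intended_age_max: int             # oldest child the product is designed for
    has_chatbot: bool                 # embeds a natural-language AI
    sustains_relationship: bool       # maintains rapport across sessions
    customer_service_only: bool       # exclusion 1: customer-service/internal use
    game_limited: bool                # exclusion 2: NPC restricted to game topics
    standalone_voice_assistant: bool  # exclusion 3: stand-alone voice assistant

def is_companion_chatbot(p: Product) -> bool:
    """Apply the base definition, then the three statutory exclusions."""
    if not p.has_chatbot:
        return False
    if p.customer_service_only or p.game_limited:
        return False
    # A stand-alone voice assistant is excluded only if it does not
    # sustain relationships across sessions.
    if p.standalone_voice_assistant and not p.sustains_relationship:
        return False
    return p.sustains_relationship

def is_covered_toy(p: Product, age_threshold: int) -> bool:
    """The two provisions disagree on the threshold: one reads as under 18
    (threshold 17), the new chapter as 12 or younger (threshold 12)."""
    return p.intended_age_max <= age_threshold and is_companion_chatbot(p)

# A conversational plush for a 6-year-old is covered under either reading.
teddy = Product(intended_age_max=6, has_chatbot=True, sustains_relationship=True,
                customer_service_only=False, game_limited=False,
                standalone_voice_assistant=False)
print(is_covered_toy(teddy, age_threshold=12))  # True
```

Swapping `age_threshold` between 12 and 17 is exactly where the drafting conflict bites: a device aimed at a 15-year-old is covered under one provision and outside the other, which is why the discrepancy matters for tween-market products.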
General prohibition on toys that include companion chatbots
Section 22604.5 implements a broad prohibition: no person may manufacture, sell, exchange, possess with intent to sell or exchange, or expose or offer for sale or exchange to a retailer any toy that includes a companion chatbot. The provision is time‑limited — it explicitly repeals itself on January 1, 2031. The section does not, in the text provided, specify penalties, enforcement authority, or civil remedies tied to violations of this subsection.
Parallel prohibition with a narrower age scope
The new Chapter 22.6.5 repeats the same list of prohibited acts but changes the operative definition of “toy” to mean a product designed or intended for children 12 years of age or younger. Drafting this prohibition inside a separate chapter produces overlapping statutory language that could be read together or as conflicting; courts or regulators will have to resolve whether the narrower age threshold overrides the broader definition inserted elsewhere in the same code.
Sunset for the chapter
Section 22609 mirrors the standalone section’s time limit: the chapter is designed to expire automatically on January 1, 2031. Because both the standalone section and the chapter include sunset language, the ban is explicitly temporary; the text contains no mechanism for renewal, review criteria, or post‑sunset transition rules.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Child‑safety and mental‑health advocates — the ban removes persistent, relationship‑oriented conversational AIs from toys marketed to children, reducing exposure to anthropomorphic systems that can shape emotional development or elicit dependency.
- Parents and caregivers concerned about persistent data collection and social influence — the prohibition narrows points of contact between children and conversational AI embedded in playthings.
- Manufacturers of traditional (non‑AI) toys — firms that do not embed companion chatbots avoid new compliance costs and may gain a market advantage relative to AI‑enabled entrants during the ban period.
- Regulators and policymakers focused on quick, clear interventions — the sunset ban provides a time‑bounded tool to pause a set of products while evidence and governance frameworks evolve.
Who Bears the Cost
- Toy manufacturers and product designers that planned to embed conversational AI — they must halt or redesign products, absorb development and recall costs, or delay launches to avoid covered activity.
- Chatbot platform operators and third‑party AI suppliers — embedding platforms into consumer toys will be restricted in California, reducing a commercial avenue and complicating contractual arrangements with toy makers.
- Retailers and distributors that stock children’s products — they must revise sourcing, inventory, and procurement practices to ensure they do not sell covered toys, or offer them to other retailers, in California.
- Small and mid‑sized firms without large compliance budgets — ambiguous definitions (e.g., what “sustain a relationship” means) will force conservative design choices, legal review, and potential lost product lines.
Key Issues
The Core Tension
The central dilemma is straightforward: the bill prioritizes child protection by removing embedded, social conversational AI from toys for a fixed period, but it does so through broad, ambiguously defined prohibitions that hamper product design and invite legal disputes over scope, enforcement, and the appropriate age cutoff. It is a clash between precautionary regulation and industry’s need for clear, administrable rules.
The bill’s most consequential drafting problem is internal inconsistency. One added provision defines a “toy” for children under 18; the separate chapter defines a toy for children 12 and under.
That difference matters for numerous products (e.g., tween‑market devices) and creates immediate ambiguity for compliance teams and enforcement agencies. Regulators will face a choice: treat both provisions as operative, interpret the narrower provision as a special rule, or seek legislative or judicial clarification.
The statute also leaves enforcement and remedies largely unspecified in the added sections. The text prohibits a set of acts but does not attach specific penalties, administrative enforcement procedures, or a private right of action in the new chapter as shown.
That gap raises questions about which agency enforces the ban, what proof is required to show a toy “includes” a companion chatbot, and whether remedies will be civil fines, injunctive relief, or product seizures. Finally, the statutory definitions contain borderline concepts — “sustain a relationship,” “eliciting emotional responses,” and the exclusion for video‑game bots limited to certain topics — that are fact‑intensive and likely to produce litigation or rulemaking to set boundaries.
The sunset design is both a feature and a risk. A temporary ban gives the state room to study harms without permanently restricting technology, but it also creates a cliff after which stakeholders must plan for regulatory change: firms may delay investment or take defensive steps now that are costly to unwind later.
Meanwhile, companies may try to engineer around the ban (for example, by separating the chatbot into a detachable module, locating processing offshore, or relying on the statutory exclusions), producing compliance debates about functional equivalence versus formal categorization.