SB 300 amends California’s companion-chatbot rules in the Business and Professions Code to increase transparency and impose specific operational obligations on chatbot operators. It expands operator duties around crisis response and content controls aimed at protecting minors and users expressing suicidal ideation.
Separately, the bill adds a targeted exception to the Political Reform Act: decisions affecting members of certain nonprofit organizations (not the organizations themselves) are not treated as creating a ‘‘reasonably foreseeable’’ material financial interest for public officials when the only potential financial effect is a change in dues or membership counts. The result is stricter chatbot safeguards alongside a narrow ethics carve‑out for officials tied to specific types of nonprofits.
At a Glance
What It Does
The bill requires operators to disclose when a companion chatbot might be mistaken for a human and to keep a suicide‑prevention protocol that includes user referrals to crisis services and public posting of the protocol. When an operator has actual knowledge that a user is a minor, it must provide default break reminders at least every three hours and block sexually explicit material or the facilitation of its exchange. The amendment also inserts Section 87103.7 into the Government Code, exempting certain member‑level decisions involving 501(c)(5) and 501(c)(6) organizations from being treated as ‘‘reasonably foreseeable’’ financial interests for public officials when the only effect is on dues or membership.
Who It Affects
Companies that operate or host companion chatbot platforms and their compliance, safety, and engineering teams will see the bulk of new operational requirements. Public officials and ethics advisors dealing with nonprofit trade associations, labor organizations, and chambers of commerce must apply a new, narrow exception when evaluating conflicts of interest. Mental‑health/crisis hotlines and online safety service providers may see increased referrals and visibility.
Why It Matters
For platform operators the bill converts general safety principles into concrete duties (disclosure triggers, published protocols, default reminder cadence, and an explicit ban on producing or facilitating sexual material to minors). For government ethics it creates a precise, limited carve‑out that narrows when a public official’s ties to certain nonprofits count as a disqualifying financial interest.
What This Bill Actually Does
SB 300 revises the scope and mechanics of California’s companion‑chatbot rules and adds a narrowly drafted ethics exception for certain nonprofit‑related decisions. On the chatbot side, the law keeps the familiar baseline: if a reasonable person would likely believe they are talking to a human, the operator must make that artificial nature clear.
The bill then layers operational obligations on top of that baseline: operators must maintain a written protocol to prevent the chatbot from supplying suicidal‑ideation or self‑harm content without response, and that protocol must be publicly available.
The bill creates a distinct set of protections when the operator has actual knowledge that a user is a minor. In that circumstance the operator must push default reminders during prolonged interactions (a statutory minimum cadence is specified), and the companion chatbot must not produce sexually explicit material, facilitate its exchange, or directly instruct the minor to engage in sexually explicit conduct.
The text ties the suicide‑prevention requirement to user expressions of suicidal ideation and requires that the protocol include referrals to crisis resources such as hotlines and text lines.

SB 300 also amends the Political Reform Act by inserting Section 87103.7. That new provision says it is not ‘‘reasonably foreseeable’’ that a public official has a material financial interest when a decision concerns members of a 501(c)(5) or 501(c)(6) organization, the decision affects only those members (not the nonprofit itself), and the only financial result that might follow is a change in dues or membership.
The clause is intentionally narrow: it does not blanket‑exempt officials from all dealings with these nonprofits, only a specific, limited category of member‑level decisions.

Taken together, the bill formalizes several concrete compliance points for AI operators while simultaneously carving out a discrete ethics safe harbor for officials tied to certain trade and labor nonprofits. That combination creates two different compliance regimes inside a single measure: technical content‑safety duties for private operators and a fine‑grained rule about when a public official’s nonprofit ties generate a disqualifying financial interest.
The Five Things You Need to Know
The bill requires operators to post details of their suicide‑prevention protocol on the operator’s public website.
If an operator has actual knowledge a user is a minor, it must by default issue a clear reminder at least every three hours during ongoing chatbot interactions.
The companion‑chatbot rule bans producing sexually explicit material to minors and bans facilitating the exchange of sexually explicit material or directing the minor to engage in sexually explicit conduct.
Disclosure that a chatbot is artificially generated must be provided when a ‘‘reasonable person’’ interacting would likely be misled into thinking the bot is human.
Section 87103.7 excludes from the ‘‘reasonably foreseeable’’ financial‑interest test decisions tied to members (not the nonprofit itself) of 501(c)(5) or 501(c)(6) organizations, but only when the only possible financial effect is a change in dues or membership.
Section-by-Section Breakdown
Human‑likeness disclosure trigger
This subsection keeps the triggering standard anchored to what a reasonable person would believe: when the chat experience would likely mislead, the operator must give a clear, conspicuous notice that the chatbot is artificially generated. Practically, operators will need to document how they assess ‘‘reasonable person’’ risk (conversation logs, UI cues, or testing) and adopt a disclosure that survives real‑world UX scenarios where the bot appears human.
Suicide‑prevention protocol and public posting
The bill mandates that operators maintain a protocol to prevent production of suicidal‑ideation or self‑harm content, and requires that protocol to include referrals to crisis services when users express suicidal thoughts. Importantly, the operator must publish the protocol on its internet website, which creates both a compliance checklist for platforms and a public accountability mechanism; vendors and auditors will likely use that public document to assess conformance.
Minor protections and default reminders
When an operator has actual knowledge that a user is a minor, the statute requires several default protections: a disclosure that the interaction is with AI, periodic break reminders (statutorily set at least every three hours for ongoing interactions), and a strict bar on producing or facilitating sexually explicit material or instructing the minor to engage in such conduct. The operative phrasing—requiring action when the operator has actual knowledge—places the initial burden on the operator’s detection or verification processes, but the statute also creates concrete operational obligations once that knowledge exists.
Narrow nonprofit dues/membership conflict exception
This new Government Code section carves out a limited exception to the Political Reform Act: a public official’s decision is not considered to create a ‘‘reasonably foreseeable’’ material financial interest when the decision concerns members of certain nonprofit categories (501(c)(5) or 501(c)(6)), the decision affects only members (not the nonprofit entity), and the only financial effect that could follow is a change in dues or membership. The provision is tightly scoped and will require ethics officers to parse whether a decision truly affects only members and whether any secondary financial effects exist.
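The three prongs of the carve‑out can be summarized as a checklist. The sketch below is purely illustrative (function and variable names are invented, and real conflict analysis requires legal judgment about secondary effects that no boolean check can capture):

```python
# Hypothetical checklist for the Section 87103.7 exception: all three
# statutory prongs must hold for the carve-out to apply.
QUALIFYING_STATUSES = {"501(c)(5)", "501(c)(6)"}

def exception_applies(org_status: str,
                      affects_members_only: bool,
                      only_effect_is_dues_or_membership: bool) -> bool:
    """True only when every prong of the narrow exception is satisfied."""
    return (org_status in QUALIFYING_STATUSES       # prong 1: qualifying org type
            and affects_members_only                 # prong 2: members, not the entity
            and only_effect_is_dues_or_membership)   # prong 3: dues/membership only

print(exception_applies("501(c)(6)", True, True))   # True: all prongs met
print(exception_applies("501(c)(3)", True, True))   # False: org type not covered
print(exception_applies("501(c)(5)", False, True))  # False: decision reaches the entity
```

The conjunctive structure is the point: failing any single prong returns the official to the ordinary conflict analysis.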
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Minors and their guardians — the law creates explicit content and interaction limits when an operator knows the user is a minor, reducing exposure to sexual material and imposing periodic break reminders.
- Users expressing suicidal ideation — operators must maintain and make public protocols that include referrals to crisis hotlines or text lines, increasing the likelihood of timely signposting to help.
- Members of 501(c)(5) and 501(c)(6) organizations — public officials will have a clearer, narrower rule when making member‑level decisions, potentially reducing disqualification on routine dues or membership matters.
- Crisis‑response organizations — required referrals could increase visibility and traffic to existing hotlines and text services, creating downstream demand for those resources.
Who Bears the Cost
- Companion‑chatbot operators and platforms — they must design, document, and publish suicide‑prevention protocols, implement age‑detection or verification workflows to determine ‘‘actual knowledge,’’ manage periodic reminder logic, and deploy content filters that block production and facilitation of explicit material.
- Compliance, trust & safety, and engineering teams — expect added development, moderation, and legal review costs to operationalize the statute’s requirements and maintain audit trails to demonstrate compliance.
- State and local ethics offices — although the nonprofit exception narrows some conflict questions, officials and advisors must now interpret the new carve‑out and assess narrow factual scenarios, which can increase advisory and review workload.
- Users and UX designers — default break reminders and disclosures may degrade user experience or engagement, forcing product teams to balance compliance with retention metrics.
Key Issues
The Core Tension
The bill pits two legitimate aims against each other. On one side is protecting vulnerable users (minors and people expressing suicidal ideation) through stricter transparency and content controls; on the other are the operational, privacy, and free‑expression costs of detecting age, filtering content, and issuing repeated reminders. At the same time, the bill narrows conflict rules to reduce administrative burdens on officials, which improves government efficiency but risks weakening the principle that personal ties to nonprofits should constrain public decisions.
The statute sets concrete duties but leaves several enforcement and interpretive gaps. It does not specify penalties or an enforcement mechanism for failures to publish protocols, for failing to block prohibited content, or for inadequate disclosures; absent regulatory guidance, operators must guess what documentation suffices.
The ‘‘reasonable person’’ disclosure trigger and the statutory use of ‘‘actual knowledge’’ about minors create practical tension: operators will have to decide how aggressively to detect age (and how to document that detection) without running afoul of privacy or anti‑discrimination laws.
The prohibition on ‘‘facilitating the exchange’’ of sexually explicit material raises scope questions for platform architectures that allow user‑to‑user messaging, file uploads, or links to third‑party content. Technical implementations—automated filters, human moderation, quarantine workflows—carry false positive risks that could overblock lawful material or generate service disruptions.
On the ethics side, the nonprofit dues/membership exception is narrowly written but could be gamed: officials might structure decisions as ‘‘member‑level’’ to fall inside the carve‑out, and the text’s focus on direct financial effects leaves unresolved whether foreseeable secondary effects (e.g., reputational benefits leading to future contracts) reintroduce a material interest.