SB903 draws a bright line around when and how artificial intelligence may assist in delivering therapy or psychotherapy in California. The bill defines tiers of activity (administrative support, supplementary support, therapeutic/psychotherapeutic communication), forbids AI from performing core therapeutic tasks (independent decisions, direct therapeutic interaction, emotion detection, triage assessment), and requires licensed professionals to retain responsibility for clinical outputs.
It also establishes a strict consent standard for AI use and prohibits companies from sharing, selling, storing, or training AI models on psychotherapy data.
Why this matters: the bill fundamentally narrows the permissible role for generative and other AI tools inside clinical encounters while creating new compliance duties for clinicians, employers, and vendors. It pairs privacy-protective rules with enforcement tools—department investigations, civil penalties, and licensing-board remedies—while leaving several implementation details to regulation or clarification.
At a Glance
What It Does
SB903 defines administrative, supplementary, and therapeutic activities and then restricts AI from engaging in therapeutic communication, making independent clinical decisions, detecting emotions, or performing triage/clinical assessments. It requires explicit, revocable consent for AI use and forbids using psychotherapy data to train models.
Who It Affects
Licensed mental-health professionals (psychologists, LCSWs, MFTs, counselors, psychiatric NPs), behavioral-health employers and clinics, digital mental-health vendors, AI developers, and compliance officers responsible for HIPAA and for federal and state confidentiality rules.
Why It Matters
The bill establishes a California-specific regulatory baseline that could limit in-session AI assistance, block use of psychotherapy data for model training, and reallocate legal responsibility between individual clinicians and their employers—creating practical and commercial consequences for product design and clinical workflows.
What This Bill Actually Does
SB903 starts by separating tasks into three buckets: administrative support (scheduling, billing, nontherapeutic logistics), supplementary support (documentation, anonymized analytics, referrals, and workflow tools that increase clinical capacity), and therapeutic or psychotherapeutic communication (any interaction intended to diagnose, treat, or address mental-health concerns). The distinction matters because the bill allows AI to help with administrative and, to a limited extent, supplementary tasks only when a licensed professional remains fully responsible for clinical decisions and communications.
The bill then places clear functional limits on AI used in psychotherapy or triage. AI may not make independent therapeutic decisions, directly engage in therapeutic communication with clients, detect emotions or mental states, or perform assessments for triage or urgency.
AI-generated recommendations, assessments, diagnoses, or treatment plans are expressly disallowed unless a licensed professional reviews and approves them first. When an employer or contractor mandates or provides AI tools, the employer bears responsibility for deployment and compliance; when the licensed professional selects the tool, the clinician bears responsibility.

SB903 sets a high bar for consent: the statute requires a clear, explicit affirmative act, documented in the record, that is specific, informed, freely given, and revocable. The bill excludes broad terms-of-use clickthroughs, passive interactions with content, or deceptively obtained agreements from qualifying as consent. Separately, the text declares psychotherapy records confidential, invokes California Civil Code Section 56.104 confidentiality protections for AI use, and flatly prohibits any company or entity from sharing, selling, storing, or training AI models on data obtained from psychotherapy.

Enforcement combines administrative and professional mechanisms.
The Department of Consumer Affairs may investigate violations and levy civil penalties up to $10,000 per violation (with APA hearing rights), while appropriate licensing boards may seek injunctions and adopt implementing regulations. Notably, one provision that would restrict AI use where sessions are recorded or transcribed appears in the bill without the operative conditions listed, creating an explicit statutory gap that will require further rulemaking or legislative clarification.
The Five Things You Need to Know
The bill prohibits AI from making independent therapeutic decisions, directly engaging in therapeutic communication with clients, detecting emotions or mental states, or performing triage/urgency assessments.
Companies and entities may not share, sell, store, or train AI models on any data obtained from psychotherapy; psychotherapy data is afforded special confidentiality under Civil Code §56.104.
Consent for AI use must be a clear, affirmative, informed, and revocable act documented in the client record; passive acceptance (e.g., click-throughs) or deceptive procurement does not qualify.
If an employer requires or provides AI, the employer is responsible for ensuring deployment complies with the statute; if the clinician selects the AI, the licensed professional is responsible for compliance and clinical appropriateness.
Enforcement is dual: the Department of Consumer Affairs can impose civil penalties up to $10,000 per violation (with APA hearing rights), and health professional licensing boards may pursue injunctions and adopt implementing rules.
Section-by-Section Breakdown
Definitions and task tiers (administrative, supplementary, therapeutic)
This section builds the statute’s taxonomy—what counts as administrative support (scheduling, billing), supplementary support (records prep, anonymized analytics, workflow tools), and therapeutic or psychotherapeutic communication (any interaction intended to diagnose or treat). For practitioners and product teams the practical implication is that an AI feature’s permissibility depends on which bucket it falls into; developers should map features to these categories and document how a licensed professional will retain oversight.
Restriction on AI when sessions are recorded or transcribed (text incomplete)
The bill asserts that licensed professionals may not use AI to provide supplementary support when a therapeutic session is recorded or transcribed unless two conditions are satisfied—but the statute as drafted does not list those conditions. That omission creates immediate uncertainty: clinics and vendors cannot determine the permitted workflow for recorded-session processing until the legislature or regulator fills the gap.
Functional limits on AI and allocation of responsibility
This section enumerates prohibited AI functions in psychotherapy and triage—no independent therapeutic decisions, no direct therapeutic interactions, no emotion detection, and no autonomous triage assessments. It also requires that AI-generated clinical content be reviewed and approved by a licensed professional. The section distinguishes who is responsible: an employing entity that mandates or provides AI must ensure compliant deployment and direct its use; otherwise, the selecting clinician bears responsibility for compliance and clinical appropriateness.
Confidentiality and prohibition on using psychotherapy data to train models
The statute requires that any AI use touching psychotherapy records comply with Civil Code §56.104 confidentiality protections. It goes further by barring any company or entity from sharing, selling, storing, or training AI models on data obtained from psychotherapy. That is a strict commercial restriction that effectively forecloses the common industry practice of using clinical encounter data to fine-tune models unless data provenance and usage are clearly outside the psychotherapy context.
Enforcement, penalties, and professional-board authority
Enforcement authority is split across the Department of Consumer Affairs and professional licensing boards. The department may investigate and assess civil penalties up to $10,000 per violation (with procedural protections and APA hearing rights). Licensing boards retain traditional disciplinary and injunctive powers and may promulgate implementing regulations. Practically, regulated entities face administrative fines and potential professional discipline, and both paths can be used to compel compliance.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Patients and clients receiving therapy — stronger statutory privacy protections for psychotherapy records and an explicit bar on using psychotherapy data to train AI models reduce risk of unauthorized reuse of sensitive clinical information.
- Privacy and consumer advocates — the bill enshrines a narrow consent standard and limits commercial reuse of psychotherapy data, giving advocates concrete legal tools to challenge abusive data practices.
- Regulatory bodies and licensing boards — the statute centralizes investigatory and enforcement authority and permits boards to adopt rules, giving regulators clearer levers to oversee AI in clinical contexts.
- Vendors that design HIPAA-compliant, FDA-aligned clinical support tools — companies that build products that fit within the bill’s narrow supplementary-support parameters and meet federal guidance will gain a compliance advantage and potentially easier market access.
Who Bears the Cost
- Licensed mental-health professionals and small practices — clinicians bear responsibility for vetting and approving AI outputs, documenting consent, and ensuring clinical appropriateness, creating time and liability costs.
- Digital mental-health startups and AI vendors — the prohibition on training models on psychotherapy data and the constraints on in-session functionality narrow product features and eliminate a common training-data source, increasing development and compliance costs.
- Behavioral-health employers and health systems — when employers provide or mandate AI, they assume deployment liability and must develop compliance processes, vendor oversight, and training programs.
- Regulators and enforcement agencies — the Department of Consumer Affairs and licensing boards will need resources to investigate, adjudicate, and promulgate regulations; the statute shifts this work to regulators without specifying funding.
Key Issues
The Core Tension
The central tension of SB903 is between protecting vulnerable therapy clients (privacy, safety, professional accountability) and preserving the practical benefits of AI (efficiency, analytics, model improvement). The bill tilts decisively toward protection—limiting in-session AI and barring psychotherapy data use—but in doing so it creates uncertainty, compliance costs, and a real risk of chilling useful innovations that depend on clinical data or on AI-assisted workflows.
SB903 aims to protect patients and keep clinicians ultimately responsible for clinical decisions, but the text contains internal ambiguities and conflicting signals that will frustrate implementation. The draft permits "anonymized data" analysis as a supplementary support function yet separately forbids any company from using psychotherapy data to train models, an inconsistency that leaves unanswered whether properly deidentified datasets can power analytics or model improvement.
Vendors and clinics will need regulatory guidance to reconcile these provisions.
Several operational gaps increase legal uncertainty. One provision that conditions AI use when sessions are recorded or transcribed omits the enumerated conditions, making it impossible for practitioners to comply in good faith. The statute also relies on FDA "guidance" for low-risk software and on HIPAA compliance as a safety backstop, but many consumer-facing mental-health tools fall outside FDA premarket paradigms and may not fit neatly into HIPAA's covered-entity framework.
Finally, the bill imposes per-violation civil penalties without defining a violation unit (per patient, per incident, per feature), which could produce wildly different exposure depending on how enforcement is interpreted.