Codify — Article

California bill requires clinical AI transparency, patient notice, and liability disclosure

AB 2575 mandates detailed point‑of‑use disclosures about AI and clinical decision support tools to clinicians and patients, shifting compliance burdens and liability onto health providers and developers.

The Brief

AB 2575 obligates licensed health facilities, clinics, physician offices, and group practices to disclose detailed information about any artificial intelligence or clinical decision support system used in patient care. The statute lists what must be disclosed — from developer and funding to training data representativeness, validation, inputs, ongoing maintenance, and known limitations — and requires that clinicians receive the disclosure at the time they use or view a tool’s output and that affected patients receive a plain‑language notice linked in their health record.

The bill matters because it converts a set of technical and procurement practices into enforceable compliance duties and creates new legal exposure: it explicitly notifies users that developers and health entities are liable for AI-related harm, preserves clinicians’ right to override recommendations, and assigns enforcement to licensing authorities and unfair‑competition law. That combination will affect procurement, clinical workflow, EHR vendors, and how developers document and publish technical details about their models.

At a Glance

What It Does

The bill requires covered health entities to provide specified disclosures about any ‘covered tool’ (AI or clinical decision support) to clinicians at the time they use or view outputs, and to provide plain‑language disclosures linked in the patient’s record when a patient’s care or data was affected. Required content ranges from developer/funding and model type to training data demographics, validation, performance metrics, maintenance, and known risks.

Who It Affects

The rule applies to licensed health facilities, clinics, physician offices, and offices of group practice that use or deploy covered tools, and therefore touches clinicians, health IT/EHR vendors, medical device and software developers, and patients whose data or care is implicated.

Why It Matters

AB 2575 establishes a baseline transparency regime for clinical AI in California and ties noncompliance to licensing actions and Section 17200 unfair‑competition liability. That combination raises compliance costs, creates a documentation race among developers, and alters legal risk calculations for providers deploying AI in care.


What This Bill Actually Does

AB 2575 draws a bright line: if a licensed California health entity uses an AI system or a clinical decision support system in patient care, it must give people who use or see that system a set of disclosures. The bill defines covered tools broadly to include engineered systems that infer outputs and clinical decision support that produces predictions, classifications, recommendations, evaluations, or analyses.

The disclosure obligation attaches to the entity that uses or deploys the tool in care settings defined elsewhere in California law.

The bill specifies the catalog of content the disclosure must contain. That catalog is unusually detailed: it requires identification of the developer and funding source, any foundation model used, and a plain description of the tool’s output; the tool’s intended use and population; banned or cautioned uses; the inputs consumed by the tool; how the tool generates outputs; and a narrated account of development choices, including training‑set composition, representativeness by demographic groups, known biases tied to protected characteristics, and the fairness processes used in development.

It also requires a description of validation, qualitative performance measures, and the processes for ongoing maintenance, updates, and continued validation or fairness assessment.

Timing and delivery are central to the bill’s mechanics. Clinicians or other people who use or view a tool must receive the disclosure at the moment they access recommendations or outputs, and the statute further obliges the entity to provide a plain‑language disclosure that is linked in the health record of any patient whose care was affected or whose data was used.

The bill insists disclosures be provided with ‘ample time’ so clinicians can review them before deciding whether and how to rely on the tool, and it expressly preserves a caregiver’s right to override a tool’s output within the worker’s scope of practice or to comply with law.

Enforcement is multi‑track. Violations by hospitals and clinics are subject to the licensing enforcement regimes specified in the Health and Safety Code; violations by individual physicians fall within the Medical Board or Osteopathic Medical Board’s jurisdiction; and the bill treats a violation as an act of unfair competition under Business and Professions Code Section 17200.

The bill also mandates a notice in disclosures that developers and health entities are liable for harm resulting from the use of AI in patient care, which alters the legal landscape for both product makers and care providers.

The Five Things You Need to Know

1. The disclosure must identify the tool’s developer, funding source, any foundation model used, and provide a plain description of the tool’s output.

2. Developers or deploying entities must disclose training‑set details, including demographic representativeness and any known biases tied to protected characteristics, plus the process used to promote fairness.

3. Disclosures must be provided to clinicians at the time they use or view any recommendation or output and must be linked in a patient’s record when that patient’s care or data was affected.

4. The bill explicitly permits a direct care worker to override a tool’s output when appropriate within their scope of practice or to meet legal obligations.

5. Enforcement routes include health facility/clinic licensing actions, oversight by the Medical Board or Osteopathic Medical Board for physicians, and private enforcement under Section 17200 unfair‑competition law.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1339.76(a)

Who must disclose and to whom

This provision imposes the primary duty: any licensed health facility, clinic, physician’s office, or group practice office that uses or deploys a covered tool must disclose the statutorily required information to any licensed health care professional or other person who uses or sees outputs from the tool. Practically, that makes deploying entities responsible for preparing and delivering disclosures to users inside the care delivery workflow rather than shifting the duty to individual clinicians or developers alone.

Section 1339.76(b)(1)–(6)

Tool identity, intended use, inputs, and development transparency

These paragraphs require identification of the developer and funding, any foundation model, the tool’s intended use (including target population and user), cautioned out‑of‑scope uses, and a list of inputs the tool consumes. They also demand development detail: a description of training data or clinical research underlying recommendations, assessment of demographic representativeness, known biases by protected characteristic, relevance of training data to the deployment setting, and a stated fairness process used during development. That forces entities to turn internal model cards, technical briefs, or research supplements into disclosure artifacts consumable by clinicians.

Section 1339.76(b)(7)–(10)

Validation, performance, and maintenance

The statute compels disclosure of the validation process, qualitative performance measures, ongoing maintenance plans, and the update and re‑validation or fairness assessment schedule. For operators, this means preserving a documented lifecycle for each deployed model: initial validation evidence, monitoring metrics, remediation plans for performance drift, and a versioning approach that ties updates to repeated validation or fairness checks.
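One minimal way to keep that lifecycle auditable is an append‑only log pairing each deployed model version with its validation and fairness‑assessment evidence. The sketch below is an assumption about how an operator might implement this, not anything the statute prescribes; all names and event types are illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ValidationEvent:
    """One dated piece of lifecycle evidence for a specific model version."""
    model_version: str
    event_type: str  # e.g. "initial_validation", "revalidation", "fairness_assessment"
    performed_on: date
    summary: str

class ModelLifecycleLog:
    """Append-only log tying model versions to validation evidence."""

    def __init__(self) -> None:
        self._events: list[ValidationEvent] = []

    def record(self, event: ValidationEvent) -> None:
        self._events.append(event)

    def events_for(self, version: str) -> list[ValidationEvent]:
        return [e for e in self._events if e.model_version == version]

    def has_current_validation(self, version: str) -> bool:
        # Policy choice (illustrative): a version is deployable only if at
        # least one validation or revalidation event has been logged for it.
        return any(
            e.event_type in ("initial_validation", "revalidation")
            for e in self.events_for(version)
        )

log = ModelLifecycleLog()
log.record(ValidationEvent(
    model_version="v1.0",
    event_type="initial_validation",
    performed_on=date(2025, 1, 15),
    summary="Retrospective validation on a local inpatient cohort",
))
print(log.has_current_validation("v1.0"))
```

The point of the design is that every update to a deployed model leaves a dated evidence trail, so the disclosure’s maintenance and re‑validation claims can be backed by records rather than assertions.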

Section 1339.76(b)(11)–(12) and (c)

Liability notice, clinician override, and timing of disclosure

The bill requires an explicit notice that health entities and developers are liable for harm from AI use and states that a worker may override tool outputs as appropriate within their scope of practice. It also mandates that disclosures be furnished at the time a clinician or other person uses or views outputs and provided in plain language and linked in the health record for any affected patient. The ‘ample time’ requirement, however, is left undefined, placing practical pressure on implementers to set workflow standards that meet the statute’s intent.

Section 1339.76(d)–(e)

Enforcement and definitions

The statute routes enforcement through existing licensing frameworks for facilities and clinics, assigns physician conduct to the respective medical boards, and classifies noncompliance as unfair competition under Section 17200. The definitions section clarifies covered terms — notably defining ‘artificial intelligence’ and ‘clinical decision support system’ broadly, and importing statutory meanings for clinic, health facility, patient clinical information, and physician’s office. Those definitions determine the scope of covered tools and the population of regulated entities.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Patients whose care is affected: they gain a plain‑language disclosure linked in their record, improving visibility into when AI influenced decisions and enabling informed questions and follow‑up.
  • Frontline clinicians and care teams: they receive standardized information at the point of use and retain an explicit statutory right to override tool outputs, which supports clinical judgment and risk management.
  • Regulators and public health authorities: having standardized disclosures tied to validation and performance data will improve oversight capacity and incident investigations when AI‑related harm occurs.
  • Developers and vendors with mature documentation practices: companies that already produce model cards, validation reports, and bias assessments gain a compliance advantage and clearer procurement positioning.

Who Bears the Cost

  • Licensed health facilities, clinics, and physician offices: they must assemble, deliver, and maintain statutory disclosures and integrate them into EHR workflows, which creates upfront and ongoing compliance costs.
  • Smaller practices and rural clinics: these providers face disproportionate burden because they may lack in‑house IT, legal, or procurement teams to translate technical documentation into the required disclosures.
  • Software developers and SaaS vendors: firms will need to disclose potentially sensitive information about training data, models, and fairness processes—raising IP, contractual, and data‑sharing challenges.
  • Medical boards and licensing agencies: increased enforcement and complaint handling will raise administrative load and may require new technical expertise to evaluate disclosed materials.

Key Issues

The Core Tension

AB 2575 forces a choice between two legitimate aims: protecting patients and clinicians through detailed transparency and oversight, versus preserving the practical ability of developers and providers to deploy advanced AI without undue disclosure burdens, IP leakage, or crippling compliance costs. The bill advances patient‑facing clarity but does so by imposing disclosure and liability pressures that could slow adoption or reshape marketplace incentives.

The bill puts transparency front and center, but several implementation frictions could blunt its effects or produce unintended consequences. First, the statute demands disclosures that may be difficult to produce without revealing proprietary training data or model internals—‘how the tool generates outputs’ and detailed training‑set descriptions can clash with trade‑secret protections or third‑party licensing agreements.

Second, many modern models (including foundation models and systems trained on aggregated or de‑identified datasets) cannot neatly map training examples to deployed behavior, making representativeness claims and bias attributions technically challenging.

Operationally, the timing and delivery requirements raise tough questions. The law requires disclosures at the moment of use and a patient‑linked plain‑language notice, but it leaves ‘ample time’ undefined.

Health entities will need to decide whether to embed long technical appendices in EHR links, prepare short point‑of‑care summaries, or both—each choice has trade‑offs for clinician attention and liability exposure. Finally, the statute’s liability notice and the routing of enforcement across licensing regimes and Section 17200 create a legal landscape in which providers and developers may be more litigious or more conservative in deploying beneficial tools; smaller vendors may withdraw rather than disclose sensitive process details, and providers may avoid tools that introduce perceived legal risk.
