SB 468 imposes a statutory duty on any "covered deployer" whose high‑risk artificial intelligence systems process personal information to develop, implement, and maintain a written comprehensive information security program. The program must be tailored to the deployer’s size, resources, and data holdings and include administrative, technical, and physical safeguards such as designated staff, risk assessments, employee training, third‑party supervision, access limits, encryption, monitoring, and incident documentation.
The bill matters because it links AI deployment to explicit data‑security obligations and makes violations a deceptive trade practice under California’s Unfair Competition Law (UCL). That combination creates both a compliance baseline for AI systems that handle personal data and a civil‑enforcement pathway that could produce injunctions, restitution claims, and litigation risk for deployers operating in California.
At a Glance
What It Does
Requires covered deployers whose high‑risk AI systems process personal information to establish and maintain a written information security program with administrative, technical, and physical safeguards tailored to the deployer’s size, resources, and data volume. The statute enumerates specific program elements—designation of responsible employees, risk assessments, training, access controls, encryption, monitoring, vendor contracts, regular reviews, and incident post‑mortems.
Who It Affects
Any entity qualifying as a covered deployer that conducts business in California and operates high‑risk AI systems processing personal information — from cloud and AI platform providers to companies embedding high‑risk models in products or services — plus their contractors and third‑party service providers who handle that data. IT, security, legal, compliance, and procurement teams will carry the operational burden.
Why It Matters
The bill sets a statutory, granular baseline for AI‑related data security rather than leaving protections to general privacy law or voluntary standards. By folding violations into the UCL it creates a concrete enforcement lever that increases litigation and regulatory exposure for deployers and their vendors.
What This Bill Actually Does
SB 468 creates a state-law duty for covered deployers of high‑risk AI systems that process personal information: they must develop, implement, and maintain a written, comprehensive information security program. The statute ties the scope of that program to four practical factors — the deployer’s size, available resources, amount of stored data, and the sensitivity of and need for confidentiality of that data — so the program is meant to scale from large platforms down to smaller operators.
The bill specifies what must be inside the written program. At the administrative level it requires designating one or more employees to maintain the program, documenting reasonably foreseeable internal and external risks, instituting ongoing employee and contractor training (including temps), mandating policy compliance, and implementing disciplinary measures and controls to block terminated employees from accessing records.
It also requires written policies controlling off‑site storage, access, and transport of physical records.

On third‑party relationships, the statute requires deployers to take "reasonable steps" when selecting vendors and to contractually require vendors to implement and maintain appropriate security measures. The bill therefore shifts part of the compliance chain onto procurement and contract management: covered deployers must both vet third parties and bind them to commensurate protections.

Technically, the bill lists concrete controls the program should include where feasible: secure user‑authentication protocols (unique IDs, controlled credentials, password practices, blocking access after multiple failed attempts), access restrictions limited to those who need data to perform their jobs, encryption of data transmitted wirelessly or across public networks, encryption of personal information on portable devices, current firewall protection and operating‑system patches for internet‑connected systems, and reasonably current malware protection.

It also requires regular monitoring, documented incident response with a mandatory post‑incident review, and periodic review of safeguards at least annually and whenever a material business change occurs.

Finally, SB 468 makes noncompliance actionable under the Unfair Competition Law. That placement exposes covered deployers to the civil‑enforcement tools available under California law and makes information security a business‑law obligation rather than just a best practice.
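One of the enumerated authentication controls — blocking access after multiple unsuccessful login attempts — is concrete enough to sketch. The following is a minimal illustration only; the class name, method names, and the five-attempt threshold are all assumptions (the bill does not fix a number), not statutory requirements.

```python
from dataclasses import dataclass, field

MAX_FAILED_ATTEMPTS = 5  # illustrative threshold; the statute does not specify one


@dataclass
class AccountLockoutPolicy:
    """Sketch of the 'block access after multiple unsuccessful attempts'
    control the bill enumerates. All names here are hypothetical."""

    failed_attempts: dict = field(default_factory=dict)

    def record_failure(self, user_id: str) -> bool:
        """Record a failed login; return True if the account is now locked."""
        count = self.failed_attempts.get(user_id, 0) + 1
        self.failed_attempts[user_id] = count
        return count >= MAX_FAILED_ATTEMPTS

    def is_locked(self, user_id: str) -> bool:
        return self.failed_attempts.get(user_id, 0) >= MAX_FAILED_ATTEMPTS

    def reset(self, user_id: str) -> None:
        """Clear the counter after a successful login or an admin unlock."""
        self.failed_attempts.pop(user_id, None)
```

A real deployment would also need timed lockout expiry and audit logging to satisfy the bill's monitoring and incident-documentation requirements; this fragment only shows the counting-and-blocking core.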
The Five Things You Need to Know
The bill requires a written comprehensive information‑security program for any covered deployer whose high‑risk AI systems process personal information and directs that the program be tailored to the deployer’s size, resources, data volume, and the confidentiality needs of that data.
The program must designate one or more employees to maintain it and include documented risk assessments, ongoing employee and contractor training (including temporary staff), and disciplinary policies to enforce compliance.
Technical controls enumerated include secure authentication (unique IDs, controlled credentials, blocking after failed logins), access restrictions by job need, encryption for wireless and public‑network transmissions, and encryption of personal information on portable devices.
Covered deployers must take reasonable steps to select third‑party service providers capable of protecting personal information and must require vendors by contract to implement and maintain appropriate security measures.
A violation of the section is treated as a deceptive trade practice under California’s Unfair Competition Law, creating a civil‑enforcement route for regulators and private parties.
Section-by-Section Breakdown
Statutory duty to protect personal information
Subsection (a) establishes the basic statutory obligation: a covered deployer doing business in California has a duty to protect personal information held in connection with high‑risk AI systems. That sentence functions as the law’s hook — it converts what might otherwise be guidance into a legal duty, making compliance an affirmative obligation rather than optional.
Requirement to develop and maintain a written program
Subdivision (b) requires covered deployers whose high‑risk AI systems process personal information to develop, implement, and maintain a comprehensive information‑security program in writing. The program must cover administrative, technical, and physical safeguards and be scaled according to four enumerated factors: size, resources, amount of data, and need for confidentiality. Practically, that pushes organizations to document decision‑making about scope and resourcing so that auditors or litigants can assess whether safeguards were proportionate.
Administrative controls, staffing, policies, and training
These subsections require specific program elements: alignment with other applicable state or federal safeguards, designation of responsible employee(s), formal identification and assessment of foreseeable risks, mandatory employee and contractor education (including temporary workers), enforceable policies and disciplinary measures, and controls to prevent terminated employees from retaining access. For compliance teams this means formalizing governance, HR processes, and training records as part of the security program.
Third‑party supervision, physical safeguards, monitoring, and incident documentation
The statute requires written policies for supervising third‑party service providers: reasonable selection steps plus contractual obligations for vendors to implement appropriate security measures. It also mandates reasonable physical access restrictions (e.g., locked storage for physical records), ongoing monitoring to detect unauthorized access, and documentation of response actions after security incidents with a mandatory post‑incident review. Those provisions directly implicate procurement, facilities, and incident‑response procedures.
Enumerated technical controls
Subdivision (c)(12) catalogues technical measures that the program should, to the extent feasible, include: secure user authentication (credential controls, password practices, biometric or token options), access controls that limit access by job necessity, encryption of wireless and public‑network transmissions, encryption of portable devices, current firewall protection and OS patches for internet‑connected systems, and reasonably current malware protection. The phrasing "to the extent feasible" signals some flexibility, but the list functions as a practical checklist for auditors and litigants.
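Because the enumerated controls read like a checklist, a compliance team might track them as structured data. The sketch below is purely illustrative: the control labels are paraphrased from the analysis above, and the gap-report function is a hypothetical helper, not a compliance tool or legal advice.

```python
# Hypothetical checklist mirroring the technical controls enumerated in
# subdivision (c)(12). Labels are paraphrases, not statutory text.
CONTROLS = [
    "secure user authentication with lockout after failed attempts",
    "access limited by job necessity",
    "encryption of wireless and public-network transmissions",
    "encryption of personal information on portable devices",
    "current firewall protection and OS patches",
    "reasonably current malware protection",
]


def feasibility_gaps(status: dict[str, bool]) -> list[str]:
    """Return enumerated controls not yet implemented, so the written
    program can document each gap under the statute's 'to the extent
    feasible' qualifier."""
    return [control for control in CONTROLS if not status.get(control, False)]
```

Documenting why each reported gap is infeasible, rather than silently omitting a control, is what the "to the extent feasible" phrasing appears to anticipate.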
Enforcement mechanism: Unfair Competition Law
Subdivision (d) converts violations into a deceptive trade act or practice under California’s Unfair Competition Law (UCL). That placement means enforcement can proceed through the UCL’s remedies and procedure, potentially including injunctive relief and restitution via public enforcement or private suits, rather than a bespoke statutory penalty framework.
Who Benefits and Who Bears the Cost
Who Benefits
- California residents whose personal information is processed by high‑risk AI systems — they gain an explicit statutory baseline requiring documented safeguards, training, encryption, and access controls.
- Information‑security and compliance teams at deployers that already meet or exceed the enumerated controls — the statute gives them a clear standard to cite in audits and contractual negotiations and reduces uncertainty about expectations.
- Security and managed‑service vendors — demand for third‑party security services, encryption tools, incident‑response support, and vendor‑management solutions is likely to rise as covered deployers implement the program requirements.
Who Bears the Cost
- Covered deployers (particularly smaller firms or startups) — they must invest in written programs, designated staff, training, technical controls, and vendor management, which may be costly and operationally disruptive.
- Third‑party service providers — vendors will face increased contractual obligations and vetting processes and may need to upgrade their own security posture to win or retain business with deployers subject to the law.
- Legal and compliance teams — enforcement via the UCL creates litigation and regulatory risk, increasing legal monitoring and potential defense costs if plaintiffs or prosecutors challenge a deployer’s program adequacy.
Key Issues
The Core Tension
The central dilemma is between creating a concrete, enforceable baseline of data‑security controls for high‑risk AI (protecting individuals and setting clear expectations) and preserving enough flexibility for diverse deployers and rapidly changing technology (avoiding prescriptive rules that become obsolete or impose disproportionate burdens). Enforcement via the UCL gives the baseline teeth but amplifies litigation risk, turning reasonable technical judgments into potential legal disputes.
Two implementation frictions stand out. First, the bill text uses operative labels — "covered deployer" and "high‑risk artificial intelligence systems" — but does not define them in the excerpt provided. Compliance hinge points (who is in scope, what counts as high‑risk) will therefore depend on definitions elsewhere in the statutory scheme or on regulator guidance; absent those, businesses face uncertainty about applicability.

Second, the statute mixes qualitative, firm‑specific obligations ("reasonable steps," "to the extent feasible," "reasonably current") with a detailed list of technical controls. That hybrid raises two problems: it invites fact‑intensive litigation over whether particular technical choices were "reasonable," and it risks becoming outdated as security practices evolve.
There are also cross‑cutting operational trade‑offs. The bill encourages stronger authentication modalities — including biometrics — which can improve account security but also raises separate privacy and retention concerns and potential conflicts with other biometric or data‑minimization rules.
Routing enforcement through the UCL increases exposure to public and private litigation but leaves open questions about the appropriate mix of remedies, the role of state regulators versus private plaintiffs, and whether civil litigation will produce consistent technical standards. Finally, small deployers may lack the resources to implement every enumerated control immediately, and the statute does not supply funding or phased timelines; regulators will need to prioritize guidance to avoid imposing disproportionate burdens on lower‑risk actors.