AB 316 adds Civil Code section 1714.46 and prevents a defendant who developed, modified, or used an artificial intelligence system from asserting that the AI autonomously caused the plaintiff’s harm. The statute defines “artificial intelligence” broadly as engineered or machine-based systems that vary in autonomy and infer outputs from inputs to influence physical or virtual environments.
This change narrows a tactical defense in civil litigation and refocuses legal accountability on human actors and organizations behind AI systems. For businesses, developers, deployers, insurers, and litigators, the statute alters how causal responsibility will be pleaded, defended, and insured when AI is implicated in injury claims.
At a Glance
What It Does
The bill creates Civil Code §1714.46, which bars defendants who developed, modified, or used an AI system from asserting that the AI’s autonomous operation absolves them of liability for a plaintiff’s harm. It also provides a statutory definition of “artificial intelligence.”
Who It Affects
Developers, companies that modify or deploy AI systems in products or services offered to Californians, defense counsel, plaintiffs’ lawyers, and insurers will be directly affected. Compliance and product-safety teams must reassess risk allocation and documentation practices.
Why It Matters
The statute codifies a doctrine of human accountability for AI-related harms, likely increasing exposure for firms tied to system design, deployment, or use. It encourages upstream controls (design, training, oversight) and will change discovery and litigation strategy in cases involving complex models.
What This Bill Actually Does
The new Civil Code provision has three moving parts: a definition of artificial intelligence, a blunt prohibition on one particular defense, and an explicit preservation of all other defenses and comparative-fault evidence. The definition is intentionally capacious: an “engineered or machine-based system” that can have different levels of autonomy and that infers how to produce outputs from inputs to influence environments.
That wording pulls in a wide range of models and systems used in both physical devices and software services.
The core rule is both procedural and substantive: in any civil action alleging that an AI caused harm, defendants who developed, modified, or used the AI may not assert that the AI’s autonomous action was the cause of the harm. Practically, this prevents a defendant from resting its defense on the claim that responsibility lies with an inscrutable machine decision rather than with human choices about design, training data, deployment, or monitoring.
The statute, however, does not prevent defendants from adducing evidence on causation or foreseeability, or from raising any other affirmative defense — courts will still consider technical causation evidence, but not the categorical shield of “the machine did it.”
For litigators and risk managers, the statute reorders incentives. Plaintiffs will likely focus discovery on human decision points: model training sources, validation and safety checks, change logs, deployment authorizations, and operational overrides. Defendants must preserve records showing reasonable design, testing, and supervision, and may need to renegotiate indemnities and insurance to reflect increased exposure. Courts will face the task of translating the statutory bar into evidentiary rulings and jury instructions: judges must distinguish between admissible technical explanations of how outputs led to harm and a forbidden assertion that autonomous behavior per se severs human responsibility.
Finally, the law leaves open how doctrines like product liability, negligent design, and comparative fault interact with AI’s novel characteristics.
By removing a categorical autonomy defense, the statute pushes parties and judges to address proximate cause and foreseeability within existing tort frameworks — but those frameworks will be tested by opaque, probabilistic, and adaptive systems. The net effect is to make human actors and organizations the focal point of accountability whenever AI plays a causal role in injury claims.
The Five Things You Need to Know
Civil Code §1714.46(a) defines “artificial intelligence” as an engineered or machine-based system that can vary in autonomy and infer outputs from inputs to influence physical or virtual environments.
Section 1714.46(b) bars defendants who developed, modified, or used AI from asserting that the AI autonomously caused the plaintiff’s harm — the statute prohibits that specific defense.
The statutory bar targets three categories of defendants: developers (creators), modifiers (entities that change models/systems), and users (those who deploy or operate AI), not just original manufacturers.
Section 1714.46(c) preserves all other affirmative defenses and allows defendants to introduce evidence relevant to causation, foreseeability, and comparative fault; the bill does not create strict liability or remove traditional causation inquiries.
The law applies within civil litigation only; it neither prescribes damages nor creates regulatory standards or criminal penalties — it alters which defensive theories are legally permissible.
Section-by-Section Breakdown
Broad statutory definition of ‘artificial intelligence’
This subsection defines the covered technology as engineered or machine-based systems that can vary in autonomy and that infer outputs from inputs to influence environments. The phrasing covers both systems embedded in physical devices and cloud-based models, and it intentionally focuses on functional behavior (“inferring outputs”) rather than specific architectures. Practically, the definition is expansive enough that many modern ML systems used in consumer products, industrial controls, and online services will fall within its scope.
Prohibition on asserting AI autonomy as a defense
This is the operative sentence: in any action alleging harm caused by AI, a defendant who developed, modified, or used the AI may not assert that the AI autonomously caused the harm. Mechanically, the provision removes a categorical defense strategy that seeks to attribute responsibility solely to machine autonomy. Courts will need to operationalize that bar — deciding when an argument crosses from admissible technical causation into the prohibited claim that the machine by itself severs human accountability.
Preserves other affirmative defenses and causation evidence
This subsection makes clear that defendants retain the ability to present any affirmative defense besides the barred autonomy claim, including traditional evidence on causation and foreseeability. That preserves the structure of negligence and product-liability litigation: defendants can still argue that the plaintiff failed to prove causation or that the harm was not foreseeable, but they cannot rely on a blanket assertion that responsibility lies with an autonomous system.
Allows evidence of comparative fault
The statute explicitly permits defendants to introduce evidence relevant to the comparative fault of any other person or entity. This preserves allocation of responsibility among multiple human or corporate actors (for example, an integrator, a vendor of training data, or a service operator) and signals that the legislature intended to fit AI cases into existing frameworks for dividing fault rather than creating an immunity pathway.
Who Benefits and Who Bears the Cost
Who Benefits
- Plaintiffs and personal-injury claimants — by removing a categorical autonomy shield, injured parties gain clearer routes to hold human actors or organizations accountable and to secure discovery focused on human decision points.
- Consumer safety advocates and regulators — the statute aligns legal incentives with safety oversight by encouraging operators and developers to adopt controls, transparency, and testing to reduce litigation risk.
- Compliance and product-safety teams — clearer allocation of responsibility rewards organizations that document design, validation, and monitoring processes; robust governance becomes a competitive risk-management advantage.
- Plaintiffs’ litigators — the bar on the autonomy defense eliminates one line of defense and may increase leverage in early case posture and settlement discussions.
Who Bears the Cost
- AI developers, modifiers, and deployers — these entities face higher exposure because they cannot escape liability by pointing to the system’s autonomy; they must demonstrate reasonable design, testing, and supervision.
- Insurers and underwriters — expect increased claims and re-pricing of coverage for AI-related risks, with potential restrictions or higher premiums for systems judged to lack adequate controls.
- Businesses that integrate third-party AI components — integrators may face greater indemnity and warranty disputes as plaintiffs seek fault across the supply chain and defendants shift blame to other human actors.
- Defense counsel and courts — more complex discovery over models, training data, and decision logs will increase litigation costs and judicial workload as judges gatekeep technical evidence without allowing the autonomy shield.
Key Issues
The Core Tension
The central tension is accountability versus tractable causation. The statute insists on human and organizational responsibility for harms linked to AI, which promotes safety incentives, but it also forces courts and litigants to resolve technically complex causation questions where evidence is often opaque, probabilistic, or protected — a trade-off between moral and legal clarity on one hand and practical proof on the other.
The statute resolves one tactical defense but leaves open thorny implementation questions. First, the phrase “autonomously caused” will be litigated: judges must draw lines between admissible expert testimony explaining how system outputs produced harm and impermissible assertions that machine autonomy eradicates human responsibility.
That line will be fact-intensive and may produce inconsistent rulings across cases.
Second, the law increases demand for technical discovery — model weights, training datasets, evaluation logs, and deployment change histories — while existing protections for trade secrets and privacy remain in tension with plaintiffs’ need for evidence. Courts will have to balance disclosure against confidentiality, a process that will shape practical access to the information plaintiffs need to prove causation.
Third, removing the autonomy defense does not eliminate causation problems inherent in probabilistic, adaptive systems. Plaintiffs still must prove causation and foreseeability; in many instances that remains difficult.
The statute may therefore shift litigation toward showings of negligent design, inadequate validation, or faulty operational controls, but it may also push parties into creative procedural strategies (multiple defendants, expanded discovery) and contractual risk-shifting upstream in supply chains.