Codify — Article

California SB 1248: Limits and Oversight for State Automated Decision Systems

Requires human review, accuracy checks, nondiscrimination monitoring, and data safeguards when California agencies use AI-based systems for licensing or benefits decisions.

The Brief

SB 1248 creates a statutory baseline for how California state agencies may use automated decision systems (ADS) when delivering services — explicitly covering benefit determinations and professional licensing. The bill defines ADS, bars agencies from relying solely on an ADS output for adverse actions, requires human review of adverse recommendations, mandates accuracy verification and bias monitoring, and imposes data-protection limits on inputs to those systems.

It also directs the Government Operations Agency (GovOps) to publish guidance and offer technical assistance.

This matters for licensing boards, social services programs, and any agency planning to automate routine determinations: the bill preserves human oversight and civil‑rights protections while authorizing modernization. At the same time it creates compliance obligations — quality‑control reviews, nondiscrimination monitoring, documentation/disclosure rules, and vendor safeguards — that agencies and third‑party vendors will need to budget for and operationalize.

At a Glance

What It Does

SB 1248 limits how state agencies use ADS for services by (1) prohibiting ADS outputs as the sole basis for adverse service decisions except where law expressly authorizes it, (2) requiring human review of adverse recommendations, (3) mandating accuracy verification and nondiscrimination monitoring, and (4) requiring data safeguards and quality‑control reviews.

Who It Affects

State agencies that administer benefits or professional licenses (including licensing boards under the Department of Consumer Affairs), applicants for those services, and private vendors that supply ADS products or processing services to the state. GovOps is authorized to issue implementation guidance and provide technical assistance.

Why It Matters

The bill establishes a statewide minimum standard for using algorithmic tools in high‑stakes administrative decisions, balancing modernization with due process and civil‑rights protections. For compliance teams and program managers, it converts several best practices into statutory duties that will affect procurement, contracting, staffing, and recordkeeping.


What This Bill Actually Does

SB 1248 starts by defining key terms: “automated decision system” covers computational processes from machine learning, statistical modeling, or AI that issue scores, classifications, or recommendations and materially impact people. The bill explicitly excludes low‑risk IT tools (spam filters, firewalls, calculators, simple databases) to keep the focus on systems that replace or assist discretionary judgment.

It also defines the scope of covered “services” to include public benefits and licensing actions, but not competitive determinations.

The core operational rule is simple: agencies may use ADS to inform decisions, including to screen for minimum eligibility thresholds, but may not rely on an ADS output as the sole basis for an adverse determination — for example, denying a license or benefit — unless a separate federal or state law expressly allows doing so. Whenever an ADS indicates noneligibility or suggests an adverse action, the agency must have a human review the output before taking that action.
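As an illustration only (nothing in the bill prescribes an implementation), the core rule can be sketched as a routing gate in an eligibility workflow. The names `AdsOutput`, `route_decision`, and `review_queue` are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AdsOutput:
    applicant_id: str
    recommendation: str  # "eligible", "ineligible", or "adverse"
    score: float

def route_decision(output: AdsOutput, review_queue: list) -> str:
    """Hypothetical gate: favorable ADS outputs may proceed, but any
    noneligibility or adverse recommendation is queued for a human
    reviewer before the agency may act on it."""
    if output.recommendation in ("ineligible", "adverse"):
        review_queue.append(output)   # human must review before action
        return "pending_human_review"
    return "auto_proceed"             # ADS may inform, not decide alone

queue = []
print(route_decision(AdsOutput("A-2", "ineligible", 0.31), queue))
# pending_human_review
```

The design point is that the ADS never issues the adverse determination itself; it only flags cases, and the adverse path always terminates in a human queue.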

The statute also bars agency users from presenting work produced solely by an ADS as their own original work, and it requires agencies to document or disclose when an ADS was material to a decision, consistent with other legal constraints.

On quality and fairness, the bill requires agencies to verify ADS output accuracy and to promote nondiscrimination by monitoring and periodically evaluating their systems for disparate impacts across a broad set of protected characteristics. Agencies must also ensure application submissions meet required formats and fields so ADS inputs are complete.

The agency director (or designee) must run an initial and then periodic quality control review of ADS outputs — or statistically valid samples — to assure “acceptable accuracy.” The statute also places limits on inputting legally protected information, personally identifiable information, and protected health information into ADS, and it requires safeguards when a third‑party system is used, including access controls and appropriate security standards.

Finally, SB 1248 gives GovOps the authority to develop and publish guidance to help agencies comply and requires GovOps to notify the Joint Legislative Budget Committee before issuing that guidance. GovOps may also provide technical assistance on request.

The bill therefore sets statutory guardrails and delegates much of the practical specification of standards — sampling methods, accuracy thresholds, monitoring frequency, and vendor contract terms — to future guidance and agency practice.

The Five Things You Need to Know

1

The bill defines “automated decision system” narrowly enough to exclude basic IT tools (spam filters, firewalls, calculators, simple databases) but includes machine‑learning and statistical models that issue scores, classifications, or recommendations that materially affect people.

2

Section 12898.1(c) bars using an ADS output as the sole basis for an adverse service determination (denial, suspension, or revocation of benefits or licenses), except where federal or state law expressly authorizes such use.

3

Section 12898.1(d) requires that any ADS output suggesting noneligibility or other adverse action be reviewed by a human before the agency takes an adverse action.

4

Section 12898.1(i) obligates the agency director or designee to perform an initial and periodic quality control review of ADS outputs, or a statistically valid sample, to assure “acceptable accuracy.”

5

GovOps may publish guidance and offer technical assistance to agencies under Sections 12898.2–12898.3, but must notify the Joint Legislative Budget Committee before issuing that guidance.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 12898

Definitions and scope

This section sets the groundwork by defining “automated decision system,” “services,” “personally identifiable information,” and other key terms. Practically, those definitions determine which tools are regulated and whether a given program — for example, a licensing board’s credentialing workflow or a social services eligibility engine — falls within the chapter. The inclusion of a “materially impacts natural persons” threshold focuses the law on systems used in high‑stakes contexts, while the explicit exclusions (spam filters, firewalls, calculators, datasets) narrow applicability to avoid trivial coverage.

Section 12898.1(a)–(f)

Use limitations, human oversight, and transparency

These subsections establish the behavioral rules for agency users: agencies may use ADS as an input but cannot substitute ADS outputs for human judgment; ADS can screen for minimum eligibility thresholds; users must not claim ADS‑only work as their own; and agencies must provide a way to document or disclose when ADS materially influenced a decision. For practitioners, this creates a recordkeeping and disclosure duty that will intersect with existing administrative‑law transparency obligations and FOIA/public records practices.
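To make the recordkeeping duty concrete, here is one minimal sketch of a decision record that captures ADS materiality for later disclosure. All field names (`ads_material`, `ads_version`, etc.) are assumptions for illustration, not statutory requirements:

```python
import datetime
import json

def decision_record(case_id, decision, ads_material, ads_version=None):
    """Hypothetical disclosure record: notes whether an ADS was
    material to a decision, supporting documentation duties and
    later public-records requests. Field names are illustrative."""
    return {
        "case_id": case_id,
        "decision": decision,
        "ads_material": ads_material,   # was an ADS material to this outcome?
        "ads_version": ads_version,     # which tool/model version, if any
        "recorded_at": datetime.datetime.now(
            datetime.timezone.utc
        ).isoformat(),
    }

rec = decision_record("LIC-2031", "denied", ads_material=True, ads_version="v2.3")
print(json.dumps(rec, indent=2))
```

A structured record like this is what lets an agency answer, case by case, whether automation materially influenced an outcome.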

Section 12898.1(g)

Accuracy verification and nondiscrimination monitoring

Subsection (g) requires agencies to verify ADS outputs, monitor systems for bias across a long list of protected characteristics, and periodically evaluate ADS use to reduce risks of discriminatory outputs. Practically this pushes agencies to implement bias‑testing regimens, drift monitoring, and remediation pathways. The statute leaves the technical specifics — which fairness metrics to use, acceptable thresholds, or remediation timelines — to agencies and GovOps guidance, so procurement and vendor clauses will need to reflect these future technical standards.
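Because the statute names no fairness metric, any monitoring regimen is an agency design choice. One common, simple check (illustrative only, not mandated by SB 1248) compares adverse-action rates across groups against a reference group:

```python
from collections import defaultdict

def adverse_rate_ratios(decisions, reference_group):
    """Compute each group's adverse-action rate and its ratio to the
    reference group's rate. Ratios well above 1.0 flag a group for
    closer review. The metric and any threshold are illustrative;
    the statute leaves both to guidance and agency practice."""
    counts = defaultdict(lambda: [0, 0])  # group -> [adverse, total]
    for group, adverse in decisions:
        counts[group][1] += 1
        if adverse:
            counts[group][0] += 1
    rates = {g: a / t for g, (a, t) in counts.items()}
    ref_rate = rates[reference_group]
    return {g: r / ref_rate for g, r in rates.items()}

# Toy data: (group, adverse_action) pairs from a month of decisions.
data = [("A", True), ("A", False), ("A", False), ("A", False),
        ("B", True), ("B", True), ("B", False), ("B", False)]
ratios = adverse_rate_ratios(data, reference_group="A")
# Group A: 1/4 adverse; group B: 2/4 adverse -> ratio 2.0 for B
```

In practice an agency would run this over real protected-characteristic categories on a recurring cadence and pair flagged ratios with a remediation pathway.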

Section 12898.1(h)–(i)

Data protection, third‑party safeguards, and quality control review

Subsection (h) prohibits users from inputting PII, PHI, or other legally protected information into an ADS unless necessary and authorized, and it requires safeguards for third‑party systems (access controls, security standards). Subsection (i) requires the director or designee to perform an initial and periodic quality control review of outputs or statistically valid samples to assure acceptable accuracy. Together, these provisions will drive contract language with vendors, data‑minimization strategies, and the establishment of sampling and audit protocols within agencies.
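The statute does not define “statistically valid sample,” so the following is one conventional reading, sketched under that assumption: a sample size for estimating the outputs' accuracy rate within a chosen margin at roughly 95% confidence, with a finite-population correction:

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """One conventional approach to a 'statistically valid sample':
    size needed to estimate an accuracy rate within +/- margin at
    ~95% confidence (z = 1.96), corrected for a finite population.
    Illustrative only; the statute prescribes no method."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)       # finite-population correction
    return math.ceil(n)

def accuracy(sample_results):
    """Share of sampled ADS outputs a human reviewer confirmed correct."""
    return sum(sample_results) / len(sample_results)

# e.g., reviewing a month of 10,000 determinations:
n = sample_size(10_000)  # -> 370 cases to pull for human review
```

Whatever method an agency adopts, documenting it (sample frame, margin, confidence level) is what makes the periodic quality control review defensible.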

Sections 12898.2–12898.3

GovOps guidance and technical assistance

GovOps may develop and publish guidance to help agencies implement the chapter and may provide technical assistance on request. Before issuing guidance, GovOps must notify the Joint Legislative Budget Committee. This centralizes the development of implementation detail but stops short of setting binding statewide technical standards in statute; agencies should expect guidance to address operational definitions (e.g., “acceptable accuracy”), monitoring cadence, sampling methods, and vendor contract expectations.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Applicants for licenses and public benefits — they retain a statutory right to human review before an adverse decision and protections against ADS‑only determinations, preserving avenues for appeal and human discretion.
  • Regulated professionals and workforce programs — by requiring quality checks and nondiscrimination monitoring, the bill reduces the risk that automated tools will systematically block credentialing or benefit access for protected groups.
  • Civil‑rights and consumer advocates — the statute creates a statutory foothold for algorithmic accountability, mandating bias monitoring and limiting opaque automation in high‑stakes government decisions.
  • Agency compliance, procurement, and IT teams — GovOps’ authority to publish guidance and provide technical assistance gives these teams a centralized resource to translate statutory duties into procurement specifications, monitoring plans, and audit regimes.

Who Bears the Cost

  • State agencies administering benefits and licensing boards — they must staff human review roles, design quality‑control and bias‑monitoring programs, update procurement and vendor management practices, and carry the attendant operational costs.
  • Third‑party ADS vendors and integrators — vendors will face stricter contracting requirements, security and access control obligations, and potential demands for explainability, audit logs, or data export for agency quality control.
  • GovOps and oversight bodies — GovOps will need capacity to draft technically specific guidance and provide ongoing assistance without a dedicated funding stream, absorbing coordination and expertise costs.
  • Taxpayers and fiscal officers — implementing periodic sampling, human review, and monitoring programs likely increases operational budgets and could slow throughput on automation initiatives that previously reduced staffing needs.

Key Issues

The Core Tension

The bill tries to resolve an unavoidable trade‑off: accelerate government service delivery with automated tools while preserving due process, privacy, and equity. Speed and efficiency push toward greater automation; accountability and civil‑rights concerns push toward human oversight and strict safeguards — and the statute leaves agencies to balance those goals without enumerating neat technical thresholds.

SB 1248 builds a protective framework but leaves important technical and enforcement detail to subsequent guidance and agency practice. Terms like “materially impacts,” “acceptable accuracy,” and what constitutes a “statistically valid” sample are undefined in the statute; agencies must await GovOps guidance or develop their own technical standards, which risks uneven implementation across departments.

The bill requires human review and accuracy verification but does not set objective metrics, timelines for remediation, or enforcement remedies (civil penalties or administrative sanctions are absent), leaving ambiguity about compliance consequences.

Operational tensions will appear in practice. The prohibition on inputting PII/PHI except when necessary may conflict with systems that need identifiable data to make correct determinations, forcing agencies into careful narrowings of inputs or complex mitigation in vendor contracts.

Requiring human review of every adverse recommendation preserves due process but can reintroduce staffing bottlenecks and negate some efficiency gains from automation unless agencies design selective human‑in‑the‑loop triggers. Finally, vendor relationships will become more complex: agencies will want audit access and explainability, while vendors may claim trade‑secret protections or limit data access, producing procurement and legal friction.
