
California AB 410 requires bots to identify themselves and expands the definition of "bot" to include generative AI

Updates the state bot law to require automated accounts to disclose their identity and answer identity queries, adds public-enforcement penalties, and broadens coverage to generative-AI outputs.

The Brief

AB 410 revises California's bot disclosure statute by (1) widening the legal definition of “bot” to explicitly capture accounts or applications whose outputs are produced by generative artificial intelligence and that a reasonable person could mistake for a human, and (2) imposing affirmative disclosure and conduct duties on anyone who uses a bot to autonomously communicate with a person. The measure requires bots to identify themselves when they first communicate, to answer truthfully if asked whether they are a bot or a human, and to refrain from attempting to mislead about their identity.

It also preserves an exemption where another law already imposes a more prescriptive disclosure scheme.

The bill replaces the prior operative-date section with an enforcement provision giving the Attorney General and certain local public prosecutors the power to seek injunctive relief or civil penalties. For professionals managing automated accounts, generative-AI deployments, or platform compliance, the bill changes both the technical and legal risk landscape: it narrows allowable anonymous automation and creates a government-enforced baseline for disclosure and responsiveness.

At a Glance

What It Does

AB 410 requires a bot that autonomously communicates with a person to disclose at the first point of contact that it is a bot, to answer truthfully any later question about whether it is a bot or a human, and to avoid attempts to mislead about its identity. It redefines “bot” to include automated accounts or applications whose outputs are generated by generative AI and that a reasonable person could believe are human.

Who It Affects

Operators of automated online accounts and applications (including companies deploying chatbots, virtual assistants, or social-media automation), creators and integrators of generative-AI content, and large online platforms referenced by the statute. The bill’s obligations apply to any person using a bot to autonomously interact with someone in California, subject to a narrow exemption for bots already covered by a more prescriptive disclosure law.

Why It Matters

AB 410 establishes a state-level transparency floor for AI-driven communications and gives public prosecutors tools to enforce it. The bill pairs definitional expansion (bringing generative-AI outputs into scope) with affirmative behavioral duties, increasing compliance complexity for businesses that deploy conversational agents or automated posting in consumer-facing contexts.


What This Bill Actually Does

The bill tightens the rules around automated online communicators by doing two things at once: making the label "bot" cover modern AI outputs, and making it unlawful to let those automated agents pose as humans when they speak to Californians. The definition change is forward-looking: a bot now includes automated accounts or applications that a reasonable person could mistake for a human, and explicitly covers outputs produced by generative AI (text, images, audio, video, and other synthetic content).

That means many modern chatbots, assistant services, and generative-content pipelines fall within the law’s reach.

On conduct, the statute requires that if a bot “autonomously communicates” with a person, it must disclose its nonhuman identity when it first communicates, answer truthfully if a person later asks whether the agent is a bot or a human, and not try to mislead about that identity. The disclosure must be clear, conspicuous, and reasonably designed to inform the person.

The bill leaves room for other, more specific statutes to govern in their own domains: if a bot is already subject to a more prescriptive disclosure law, this chapter does not apply.

Practically, compliance means operators must engineer both front-end and back-end behaviors: display or vocalize a disclosure at first contact, program the system to reply truthfully to identity queries, and avoid dialog flows or content that would create a plausible human impersonation. The statute also defines key terms such as "artificial intelligence," "generative artificial intelligence," and "online platform" (the latter keyed to a threshold of 10 million unique monthly U.S. users), which will affect which intermediaries and deployments need to assess risk.
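
To make the front-end duty concrete, here is a minimal sketch (in Python) of first-contact disclosure for a session-based chatbot. The names ChatSession and generate_reply are invented for this illustration and do not come from the bill or any particular framework:

```python
# Hypothetical sketch, not drawn from the bill: prepend a nonhuman-identity
# disclosure to the first message of each session. The names ChatSession
# and generate_reply are invented for this example.
DISCLOSURE = "Just so you know: you're chatting with an automated bot, not a human."

class ChatSession:
    def __init__(self, generate_reply):
        self.generate_reply = generate_reply  # underlying model or agent call
        self.disclosed = False                # first-contact disclosure sent?

    def reply(self, user_message: str) -> str:
        text = self.generate_reply(user_message)
        if not self.disclosed:
            # Disclose nonhuman identity at the first point of contact.
            text = f"{DISCLOSURE}\n\n{text}"
            self.disclosed = True
        return text

# Example usage with a stand-in generator:
session = ChatSession(lambda msg: f"You said: {msg}")
print(session.reply("Hi"))       # disclosure + reply
print(session.reply("Thanks"))   # reply only
```

The design point is that the disclosure is enforced by the session wrapper rather than by the model's own output, so an unpredictable generation cannot skip it.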

Finally, the bill swaps the prior operative-date language for an enforcement scheme that lets state and local public prosecutors seek injunctions or penalties, making compliance a matter of regulatory risk rather than just reputational exposure.

The Five Things You Need to Know

1. The bill sets civil penalties at $1,000 per violation of the chapter and authorizes injunctive relief.

2. "Online platform" is defined as a public-facing internet website, web application, or digital application with 10,000,000 or more unique monthly United States visitors or users for a majority of months in the preceding 12 months.

3. The new definition of "generative artificial intelligence" explicitly covers systems that produce derived synthetic content (text, images, video, and audio) that emulates the structure and characteristics of the system's training data.

4. The chapter declares its duties cumulative with other laws and includes a severability clause, preserving other disclosure obligations and allowing individual provisions to survive if parts are invalidated.

5. The statute removes the prior operative-date provision and replaces it with Section 17943, which expressly authorizes the Attorney General, district attorneys, county counsels, city attorneys, or city prosecutors to bring enforcement actions.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 17940 (Definitions)

Updated definitions for AI, bot, generative AI, online platform, and person

This section revises the definitional backbone the rest of the chapter uses. It adds a broad, functional definition of "artificial intelligence" and narrows the practical question of what counts as a bot to two prongs: (1) the automated account or application is one a reasonable person could mistake for a human, and (2) substantially all of its actions or posts are not the result of a human or are outputs of generative AI. The generative-AI definition lists content types (text, images, video, audio) and frames them as synthetic content derived from training data. The "online platform" definition introduces a bright-line audience threshold (10 million or more unique monthly U.S. users) that will matter when determining which intermediaries are directly addressed by platform-related language in the statute.
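
As a rough illustration of how the bright-line test could be operationalized, here is a sketch of the majority-of-months calculation. The function and parameter names are invented for this example, and the statute's actual counting rules would control:

```python
# Hypothetical sketch of the "online platform" threshold test: 10,000,000+
# unique monthly U.S. visitors or users for a majority of months in the
# preceding 12 months. All names here are invented for illustration.
THRESHOLD = 10_000_000

def is_online_platform(monthly_unique_us_users: list[int]) -> bool:
    """Takes unique U.S. user counts for the preceding 12 months."""
    months_over = sum(1 for n in monthly_unique_us_users[-12:] if n >= THRESHOLD)
    return months_over > 6  # more than half of 12 months

# Over the threshold in 7 of the last 12 months -> covered.
print(is_online_platform([9_000_000] * 5 + [11_000_000] * 7))  # True
```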

Section 17941(a) (Prohibition on misleading bot use)

Bars using bots to deceive consumers or influence elections unless disclosed

Subsection (a) preserves the preexisting prohibition on using a bot to mislead a person about its artificial identity when the intent is to deceive the person about the content of the communication in order to incentivize a commercial transaction or influence an election. Liability under this subsection is avoided where there is disclosure that the communicator is a bot. Practically, this ties the prohibited conduct to intent and to specific end uses (commerce and elections), rather than making all impersonatory automation per se unlawful.

Section 17941(b)-(d) (Affirmative disclosure duties)

Imposes affirmative duties: identify at first contact, answer identity queries, and avoid misleading conduct

These subsections impose affirmative operational duties on any person whose bot autonomously communicates: disclose that the agent is a bot when it first communicates, answer truthfully any subsequent question about whether it is a bot or a human, and refrain from attempting to mislead about its identity. Subsection (c) requires that disclosures be clear, conspicuous, and reasonably designed, language that pushes compliance officers to document design choices and UX placement. Subsection (d) preserves an exemption where a more prescriptive disclosure law applies, which will trigger compliance analysis when multiple laws intersect (for instance, sector-specific disclosure rules).
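
A minimal sketch of the "answer truthfully" duty, assuming a simple pattern-matching approach. The regular expression and names below are illustrative only; production systems would need far more robust intent detection:

```python
import re

# Hypothetical sketch: detect "are you a bot/human?"-style questions and
# answer truthfully instead of routing them to the model, where a
# misleading reply could slip through. Patterns are illustrative only.
IDENTITY_QUERY = re.compile(
    r"\b(are|r)\s+(you|u)\s+(a\s+)?(bot|robot|ai|human|real\s+person)\b",
    re.IGNORECASE,
)

TRUTHFUL_ANSWER = "Yes, I'm a bot: an automated program, not a human."

def handle_message(user_message: str, generate_reply) -> str:
    if IDENTITY_QUERY.search(user_message):
        return TRUTHFUL_ANSWER
    return generate_reply(user_message)

print(handle_message("Are you a bot?", lambda m: m))  # truthful answer
```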

Section 17942 (Cumulative duties and severability)

Affirms interplay with other laws and preserves surviving provisions if parts are struck down

This section makes explicit that the chapter’s duties are cumulative with other legal obligations—so compliance programs must map this statute against existing consumer-protection, advertising, and election laws. The severability clause increases the statute’s durability by allowing courts to preserve functioning pieces if a provision is invalidated, which matters for litigants challenging specific terms like the “reasonable person” standard.

Section 17943 (Enforcement)

Creates a public-enforcement mechanism with injunctive relief and a $1,000-per-violation civil penalty

The newly added Section 17943 authorizes the Attorney General, district attorneys, county counsels, city attorneys, or city prosecutors to bring civil actions seeking injunctions or civil penalties of $1,000 per violation. The design places enforcement power in public offices rather than private hands, channeling disputes into government-initiated actions rather than litigation by consumers or competitors.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Californian consumers and voters — gain clearer information about whether an interlocutor is a human or an automated agent, improving ability to assess credibility and reducing deceptive persuasion in commercial and political contexts.
  • Companies that deploy transparent bots — obtain a compliance advantage by following explicit disclosure practices and reducing litigation or enforcement risk compared with opaque operators.
  • Researchers and journalists studying online influence operations — benefit from clearer labeling that makes it easier to identify and analyze synthetic or automated accounts for reporting or academic work.

Who Bears the Cost

  • Businesses operating chatbots and automated customer-service systems — must implement UX and programmatic changes to provide first-contact disclosures and identity responses, incurring engineering, design, and compliance costs.
  • Developers and integrators of generative-AI content — face additional vetting and monitoring duties to ensure outputs don’t create misleading human impersonation, increasing content governance burdens.
  • Platforms and intermediaries below the 10M-user threshold or multi-jurisdictional operators — may need to build geo-targeting and compliance filters to avoid California-directed exposures (see the sketch after this list), creating operational complexity and monitoring costs.
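
As a simple illustration of the geo-gating pattern mentioned above, here is a sketch that defaults to disclosure when a user's location is unknown. geolocate_region is a stub standing in for a real IP-geolocation service, and all names are invented for this example:

```python
# Hypothetical sketch: gate the California disclosure flow on a
# geolocation lookup. geolocate_region() stands in for a real
# IP-geolocation service.
def geolocate_region(ip_address: str) -> str | None:
    """Stand-in for an IP-geolocation lookup (e.g., returns 'CA')."""
    return None  # stub: location unknown

def requires_ca_disclosure(ip_address: str) -> bool:
    region = geolocate_region(ip_address)
    # Conservative default: treat unknown locations as potentially
    # Californian rather than risk a missed disclosure.
    return region is None or region == "CA"
```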

Key Issues

The Core Tension

The central tension in AB 410 is between protecting people from deceptive automated communications and avoiding rules so blunt that they suppress useful automated interactions or impose unworkable technical obligations. Transparency protects consumers, but subjective standards, categorical definitions of generative outputs, and public-enforcement penalties create compliance and enforcement challenges that may chill beneficial automation or produce uneven application.

The statute combines broad, subjective standards with narrowly defined remedies, producing a number of predictable implementation and litigation flashpoints. Key terms—“reasonable person,” “substantially all,” and “autonomously communicate”—leave room for dispute about when an actor crosses the line into regulated conduct.

For example, automated systems that mix human oversight with algorithmic outputs will force courts and regulators to parse whether the activity is the "result" of a human. The "answer truthfully" requirement is operationally vague: does it require back-end audit logs proving the system's nonhuman generation, or merely that the bot state a truthful sentence when asked?

Enforcement agencies will have to decide how to measure compliance and what remedial steps count as cures.

Enforcement is limited to public prosecutors and does not create a private right of action, which concentrates discretion and may lower the volume of litigation but increases the political character of enforcement. The per-violation civil penalty ($1,000) is a blunt instrument: an operator that sends millions of messages faces only modest exposure if a course of conduct is treated as a single violation, but enormous exposure if each message counts separately. It is, however, easy to calculate and litigate.

The exemption for more prescriptive schemes avoids duplication but creates ambiguity about which laws qualify as "more prescriptive," requiring agencies to issue interstitial guidance or risk inconsistent application. Finally, the 10 million-user platform threshold carves out many smaller services, producing compliance asymmetries between large and midsize platforms and shifting the compliance burden toward smaller bot operators rather than platform hosts.
