AB 1651 requires the State Bar of California to identify when artificial intelligence is used in the creation or administration of its licensing exams or in Bar‑published study materials. The measure supplies statutory definitions for “artificial intelligence,” “generative artificial intelligence,” and “artificial intelligence‑generated content.”
The bill is aimed at making the licensure process more transparent for applicants and the public. It creates a small set of procedural rules about where and when disclosures must appear, and it closes a potential loophole by applying the rule even when a human reviews or edits AI output.
At a Glance
What It Does
AB 1651 obligates the State Bar to disclose the use of AI‑generated content in exam development or administration and in State Bar–published study materials, and it defines key AI terms for that purpose. It specifies where disclosures must appear and makes the requirement apply regardless of subsequent human revision.
Who It Affects
State Bar staff and contractors who write, score, or otherwise prepare bar examination content; publishers and units within the State Bar that produce practice materials; applicants for the general bar, first‑year law students’ exam, and attorneys’ exam; and commercial bar‑prep providers that may want to align their materials with Bar disclosures.
Why It Matters
The measure inserts transparency into high‑stakes professional licensing at a time when generative AI is entering educational content and assessment production. For compliance officers and exam vendors, it creates new labeling rules and operational deadlines; for applicants and regulators, it creates a traceable record of AI use in credentialing.
What This Bill Actually Does
The statute supplies short, functional definitions so the State Bar and its vendors know what counts as AI activity: a broad “artificial intelligence” definition that covers systems that infer how to generate outputs; a narrower label for “generative artificial intelligence” that emphasizes content synthesis; and a definition for “artificial intelligence‑generated content” limited to visual or textual output produced in whole or part by generative models. It also clarifies which examinations fall under the rule by referencing the general bar exam, the first‑year law students’ exam, and the attorneys’ exam.
On scope, the law separates two operational contexts. One applies to materials used in developing or administering the exams themselves — that includes questions, performance tests, answer keys, and scoring rubrics.
The other applies to State Bar–published learning and preparatory resources, such as sample questions, model answers, outlines, and explanations. The statute requires different disclosure placements depending on context: web posting for exam‑related uses and a cover‑page label for study materials.

A key substantive choice in the text is its insistence that human review does not nullify the duty to disclose.
If a generative model produced any part of a question, an answer key, or a sample explanation, the State Bar must disclose that fact even if a person later edited or approved the material. Finally, the law sets a clear start date: the provisions become operative on January 1, 2028, giving the Bar and its vendors a transition window to identify systems, update procurement and review practices, and change publication workflows.
The Five Things You Need to Know
The bill defines three terms: “artificial intelligence,” “artificial intelligence‑generated content,” and “generative artificial intelligence,” tying the disclosure duty to generative models that synthesize text, images, audio, or video.
For AI used in developing or administering an examination (questions, performance tests, answer keys, scoring rubrics), the State Bar must post a disclosure on its website at least 60 days before the affected exam.
For State Bar study material prepared, published, endorsed, or distributed to applicants, the statute requires a disclosure on the cover page of the material.
The disclosure obligation applies even when AI output is revised or reviewed by a natural person; human editing does not eliminate the duty to disclose original AI contribution.
The statute becomes operative January 1, 2028, and it explicitly applies to the general bar examination, the first‑year law students’ examination, and the attorneys’ examination.
Section-by-Section Breakdown
Definitions: AI, generative AI, and AI‑generated content
This subsection supplies the working vocabulary the rest of the section uses. It casts “artificial intelligence” broadly as machine systems that infer how to generate outputs to meet objectives, then narrows to “generative artificial intelligence” for systems that synthesize derived content (text, images, audio, video). It also labels outputs produced in whole or part by those systems as “artificial intelligence‑generated content.” The practical effect is to tie disclosure duties specifically to generative models rather than to any algorithmic tool.
Scope: which State Bar examinations are covered
This clause defines the set of examinations within the rule: the general bar exam, the first‑year law students’ exam, and the attorneys’ exam. By enumerating those exams, the bill avoids ambiguity about whether other State Bar assessments fall under the requirement, while leaving open administrative decisions about additional assessments.
Disclosure requirement for exam development and administration
This provision directs the State Bar to disclose when AI‑generated content played a role in creating or administering exam components — questions, performance tests, answer keys, or scoring rubrics. Operationally, this will require the Bar to track AI usage across contractors and internal teams that develop exam items and scoring procedures and to decide the level of detail to include in the public disclosure.
Disclosure requirement for Bar‑published study materials
This clause extends disclosure duties to instructional and preparatory materials the State Bar prepares, publishes, endorses, or distributes to applicants: sample questions, model answers, outlines, explanations, and similar content. That creates a labeling obligation for the Bar’s editorial and publication processes and may influence how the Bar sources or vets content for instructional use.
Human review does not remove the disclosure duty
This short but consequential subsection states that disclosure duties apply regardless of whether a natural person revised or reviewed the AI‑generated content. The provision prevents circumvention by post‑production human editing and places the onus on the Bar to trace back authorship to determine when AI had any role.
Where and when disclosures must appear
The statute mandates two placement rules: exam‑use disclosures must be posted on the State Bar’s website at least 60 days before the exam where AI content is used, while disclosures for study materials must appear on the cover page. These rules govern placement and timing, not content: the bill does not prescribe the wording of the disclosure, enforcement mechanisms, or sanctions for noncompliance.
Operative date
The law does not take immediate effect; it becomes operative on January 1, 2028. That gives the State Bar and its vendors a lead time to identify generative AI systems in use, update procurement and content‑creation workflows, and design public notices and publication processes that comply with the statute.
Who Benefits and Who Bears the Cost
Who Benefits
- Exam applicants and licensees — they gain transparency about whether AI influenced questions, model answers, or grading instruments, which helps them assess fairness and prepare accordingly.
- Regulators and oversight bodies — public disclosures create an auditable record if concerns arise about exam validity, score reliability, or procurement of AI tools.
- Researchers and public‑interest technologists — the mandate produces data points for study about how generative AI is being used in high‑stakes assessment and educational materials.
Who Bears the Cost
- The State Bar — it must establish tracking, vendor reporting, and publication processes and may need staff time or systems to identify AI contributions across contracting and content workflows.
- Exam development vendors and contractors — item writers and vendors will need to document tool usage, modify contracts, and potentially alter content‑creation practices if they prefer to avoid triggering disclosure obligations.
- Publishers and in‑house editorial teams — labeling requirements and proofing workflows for study materials will require added editorial controls and quality checks, raising production costs.
Key Issues
The Core Tension
The central dilemma is between transparency for examinees and the public versus preserving exam integrity and workable administrative processes: disclosing AI use supports accountability and informed preparation but may complicate item security, contract management, and operational capacity for the State Bar and its vendors.
The statute intentionally leaves several consequential details unresolved. It does not define the content or specificity of required disclosures (for example, whether the notice must identify the AI system, describe how it was used, or quantify the share of content generated by AI).
The absence of enforcement language or penalties means the Bar will need to adopt internal compliance measures or regulations to operationalize the duty. That creates an administrative question about how the Bar will verify vendor self‑reports and whether it will inspect model logs or rely on attestations.
Another unresolved issue is the tension between transparency and test security. Publicly labeling items or exams as AI‑generated could aid critics seeking to game the exam or could reveal item‑development practices.
Conversely, minimal disclosures may frustrate stakeholders seeking substantive information about model role and provenance. The statutory definitions are serviceable but leave edge cases: derivative or assisted outputs (e.g., AI suggestions that substantially reshape a human draft) may be difficult to classify, and the statute’s focus on generative AI leaves out other machine aids (analytic scoring algorithms) that could influence exam outcomes.