AI Workforce PREPARE Act creates federal AI labor forecasting, research, and reporting

Directs Labor and partner agencies to collect AI adoption data, run benchmark and forecasting programs, add WARN disclosures, and temporarily hire AI experts to inform training and adjustment policy.

The Brief

The AI Workforce PREPARE Act tasks the Departments of Labor and Commerce, NIST, NSF, and partner agencies with building a persistent data and analytic capability to measure how artificial intelligence changes occupations and worker flows. The bill funds an Artificial Intelligence Workforce Research Hub, a job-to-job flows pilot, benchmark and forecasting prize competitions, voluntary private-sector reporting of anonymized AI adoption data, and updates to federal surveys to capture AI use and impacts.

Why it matters: the bill is an operational playbook for turning AI signals into actionable workforce policy. It creates new data sources and modeling requirements (including prediction-interval forecasts for selected occupations), temporary hiring authority for technical talent at Labor, and a mandated AI-related disclosure line in WARN notices — all designed to steer training investments, adjustment assistance planning, and grant selection toward labor-market realities driven by AI adoption.

At a Glance

What It Does

Requires public comment and workshops to scope data collection, establishes an AI Workforce Research Hub inside DOL, authorizes prize competitions at NIST and NSF to build benchmarks and forecasting capacity, pilots job-to-job flow statistics with Census, and creates a voluntary program for AI developers/deployers to share anonymized usage data for research.

Who It Affects

Affects the Department of Labor and partner statistical agencies, state and local workforce boards, training- and apprenticeship-program administrators, AI developers and deployers who may participate in voluntary data sharing, and employers subject to WARN notices who must add AI-related statements if AI substantially contributed to a mass layoff.

Why It Matters

Builds federal technical capacity and recurring forecasting products (2-, 4-, and 8-year prediction intervals) to inform grantmaking and rapid adjustment assistance design; it also creates secure researcher-access pathways and formal mechanisms to compare forecast methods, so policymakers can evaluate which forecasting approaches actually improve outcomes.

What This Bill Actually Does

The Act begins by directing the Secretary of Labor to solicit public comment and convene workshops to prioritize what data and tools would most improve forecasts and policy choices. Those early-stage activities are time-limited and designed to produce a short list of high-value datasets, analytic products, and implementation recommendations, plus a publicly released workshop report that ranks the highest-value follow-on efforts.

The Department of Labor must stand up an Artificial Intelligence Workforce Research Hub that coordinates with Census, BEA, and BLS to produce recurring analyses, scenario planning, and actionable insights on occupations and worker transitions. To increase in-house capacity, the bill lets Labor appoint up to 20 highly qualified AI and data experts into excepted-service roles for limited terms (renewable once), with pay and incentive authority capped at an aggregate limit tied to the Vice President's annual pay.

The Hub and these experts are intended to improve measurement, forecasting, and policymaking on workforce impacts from AI.

On data, the bill funds several complementary streams: (1) a Census-led job-to-job flows pilot that targets occupations selected by Labor to provide detailed occupational transition statistics; (2) a voluntary data-sharing program, managed by BLS in coordination with Commerce and OSTP, through which AI developers/deployers can license or transfer anonymized adoption-and-use metrics for statistical use only; and (3) revisions to core federal surveys (Annual Business Survey, CPS, BLS occupational and time-use surveys) to capture AI types, intensity of use, tasks impacted, skill changes, and outcomes attributable to AI.

To create objective measures of AI capability relevant to labor impacts, NIST must run prize competitions (and may fund companion grants) to develop reproducible benchmarks that quantify automation or augmentation potential. Separately, NSF will operate a recurring forecasting prize to crowd in scored forecasts and rationales for short-horizon labor-market questions.

Forecasting at Labor includes a required product: prediction-interval forecasts for at least 15 occupations (6-digit SOC) with 2-, 4-, and 8-year horizons, public transparency on methods and benchmarks, and routine evaluation using proper scoring rules. Several provisions include sunset or phaseout timelines (many after 4–5 years) and explicit authorizations of appropriations for the covered activities.
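
The statute leaves the choice of scoring rule to Labor. As a concrete illustration, the sketch below scores a single prediction interval with the interval score, one standard proper scoring rule for interval forecasts; the employment figures are invented, and nothing suggests Labor will use this exact rule.

```python
# A minimal sketch, not the bill's prescribed method: the interval score
# is one standard proper scoring rule for interval forecasts. For a
# central (1 - alpha) prediction interval [lower, upper], it adds the
# interval's width to a penalty proportional to how far the realized
# value lands outside the interval, so narrow, well-calibrated intervals
# score best (lower is better).

def interval_score(lower: float, upper: float, actual: float, alpha: float) -> float:
    """Winkler interval score for a central (1 - alpha) interval."""
    score = upper - lower  # wider intervals always cost more
    if actual < lower:
        score += (2 / alpha) * (lower - actual)  # undershoot penalty
    elif actual > upper:
        score += (2 / alpha) * (actual - upper)  # overshoot penalty
    return score

# Hypothetical 2-year forecast for one 6-digit SOC occupation: a
# 20th-80th percentile interval (alpha = 0.4) of 410,000 to 470,000 jobs.
# If realized employment is 455,000, only the width is scored:
print(interval_score(410_000, 470_000, 455_000, alpha=0.4))  # 60000

# A miss at 400,000 falls 10,000 below the interval, adding
# (2 / 0.4) * 10,000 = 50,000 on top of the width:
print(interval_score(410_000, 470_000, 400_000, alpha=0.4))  # 110000.0
```

Averaging such scores across the designated occupations and the 2-, 4-, and 8-year horizons is what makes the Act's required comparisons against benchmark forecasts meaningful: the method with the lower average score performed better on both calibration and sharpness.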

The Five Things You Need to Know

1

Within 45 days of enactment, Labor must post a request for comment and then hold a 60-day written comment period to scope data, tools, and partnerships.

2

The Secretary may appoint up to 20 AI/data experts into the excepted service for up to 24 months (renewable once), with total annual pay and awards capped at the Vice President’s pay; $6 million is authorized to support this authority through FY2030.

3

NIST must run at least one prize competition to produce reproducible AI benchmarks that measure automation or augmentation capacity for labor forecasting, with up to $7 million authorized for FY2026–2030.

4

The bill amends WARN to require employers issuing notices to state whether AI was a substantial factor in a mass layoff and to describe the type of AI, its usage, the estimated percentage attributable to AI, and prior upskilling efforts, with Labor guidance due within 300 days.

5

Labor must publish annual prediction-interval employment forecasts for at least 15 designated occupations at the 6-digit SOC level, covering 2-, 4-, and 8-year horizons, and publicly evaluate forecast performance against benchmarks using proper scoring rules; $18 million is authorized for forecasting activities through FY2030.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 101

Public scoping, workshops, and a ranked priority report

Requires Labor to solicit public comments within 45 days and hold workshops (initial workshop within 180 days) bringing together economists, technical AI experts, statistical agencies, labor organizations, and state grant recipients. The provision mandates that the initial workshop quantify or rank the expected value of data or tools and deliver a report within 45 days that lists at least five high-value datasets, metrics, or analyses Labor could produce within two years. Practically, this is an explicit prioritization step designed to constrain scope and identify cost-effective actions before significant agency commitments.

Section 102

Temporary hiring authority for up to 20 AI experts

Grants Labor authority to appoint up to 20 covered individuals into the excepted service, set pay up to GS-15 step 10 equivalence, and pay recruitment/retention incentives — with aggregate yearly compensation capped at the Vice President’s salary. Appointments are capped at 24 months with a potential 24-month extension if certified. The provision waives many title 5 personnel constraints to speed hiring, but preserves security suitability checks and requires annual reporting to relevant Congressional committees on numbers, qualifications, duties, and impact.

Section 103

Artificial Intelligence Workforce Research Hub

Creates a Hub within DOL to coordinate recurring research, scenario planning, and policy-relevant analysis in collaboration with Census, BEA, and BLS. The Hub can receive details of staff from federal, state, local, or private sectors and must operate from existing funds (no new appropriations authorized for the Hub itself). The Hub’s outputs are intended to translate technical findings into policy guidance for training programs and adjustment assistance, and the Hub sunsets after four years unless extended by statute.

Section 104

Job-to-job flows pilot and researcher access assessment

Directs Census, in consultation with Labor, to run a pilot producing detailed job-to-job transition series for occupations designated by Labor (not fewer than 15 occupations at a detailed SOC level), using federal surveys, admin records, and voluntary private partnerships. The Director must publish the first series within 18 months, or if infeasible, provide a public report on barriers, changes needed, and a cost-benefit assessment. Separately, BLS must assess secure remote access proposals (e.g., NSDS) to facilitate researcher access to unit-level data and publish that report within a year.

Section 201

NIST benchmarking prize competition

Requires the NIST Director to run prize competitions (and may fund companion grants) that produce reproducible methods or benchmarks to quantify AI’s automation/augmentation ability for tasks or occupations. The section instructs NIST to design competition categories, mitigate common benchmarking problems (data contamination, rapid obsolescence), and to consult with Commerce, Labor, BLS, and NSF. The deliverable is public benchmarks tailored to improve labor-impact forecasting and retraining needs.

Section 202

Voluntary AI adoption reporting and data licensing

Establishes a voluntary BLS-managed program to accept anonymized AI adoption/use data from developers and deployers under MOUs or licensing agreements. Data must be used exclusively for statistical purposes, treated as confidential, and not be used for regulation or antitrust action. The Secretary must publish machine-readable aggregate statistics at least semiannually, maintain a public roster of participants, and report to Congress after two years on participation, data quality, and barriers.

Sections 203–204

Survey questions on AI and WARN disclosure amendment

Directs Commerce and BLS to revise several core surveys (Annual Business Survey, CPS, Business Trends and Outlook, Occupational Requirements Survey, American Time Use Survey) within a year to capture AI type, intensity, task impacts, and skill changes; agencies may narrow the AI scope pragmatically. Separately, amends WARN to require employers issuing notices to state whether AI was a substantial factor in a mass layoff and to include the type of AI, an estimated percentage attributable to it, and prior upskilling steps, with Labor guidance due within 300 days.

Sections 301–302

Occupational prediction-interval forecasts and forecasting prizes

Requires Labor to publish annual prediction-interval forecasts (20th–80th percentile or other approved ranges) for at least 15 designated occupations at the 6-digit SOC level, covering 2-, 4-, and 8-year horizons; reports must disclose methods, benchmark forecasts, and identify data gaps. Labor must evaluate forecasts against benchmarks using proper scoring rules and maintain a public archive. NSF must run a recurring forecasting prize to solicit scored short-horizon forecasts and rationales. Both programs include phased sunsets for program elements to allow evaluation.

Title IV (Sections 401–404)

Linking forecasts to grants, studying rapid adjustment models, and standardizing data

Mandates a Labor report within two years detailing how new data and forecasts will be incorporated into selection and performance measurement for Workforce Innovation and Opportunity Act grants, apprenticeship programs, and other training grants; requires a study on design options for a Rapid AI Adjustment Assistance Program; and instructs Labor, NSF, and partners to lead voluntary efforts to develop standards for AI-related workforce data elements and production to enable consistent reporting and researcher access.

Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Workers in AI-impacted occupations — by producing targeted forecasts, job-to-job flow data, and scenario planning that can inform training programs and adjustment assistance to smooth transitions.
  • State and local workforce boards and training providers — because the bill supplies higher-resolution occupational forecasts and recommended data elements that can be used to update in-demand occupation lists and design curriculum aligned with actual labor-market shifts.
  • Academic researchers and independent analysts — who gain pathways (pilot, secure remote access assessments, and voluntary datasets) to richer anonymized unit-level data and new benchmarks/prize outputs to evaluate AI labor impacts.
  • Policy makers and federal grantmakers — who receive structured prediction-interval forecasts, benchmark comparisons, and prioritized datasets to make more evidence-based decisions about grant targeting, upskilling investments, and adjustment programs.
  • Participating AI developers and deployers — those who volunteer data benefit from public recognition, a confidential-statistics framework, and benchmarks that may reduce market uncertainty about capability claims.

Who Bears the Cost

  • Department of Labor and partner agencies — responsible for implementing workshops, standing up a Hub, hiring technical staff, running and evaluating forecasts, and integrating outputs into grant processes, which will demand staff time and appropriated resources.
  • Private employers subject to WARN notices — must add AI-specific statements when AI substantially contributes to mass layoffs, creating new reporting and potentially legal or reputational exposure if the determination is contested.
  • AI developers and deployers who participate voluntarily — will need to prepare anonymized data extracts, negotiate agreements, and accept statistical-use terms (though participation is voluntary, the data work and controls have costs).
  • State and local workforce agencies — will need to incorporate new forecasting products and data-element standards into planning and may incur costs updating systems, collection practices, and grant applications.
  • Taxpayers/general appropriations — the bill authorizes multiple specific funding lines (e.g., $18M for forecasting, $7M for NIST benchmarking, $7M for voluntary reporting) and these sums, if appropriated, represent new federal spending priorities.

Key Issues

The Core Tension

The central dilemma is between urgency and rigor. Policymakers and workers need timely, fine-grained information to manage AI-driven transitions, but producing valid, representative, and privacy-preserving data and forecasts takes time, resources, and methodological care. Accelerating collection and decision-making risks biased or misleading conclusions, while prioritizing methodological rigor delays the actionable outputs that workers and training programs need now.

The bill attempts to thread the needle between faster, more granular data and rigorous protections for confidentiality and statistical integrity, but several implementation tensions are unresolved. Voluntary data-sharing will likely produce selection bias: firms willing to share may differ systematically from those that do not, limiting representativeness and complicating inference unless robust statistical adjustments are developed.
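
To make the selection problem concrete, here is a minimal sketch of one standard correction, inverse-probability weighting, assuming the agency can observe basic traits (size, sector) for both volunteers and the wider business register. The bill does not mandate any particular adjustment, and every number below is fabricated.

```python
# A minimal sketch of inverse-probability weighting on fabricated data;
# the bill does not prescribe this or any other adjustment method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical firm frame: log employment and a tech-sector flag.
log_size = rng.normal(4.0, 1.5, n)
tech = rng.binomial(1, 0.2, n)

# The selection problem: larger, tech-heavy firms volunteer more often.
p_share = 1 / (1 + np.exp(-(-3.0 + 0.5 * log_size + 1.0 * tech)))
shares = rng.binomial(1, p_share).astype(bool)

# AI adoption also rises with size and tech intensity, so volunteers
# are unrepresentative in exactly the dimension being measured.
p_adopt = 1 / (1 + np.exp(-(-4.0 + 0.6 * log_size + 1.5 * tech)))
adopted = rng.binomial(1, p_adopt)

X = np.column_stack([log_size, tech])

# Model each firm's probability of volunteering from observables,
# then weight volunteers by the inverse of that probability.
propensity = LogisticRegression().fit(X, shares).predict_proba(X)[:, 1]
weights = 1 / propensity[shares]

print(f"True adoption rate:      {adopted.mean():.3f}")
print(f"Naive (volunteers only): {adopted[shares].mean():.3f}")
print(f"IPW-adjusted estimate:   {np.average(adopted[shares], weights=weights):.3f}")
```

Weighting of this kind corrects only for selection on observed traits; firms that differ in unobserved ways remain a gap, which is one reason the bill's two-year report on participation and data quality matters.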

The Act addresses confidentiality by restricting use to statistical purposes and exempting datasets from FOIA, but it does not fully resolve how third-party private data will be audited for quality or how to reconcile proprietary concerns with the need for reproducibility.

Forecasting requirements introduce another trade-off: the Act mandates prediction-interval forecasts and formal scoring, but AI-driven labor impacts are inherently uncertain and rapidly evolving. There is a risk that policymakers will overweight a forecast’s appearance of precision or treat calibrated intervals as guarantees; conversely, overly conservative intervals become less useful for planning.

The legislation mitigates this with evaluation and benchmark comparisons, but practical success depends on data quality, model transparency, and sustained resourcing. Finally, many provisions carry statutory sunsets or rely on appropriations; the programs’ long-term utility depends on sustained funding and whether agencies can operationalize the technical work within the constrained timeframes specified.
