Codify — Article

Future of Artificial Intelligence Innovation Act creates NIST AI Center, testbeds, datasets, and prize program

A federal package that centralizes voluntary AI standards and testing at NIST, funds joint testbeds with DOE and NSF, prioritizes public datasets and prize programs, and ties research-security limits to participation.

The Brief

The Future of Artificial Intelligence Innovation Act of 2026 directs the National Institute of Standards and Technology (NIST) to establish a Center for Artificial Intelligence Standards and Innovation to develop voluntary testing methodologies, metrics, blue/red teaming practices, and tools for authenticating synthetic content. It requires the creation of a multi-stakeholder consortium, authorizes an interagency testbed program led by NIST, DOE, and NSF, and adds a voluntary foundation-model test program and materials-science testbed capability.

Beyond standards and testbeds, the bill tasks OSTP and the Interagency Committee to prioritize publicly curated datasets and to run grand-challenge prize programs to accelerate R&D. The law builds in FOIA and confidentiality protections for privately contributed data, sets exclusions for entities under the control of certain foreign governments, and ties many activities to reimbursable National Laboratory resources and explicit sunset dates for several programs.

At a Glance

What It Does

The bill requires NIST to set up a Center (within 90 days) that writes voluntary AI testing standards, runs red/blue-team evaluations, curates watermarking and synthetic-content guidance, and convenes a quarterly public-private consortium. Separately, DOE, NIST, and NSF must stand up a joint testbed program (within 1 year), including a voluntary foundation-model test track and automated, reproducible evaluation tools.

Who It Affects

Directly affected are NIST, DOE, NSF, National Laboratories, academic researchers, and private AI developers and deployers (including vendors of foundation models and cloud providers). The bill also affects companies that supply data or proprietary model components to federal testbeds and civil-society groups involved in standards work.

Why It Matters

The legislation formalizes a federal effort to produce interoperable, voluntary AI evaluation infrastructure rather than prescriptive regulation, while creating government-backed datasets and prize incentives to lower barriers for smaller firms and spur domestic research and manufacturing applications.


What This Bill Actually Does

Section-by-section, the bill builds federal capacity for voluntary AI measurement and testing rather than handing new enforcement powers to agencies. It inserts a new statutory Center for Artificial Intelligence Standards and Innovation at NIST.

The Center’s playbook covers cataloging metrics, developing reproducible evaluation methods, running structured red- and blue-teaming activities, producing cybersecurity toolkits, and publishing best practices for watermarking and provenance of synthetic content. NIST must also assemble a multi-stakeholder consortium to advise the Center and provide annual reports to the key congressional committees.

The Act pairs standards work with physical and virtual testbeds. Within one year the Under Secretary for Standards and Technology and the Secretary of Energy—coordinating with NSF—must create a program that links National Laboratories, federal labs, NIST, NSF pilots, and public/private partners to run tests, security assessments, and automated evaluations.

The statute requires development of high-, medium-, and low-compute tests where practicable, directs hackathons to surface vulnerabilities, and instructs agencies to develop metrics for the program’s success. Use of National Laboratory compute and facilities is structured under reimbursable agreements, although the Secretary of Commerce may waive reimbursement in limited cases.

For research and economic acceleration, OSTP and the Interagency Committee must publish a prioritized list of public datasets to fund or curate (the bill explicitly asks for 20 prioritized datasets) and identify a hosting location.

The bill also authorizes a grant- or prize-based grand-challenge program to drive R&D in specified focus areas—microelectronics, materials, energy efficiency, explainability, security against misuse, advanced manufacturing, maritime and border security, and lab automation—using Challenge.gov notices and prize authorities. Many activities are time-limited: the testbed programs have a seven-year sunset and the grand-challenge authority sunsets after five years.

Two important legal guardrails are built in.

Confidential content provided voluntarily by private-sector contributors is protected from public FOIA disclosure and is accessible only to the contributor and the Center; aggregated, deidentified summaries may be shared. The Center must also prevent entities under the ownership, control, or influence of governments of certain ‘covered nations’ from accessing Center resources, and NIST retains no new enforcement authority beyond what it already had.

The Five Things You Need to Know

1

NIST must establish the Center for Artificial Intelligence Standards and Innovation within 90 days of enactment and stand up a supporting consortium within 180 days (Sections 101(d) and (e)).

2

The joint testbed program led by NIST and DOE (in coordination with NSF) must be created within 1 year to run automated, reproducible AI tests, security assessments, and hackathons; that program terminates after 7 years (Sections 102(b) and 102(l)).

3

Private-sector material voluntarily contributed to NIST or testbeds is exempt from FOIA under 5 U.S.C. 552(b)(3) as confidential, access is limited to the contributor and the Center, and NIST may publish aggregated, deidentified information (Sections 101(f) and 102(j)).

4

The OSTP-led prioritization of public datasets must identify 20 datasets for Federal investment and report best practices and recommendations to the Senate Commerce and House Science committees within 1 year (new Section 5103A of the National AI Initiative Act).

5

Prize-based Federal grand challenges—authorized across agencies using existing prize authorities—focus on applied R&D (examples include microelectronics, energy-efficient models, explainability, and lab automation) and the grand-challenge authority sunsets after 5 years (new section 5107).

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Sec. 100

Definitions

This section collects key technical definitions that the rest of the Act relies on—artificial intelligence, AI model, AI system, foundation model, testbed, watermarking, and critical infrastructure—by cross-referencing prior statutes (notably the National AI Initiative Act of 2020) and adding terms needed for test and standard work. Because the bill uses precise definitions for ‘foundation model’ and ‘testbed’, those terms determine the scope of the voluntary testing and datasets described later.

Sec. 101

Center for Artificial Intelligence Standards and Innovation (NIST)

Amends the NIST Act to create a statutory Center. The Center’s functions are expansive and operational: catalog existing metrics, publish voluntary testing methodologies, conduct and support red/blue-teaming, curate watermarking and provenance practices, build cybersecurity toolkits, and recommend workforce training priorities. The law requires quarterly consultations with a federally convened consortium of private industry, academia, civil society, and federal labs, and annual reports to congressional committees on consortium contributions. Importantly, the Center is expressly limited to voluntary activities and is not given new enforcement authority.

Sec. 102

Interagency testbed program (NIST/DOE/NSF)

Directs the Under Secretary for Standards and Technology and the Secretary of Energy, in coordination with the NSF Director, to establish a testbed program within one year. The program’s mechanics: use National Laboratory and federal lab compute and expertise, prioritize security vulnerability assessments (including classified testbed work where necessary), develop automated and reproducible tests across compute intensities, research ways to reduce testing compute costs, run hackathons, and develop metrics to evaluate program effectiveness. The statute permits using existing programs and requires an evaluation and report back to Congress within three years. Access to contributed confidential content is limited and FOIA-protected; the program terminates after seven years.

Sec. 103

Materials and energy testbed (NIST/DOE collaboration)

Authorizes using the testbed program to accelerate materials science and energy storage work via AI, autonomous labs, and hybrid computing (quantum/robotics). Agencies are directed to support advanced algorithms, uncertainty quantification, and workforce-development tools and to enter public-private partnerships as appropriate. This provision links industrial application—advanced manufacturing, benchmark data, and model comparison—to the testbed infrastructure.

Sec. 104

Coordination, reimbursement, and savings provisions

Requires Commerce to avoid duplicative activities with DOE research entities, and mandates that DOE and National Laboratory resources made available to NIST/NSF be provided under reimbursable agreements unless the Secretary waives that requirement. The section also clarifies money-use limits—Commerce cannot use DOE funds—and preserves existing National AI Initiative Act authorities.

Sec. 111

International coalitions and technology-trust criteria

Directs NIST and DOE to lead formation of alliances with like-minded governments to promote adoption of U.S.-led voluntary standards, cybersecurity best practices, and technology-protection measures. The statute instructs development of technology-trust criteria for partners, requires coordination with State and NSC, and expressly bars the People’s Republic of China from participation until WTO-commitment compliance conditions are met and a detailed interagency justification to Congress is provided.

New 5103A (Sec. 201)

Public datasets: OSTP prioritization and hosting

Adds a statutory requirement that OSTP, acting through the NSTC and Interagency Committee, publish a prioritized list of Federal datasets for AI training and evaluation—explicitly asking for 20 prioritized datasets—and identify a hosting location. The provision lists factors to consider (science, sectoral utility, representativeness, privacy, national security), requires public comment, and mandates a one-year report with best practices and recommendations for secure compute environments and incentives for dataset release.

New 5107 (Sec. 202)

Federal grand challenges and prize competitions

Authorizes OSTP/NSTC and participating agencies to run prize competitions and other challenge-based investments to tackle prioritized technical problems (microelectronics, energy efficiency, interpretability, manufacturing, maritime/border security, lab automation, etc.). The section sets participant eligibility (U.S. incorporation or citizenship/permanent residency), requires Challenge.gov posting, calls for success metrics and validation protocols, mandates agency reporting on winners and impacts, and imposes a five-year sunset on the authority.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Small and medium-sized AI vendors: they gain access to government-curated test methodologies, shared testbeds, and public datasets—lowering the technical and capital costs of validating models and competing with larger firms.
  • National Laboratories and universities: the bill unlocks reimbursable partnerships and program funding to run large-scale experiments, materials and energy testbeds, and hybrid-computing projects that leverage lab infrastructure.
  • Federal agencies and procuring entities: agencies receive reproducible evaluation tools, cybersecurity toolkits, and resources to evaluate AI systems internally, which can improve procurement decisions and risk management.
  • Civil-society organizations and detection researchers: publicly produced guidance on watermarking, synthetic-content detection tools, and deidentified aggregated outputs provide new technical inputs for watchdogs and researchers.
  • U.S. manufacturing and materials sectors: the materials/energy testbed and grand-challenge prize focus explicitly target advanced-manufacturing innovations, offering pathways for industry uptake and workforce training.

Who Bears the Cost

  • National Laboratories and DOE: providing advanced compute and testbeds is structured as reimbursable work, increasing administrative overhead and potential budgetary strain unless waivers are used.
  • Private contributors of proprietary datasets or models: they must decide whether to share confidential materials under limited-access arrangements, potentially incurring IP-management and legal costs.
  • Federal agencies with oversight roles: NIST, OSTP, NSF, DOE, and inspectors general face new reporting, interagency coordination, and audit obligations that require staff time and program management resources.
  • Companies under foreign control excluded from resources: multinational firms with entities subject to foreign-government influence may lose access to Center resources, complicating global development strategies.
  • Implementing contractors and temporary fellows: agencies will incur auditing and certification compliance costs for non-Federal personnel working on critical and emerging technology projects.

Key Issues

The Core Tension

The central dilemma pits accelerating broad-based innovation through voluntary, industry-engaged standards and shared testing infrastructure against the need to contain national-security, IP, and misuse risks that may demand stricter controls. The bill favors a capacity-building, voluntary path that eases commercial adoption but may leave gaps in transparency and enforceable safety.

The Act trades enforcement for capacity: it builds a federally backed, voluntary ecosystem of standards, metrics, and testbeds rather than imposing binding regulatory rules. That design reduces immediate regulatory friction for industry, but it leaves open whether voluntary norms will be broad or rigorous enough to mitigate high-risk failure modes—particularly when adoption is optional and market incentives diverge.

Confidentiality rules and FOIA exemptions protect private contributors and may encourage sharing of proprietary datasets, yet they create transparency trade-offs. Limiting access to contributors and the Center while only publishing aggregated, deidentified results could hinder independent reproducibility and public scrutiny of evaluation methods.

Likewise, the prohibition on entities under the ownership, control, or influence of certain foreign governments reduces security risk but raises practical questions about multinational firms, joint ventures, and downstream subcontracting arrangements.

Funding and continuity are also real risks. Testbed activities are tethered to reimbursable National Laboratory resources and contain sunset dates (7 years for testbeds, 5 years for grand challenges).

The reimbursable model protects DOE budgets but may slow uptake if industry cannot or will not pay. Waivers create ad hoc authority to move faster, but reliance on waivers can produce uneven implementation.

Finally, the interagency and international cooperation provisions aim to align standards globally, but the tension between open scientific collaboration and research-security protections (and export-control coordination) will require detailed policy work that the Act does not itself resolve.
