Codify — Article

AI Grand Challenges Act of 2026 creates NSF prize program for targeted AI R&D

Authorizes the NSF Director to run an "AI Grand Challenges Program" using prize authority to drive AI solutions across sectors and to coordinate federal datasets for those challenges.

The Brief

The bill authorizes the Director of the National Science Foundation to establish an "AI Grand Challenges Program" that uses prize competitions to stimulate development, validation, and commercialization of AI solutions targeting specific, measurable problems across a long list of sectors. The program must be designed under existing federal prize authorities and include public-facing problem statements, success metrics, and posting of competitions on Challenge.gov.

The Act also directs interagency coordination: the Director must consult with OSTP and other federal research agencies when selecting challenges, and OSTP must coordinate the identification and publication of federal data sets that can be used to address grand challenges. The statute opens the program to non-federal financing while preserving firewalls to prevent donors from influencing award decisions, and it requires reporting to Congress and public disclosure of outcomes.

At a Glance

What It Does

Directs NSF to set up a prize-based AI Grand Challenges Program under the Stevenson‑Wydler prize authority, establish problem statements and validation protocols for each challenge, and post active competitions to Challenge.gov. It also tasks OSTP with coordinating publication of federal data sets suitable for AI-powered solutions to the selected grand challenges.

Who It Affects

NSF program managers, OSTP and other federal research agencies (NIH, DARPA, NIST), research universities, biotech and AI startups, incumbent tech firms, and non‑federal funders that may contribute prize support.

Why It Matters

The bill shifts a significant tool—large, targeted prize competitions—into NSF’s toolkit, channels public and private funding toward defined AI deliverables, and creates a government-wide push to publish data sets that lower the barriers to large-scale AI innovation.


What This Bill Actually Does

The Act creates a new program at the National Science Foundation called the AI Grand Challenges Program and requires the Director to stand it up within a statutory window. The Director must work with OSTP and may work with other agencies and advisory bodies to identify a portfolio of "grand challenges": specific societal or technical problems that can be advanced through AI.

For each challenge the Director must publish a clear problem statement together with measurable success metrics and verification protocols so competitors know exactly how submissions will be judged.

To run competitions the bill relies on existing federal prize law (Stevenson‑Wydler). The program must post competitions on Challenge.gov to ensure public access and may adopt multi-stage competition formats.

The Director must write eligibility, testing, judging, and verification rules; judges may come from the private sector; and winners must meet statutory nationality and incorporation requirements. The statute permits both cash and non-cash awards and allows prize sizes to vary, including very large awards where justified by the governing prize statute. Funding can come from other federal agencies and from non‑federal sources (states, tribes, localities, nonprofits, for‑profit entities).

The Director cannot consider who provided support when picking winners, preserving a firewall between funders and selection. The bill also prescribes reporting and transparency: the Director must notify relevant congressional committees shortly after awards and provide a publicly posted biennial program report summarizing activities, competitions, and results.

Separately, OSTP must coordinate federal publication of data sets that address fundamental scientific problems and are suitable for AI development, giving competitors access to foundational training and evaluation resources.

The Five Things You Need to Know

1

The Director must establish the AI Grand Challenges Program within 12 months after enactment.

2

The bill mandates at least one grand challenge focused on achieving AI-enabled breakthroughs against lethal cancers; the Director must award no less than $10,000,000 in cash prize awards to each winner of that cancer challenge.

3

Except where otherwise provided, the statute sets a floor of $1,000,000 in cash prize awards to each winner of competitions under the program and permits non‑cash awards; the Director may award prizes larger than $50,000,000 consistent with existing prize-law procedures.

4

Eligibility: a private entity winner must be incorporated in and maintain a primary place of business in the United States; an individual winner (alone or in a team) must be a U.S. citizen or lawful permanent resident.

5

The Director may accept funding from federal and non‑federal entities to support the program, but may not consider that support in determining prize winners; active competitions and prize awards must be posted to Challenge.gov for public access.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1

Short title

Names the measure the "AI Grand Challenges Act of 2026." This is a standard short-title clause that anchors the statute's identity and is relevant when referencing the program in policy or regulatory texts.

Section 2(a)–(b)

Program authorization and scope

Authorizes the NSF Director to create the AI Grand Challenges Program using the prize authorities in section 24 of the Stevenson‑Wydler Act. It enumerates a broad universe of possible challenge categories—national security, health, energy, environment, transportation, manufacturing, quantum computing, materials science, and several others—giving the Director discretion to select sectoral priorities while anchoring the program to concrete problem domains.

Section 2(c)

Challenge selection, problem statements, and public posting

Requires consultation with OSTP and other agencies and the solicitation of public input to identify grand challenges. For each selected challenge the Director must publish a specific problem statement, success metrics, and validation protocols, and link challenge information to Challenge.gov. Requiring explicit metrics and validation protocols is designed to make judging reproducible and to limit ambiguity about what constitutes success.

Section 2(e)–(f)

Eligibility, judging, and prize structure

Directs the Director to set eligibility criteria and judging/verification procedures consistent with Stevenson‑Wydler; explicitly makes winners subject to nationality/incorporation rules and allows private‑sector judges. The statute sets a general minimum cash award ($1M) and allows very large awards (over $50M) under existing statutory procedures, while also permitting non‑cash prizes—giving flexibility to match prize size to the scale and commercial value of the targeted problem.

Section 2(g)–(i)

Funding, donor firewalls, reporting, and access

Permits the NSF Director to accept funds from federal and non‑federal sources but prohibits considering such support when selecting winners, creating a formal firewall. The Director must notify key congressional committees within 60 days of an award and produce a publicly posted biennial report about program activities, active competitions, and outcomes; active prize opportunities also must be posted to Challenge.gov to broaden access.

Section 3

OSTP coordination on public data sets

Directs OSTP to coordinate federal science funders to identify and publish data sets that address foundational scientific problems amenable to AI approaches. This provision aims to lower barriers to entry for competitors by making standardized, high‑quality training and evaluation data available across agencies.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Research universities and academic labs — gain new, high‑visibility targets and potential prize funding that can accelerate translational research and attract industry partners.
  • Biotech, medical AI firms, and clinical researchers — the mandated cancer grand challenge and associated datasets create a concentrated market signal and resources for AI applied to diagnostics, therapeutics, and outcome prediction.
  • AI startups and small companies — Challenge.gov posting and prize formats can lower procurement and market‑entry barriers by offering a non‑dilutive route to capital and validation without traditional grant timelines.
  • Federal program offices and agencies (NIH, DARPA, NIST) — benefit from focused outcome‑oriented demonstrations and access to solutions that can be transitioned into agency missions.
  • Data scientists and open‑data communities — OSTP‑coordinated dataset publication increases access to large, curated datasets that can be reused for research and benchmarking.

Who Bears the Cost

  • National Science Foundation — responsible for designing, administering, and validating complex competitions, which will require staff time, technical expertise, and program management resources.
  • Contributing federal agencies and non‑federal funders — those that provide prize support must allocate budgetary resources without influencing selections and may face opportunity costs relative to other funding mechanisms.
  • Foreign firms and multinational consortia — the incorporation and primary place‑of‑business requirements limit eligibility for winners, potentially excluding non‑U.S. entities from receiving prizes.
  • Small nonprofits and underfunded research teams — while competitions can lower certain entry barriers, they also require investment in rapid development and validation; these players may need partnerships or intermediaries to compete effectively.
  • NSF Rotator program participants and private‑sector judges — the statute anticipates rotational staffing and private judges, increasing personnel demands and requiring conflict‑of‑interest management and procedural safeguards.

Key Issues

The Core Tension

The central dilemma is this: the statute prioritizes rapid, targeted delivery of AI solutions through prize incentives and domestic winners, which advances national capability and concentrates resources. That focus, however, trades off open, collaborative science and risks incentivizing narrow, benchmark‑optimized outputs. Policymakers must balance speed and national advantage against scientific openness, inclusivity, and robust evaluation.

The bill deploys prize mechanisms to drive targeted AI outcomes, but prizes are not a turnkey substitute for sustained research funding. Large, singular prizes can concentrate resources on specific benchmarks, which can speed solutions to narrowly defined problems but may produce narrow, metric‑driven engineering rather than broad, generalizable scientific progress.

The requirement to publish problem statements and validation protocols reduces ambiguity but creates incentives to optimize for those benchmarks; designing metrics that reward robust, generalizable advances rather than fragile overfitting will be hard and politically salient.

Accepting non‑federal funds expands the program’s resource base but raises governance questions. The statute places a legal firewall between donors and award decisions, yet practical issues remain: donors might shape problem framing through early consultations or in-kind data contributions.

The incorporation/residency eligibility rules prioritize U.S. domestic winners for policy reasons (national security, economic capture), but they also restrict international collaboration and could reduce the diversity of competing approaches. Finally, publishing federal datasets—especially in health, defense, or other sensitive domains—creates competing obligations around privacy, classification, and commercial use; OSTP coordination will need clear guidance on access controls, de‑identification standards, and reuse licenses.
