Codify — Article

AI Grand Challenges Act of 2026 authorizes NSF-run AI prize program

Creates an NSF-administered, multi-category AI prize program and OSTP-led dataset publication to accelerate targeted AI breakthroughs and commercialization.

The Brief

This bill directs the National Science Foundation (NSF) to establish an "AI Grand Challenges Program" that uses prize competitions to push targeted advances in artificial intelligence across a long list of technical and policy-relevant categories. The program is built on the Stevenson‑Wydler prize authority and is designed to move innovations from research toward practical, measurable results.

The statute also requires interagency consultation in challenge selection, mandates a specific grand challenge focused on AI-driven breakthroughs against lethal cancers, and tasks the Office of Science and Technology Policy (OSTP) with coordinating publication of federal datasets that support grand-challenge work. For R&D managers, compliance officers, and program funders, the bill creates a new federal mechanism for incentivizing measurable AI outcomes while attaching reporting, eligibility, and public‑access rules that shape who can compete and how successes are validated.

At a Glance

What It Does

Authorizes NSF to design and run prize competitions for defined AI "grand challenges" using the Stevenson‑Wydler prize statute, publish problem statements and validation protocols on Challenge.gov, and accept external funds to support prizes. OSTP must coordinate publication of federal datasets that align with those challenges.

Who It Affects

NSF program offices, federal science agencies (including NIH for the required cancer challenge), AI developers and teams that must meet U.S. incorporation or residency rules, universities and labs that supply or consume public datasets, and private funders that may sponsor competitions.

Why It Matters

This creates a standing federal prize mechanism focused specifically on AI, sets baseline prize floors and paths for much larger awards, and links open federal datasets to targeted AI tasks—shaping where private R&D investment and academic effort are likely to concentrate.


What This Bill Actually Does

The bill directs the NSF Director to build a standing prize program—the AI Grand Challenges Program—within 12 months. NSF must work with OSTP and may consult other agencies and advisory bodies to pick grand challenges that are specific, measurable, and published publicly.

Each challenge requires a precise problem statement plus success metrics and validation procedures; those materials, and active prize listings, go on Challenge.gov so teams know the bar for winning.

One mandatory challenge targets AI solutions that materially improve outcomes for the most lethal cancers and related comorbidities; the bill ties that challenge to measurable health outcomes and requires cash awards of at least $10 million per winner. For other challenges the statute sets a minimum cash prize per winner and permits non-cash awards; it also allows the Director to offer very large prizes (greater than $50 million) within existing statutory rules. Eligibility and judging rules follow Stevenson‑Wydler standards: winners that are private entities must be incorporated and based in the United States, and individual winners must be U.S. citizens or permanent residents.

NSF must set testing, judging, and verification procedures for submissions, and it may use private‑sector judges. The Director can accept money from other federal, state, local, tribal, nonprofit, or for‑profit entities to support competitions, but the statute forbids considering such support when choosing winners. NSF must report to congressional committees after each prize award and provide a public, biennial report describing activities, active and completed competitions, and outreach efforts.

Separately, OSTP is charged with coordinating federal science funders to identify and publish datasets that address foundational problems amenable to AI solutions, putting shared data resources in reach of competition participants.

The Five Things You Need to Know

1. NSF must establish the AI Grand Challenges Program within 12 months of enactment and post challenge problem statements and validation protocols publicly.

2. The bill requires at least one cancer-focused grand challenge, with cash prize awards of not less than $10,000,000 to each winner of that competition.

3. For non-cancer challenges, NSF must award at least $1,000,000 in cash prizes to each winner, and the agency may award prizes larger than $50,000,000 under existing Stevenson‑Wydler rules.

4. Eligibility limits winners who are private entities to those incorporated and primarily based in the U.S.; individual winners must be U.S. citizens or lawful permanent residents, and judges may include private-sector experts.

5. NSF may accept funds from public and private entities to support prizes but cannot consider such support when determining winners, and NSF must post active competitions to Challenge.gov and submit biennial public reports to Congress.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 2(b)

Creates the AI Grand Challenges Program at NSF

This subsection compels the Director to stand up a program that uses prize competitions to solve measurable AI problems across a long list of domains (national security, health, energy, etc.). Practically, NSF must adapt its prize-management processes, integrate Rotator Program staff if needed, and align internal contracting and outreach so competitions run at scale without requiring new procurement authorities.

Section 2(c)

Selection process, public problem statements, and cancer challenge

NSF must consult OSTP and relevant agencies to pick challenges and solicit public input before finalizing them. For each selected challenge NSF must publish a detailed problem statement, success metrics, and validation protocols on both the NSF site and Challenge.gov. A discrete requirement compels NSF to launch at least one AI grand challenge specifically targeting high‑mortality cancers—with explicit clinical outcome goals—ensuring one competition has both a public‑health focus and a legally prescribed prize floor.

Section 2(e)

Eligibility, judging, and verification rules

NSF must define eligibility standards and testing/judging procedures consistent with Stevenson‑Wydler. Winners that are private entities must be U.S. incorporated and primarily U.S.‑based; individuals must be citizens or permanent residents. The statute permits private‑sector judges, which lets NSF draw technical reviewers from industry but requires procedures to manage conflicts of interest and maintain judging integrity.

Section 2(f)–(g)

Prize sizes and funding sources

The bill sets a general minimum cash prize of $1 million per winner and a $10 million minimum for winners of the mandated cancer challenge; it also authorizes very large prizes above $50 million under existing law. NSF may accept funds from other federal or non‑federal entities to support competitions, but the Director may not treat such contributions as a factor in selecting winners, which preserves impartiality but raises oversight and accounting needs for co‑funding arrangements.

Section 2(h)–(i)

Reporting, transparency, and public posting

NSF must notify Congress within 60 days of each winning submission and provide biennial, publicly posted reports detailing activities, active and completed competitions, and outreach efforts. The bill also requires posting active competitions to Challenge.gov, which standardizes public access but obliges NSF to maintain Challenge.gov listings and ensure that challenge documentation and validation protocols are sufficiently clear for third‑party evaluation.

Section 3

OSTP coordination on publishing federal datasets

OSTP must lead federal agencies that fund science to identify and publish datasets useful for grand-challenge problems. This creates an interagency pathway to surface shared data resources for competition entrants, but it also imports considerations about data governance—including data cleaning, access controls, privacy protections, and licensing—that agencies must resolve before release.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Early-stage AI startups and small teams — Prize incentives and public problem statements lower market entry barriers and create clear commercialization pathways tied to measurable goals, which can attract follow‑on funding and partnerships.
  • Biomedical researchers and clinicians focused on cancer — A mandated, well-funded cancer grand challenge channels AI talent and public datasets toward translational problems with defined clinical outcome metrics, speeding prototype-to-clinical evaluation pipelines.
  • NSF and federal mission offices — The program gives agencies a new tool to catalyze cross-sector innovation without creating new grant programs, enabling targeted investments in high‑impact AI use cases.
  • Academic institutions and national labs — Publicly published datasets and clear validation protocols create reproducible benchmarks that researchers can use for scholarly work and for training students on applied problems.

Who Bears the Cost

  • National Science Foundation program offices — NSF must absorb administrative burdens to design, run, and validate large, multi-stage competitions, including conflict-of-interest safeguards and long-term verification mechanisms.
  • Non‑U.S. firms and multinational research groups — The incorporation and residency eligibility rules exclude many foreign entities from prize winnings, potentially forcing foreign teams to restructure, partner, or refrain from competing.
  • Federal data-holding agencies — Agencies asked to publish datasets face costs for curation, privacy review, and access infrastructure; they also assume legal and reputational risks if released data contain sensitive information.
  • Taxpayers and congressional appropriations — Large cash prizes and agency administrative costs will either require appropriations or reprioritization of existing funds, shifting budget pressure onto federal spending plans.

Key Issues

The Core Tension

The central dilemma is between accelerating concrete AI breakthroughs quickly through high‑visibility, well‑funded prizes and the risks that come with prescriptive competition design: setting success metrics that are both ambitious and verifiable, protecting privacy and safety when publishing datasets, and balancing domestic advantage against the benefits of global collaboration. The bill favors speed and national control; the hard question it leaves is how to preserve rigorous evaluation, equity of access, and responsible data governance while doing so.

The bill blends two common incentives—prizes and open data—but implementing both without unintended harm will be tricky. Prize competitions require tightly specified success metrics and robust validation to avoid rewarding narrow or easily gamed solutions; the statute pushes NSF to publish validation protocols, but building impartial, replicable testing environments (especially for health or national security tasks) demands technical and legal resources that the bill does not separately fund.

Similarly, OSTP’s push to publish datasets can accelerate progress but raises unresolved questions about data licensing, privacy, and whether datasets will be sufficiently representative and labeled to support fair benchmarks.

The rules on eligibility and external funding reduce some risks (for example, by forbidding consideration of sponsor support in winner selection), but they also introduce trade-offs. The U.S.-centric eligibility rules shield prize dollars for domestic actors and are likely intentional, yet they constrain global collaboration on inherently international problems.

Accepting funds from external entities eases budget constraints but creates administrative complexity and potential perception issues about influence unless NSF maintains strict transparency and firewalls around judging and prize allocation.
