
Creates CISA grant program to fund secure AI cyber-physical grid testbeds

Directs DHS and CISA to stand up a grant program for national labs and colleges to build protected AI testbeds that simulate grid-scale cyberattacks and support safe model training.

The Brief

This bill establishes a federal grant program to support the development of secure AI-enabled cyber-physical testbeds that can simulate attacks on power grids and provide controlled environments for training AI defenses. Eligible recipients are National Laboratories, institutions of higher education, and consortia composed of those entities.

The measure creates a focused, research-oriented funding channel within the federal cybersecurity ecosystem to accelerate practical tools and expertise for grid resilience. By funding dedicated testbeds and requiring regular reporting to Congress, it aims to concentrate technical work and surface policy recommendations on AI-driven threats and mitigations.

At a Glance

What It Does

The bill requires the Director of CISA and the Secretary of Homeland Security to establish a grant program that awards eligible entities funding to create secure AI cyber-physical testbeds designed to simulate grid-scale cyberattacks and enable safe AI model training. It also mandates joint reporting to Congress on threats, mitigation progress, and recommended legislative or regulatory actions.

Who It Affects

The primary recipients are National Laboratories and institutions of higher education (including community colleges, public universities, and Hispanic‑serving institutions), along with consortia that combine those actors. Indirectly affected parties include utilities, grid operators, AI security researchers, and federal cybersecurity planners.

Why It Matters

The bill channels federal R&D funding into operationally realistic but controlled environments for adversarial testing and AI training—areas where commercial platforms and existing lab facilities are fragmented. That concentrated capability can shorten the time between threat discovery and deployable defenses, while creating a formal feedback loop between technologists and policymakers.


What This Bill Actually Does

The heart of the bill is a targeted grant program administered jointly by CISA and the Department of Homeland Security. The program is meant to produce physically realistic, networked testbeds that couple software, control systems, and emulated electrical infrastructure so researchers can run adversarial scenarios without endangering live systems.
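
To make that coupling concrete, here is a deliberately toy sketch of the kind of closed co-simulation loop such a testbed might host: a one-variable frequency model standing in for the physical grid, a naive proportional controller standing in for the control system, and a false-data-injection attack on the sensor channel. Every name, constant, and dynamic here is hypothetical; nothing below is drawn from the bill's text or any real testbed design.

```python
import random

DT = 0.1           # simulation step, seconds (hypothetical)
NOMINAL_HZ = 60.0  # nominal grid frequency

def plant(freq, power_imbalance):
    """Toy swing dynamics: frequency drifts with the load/generation imbalance."""
    return freq + DT * (-0.5 * power_imbalance)

def governor(measured_freq):
    """Naive proportional controller: adjust generation to pull frequency to nominal."""
    return 2.0 * (measured_freq - NOMINAL_HZ)

def sensor(true_freq, attack_active):
    """Measurement channel; a false-data injection biases the reading upward."""
    noise = random.gauss(0.0, 0.005)
    bias = 0.4 if attack_active else 0.0  # attacker spoofs a high reading
    return true_freq + noise + bias

def run(steps=600, attack_window=(200, 400)):
    freq, imbalance, log = NOMINAL_HZ, 0.0, []
    for t in range(steps):
        attack = attack_window[0] <= t < attack_window[1]
        measured = sensor(freq, attack)
        imbalance = governor(measured)   # controller acts on spoofed data
        freq = plant(freq, imbalance)
        log.append((t * DT, freq, measured, attack))
    return log

if __name__ == "__main__":
    for t, freq, measured, attack in run()[::50]:
        flag = "ATTACK" if attack else ""
        print(f"t={t:6.1f}s  true={freq:7.3f}Hz  measured={measured:7.3f}Hz  {flag}")
```

Even this toy illustrates the cyber-physical point: the attacker never touches the plant, yet by spoofing the measurement it drives the true frequency off nominal through the controller's own corrective response.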

Recipients must design these environments with security controls that prevent misuse and enable researchers to train AI models against realistic threat behaviors.
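
In the same illustrative spirit, and again assuming nothing about the bill's actual requirements, a recipient might exercise "training against realistic threat behaviors" by fitting even a trivial detector to telemetry recorded from such scenarios. The sketch below uses a z-score over a trailing baseline as a stand-in for the AI models the text contemplates; the telemetry generator, window, and threshold are invented for illustration.

```python
import random
import statistics

NOMINAL_HZ = 60.0

def telemetry(steps=600, attack_window=(200, 400)):
    """Synthetic sensor stream: small noise normally, a spoofed bias during the attack."""
    for t in range(steps):
        attack = attack_window[0] <= t < attack_window[1]
        bias = 0.4 if attack else 0.0  # hypothetical false-data injection
        yield t, NOMINAL_HZ + random.gauss(0.0, 0.005) + bias, attack

def detect(stream, window=50, threshold=6.0):
    """Flag readings whose z-score against a trailing baseline exceeds the threshold."""
    history = []
    for t, reading, truth in stream:
        if len(history) >= window:
            mean = statistics.fmean(history)
            spread = statistics.stdev(history) or 1e-9  # guard against zero spread
            if abs(reading - mean) / spread > threshold:
                yield t, reading, truth
        history.append(reading)
        history = history[-window:]

if __name__ == "__main__":
    random.seed(1)
    alerts = list(detect(telemetry()))
    inside = sum(1 for _, _, truth in alerts if truth)
    print(f"{len(alerts)} alerts total, {inside} inside the true attack window")
```

A real program would swap the z-score for learned models and the synthetic stream for testbed captures, but the evaluation loop (inject a known attack, then measure which alerts land inside the attack window) carries over.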

Alongside grant awards, the agencies must compile and send annual assessments to Congress describing evolving threats, how AI-based mitigations are performing, and what further legislative or regulatory steps the agencies recommend. Those reports are intended to translate hands-on R&D outcomes into actionable policy guidance, bridging a gap between lab results and national-level preparedness.

The bill defines eligible applicants narrowly to focus funding on organizations with technical capacity and mission alignment: National Laboratories and accredited higher-education institutions, with explicit allowance for consortia.

It ties the statutory definition of “AI” to an existing federal definition in the 2019 National Defense Authorization Act, ensuring consistency with prior federal practice.

Practically, grant recipients will face two linked responsibilities: build and operate secure, realistic testbeds, and demonstrate how those testbeds enable safe AI model training and evaluation. The program therefore creates both infrastructure and a testing stewardship function: recipients will need governance, data-handling protocols, and partnerships with utilities or system owners to validate results while keeping sensitive operational details protected.

The Five Things You Need to Know

1. The Director of CISA and the Secretary of Homeland Security must jointly establish the grant program within 180 days after enactment.

2. The bill authorizes $100,000,000 total for fiscal years 2026 through 2030 to fund awards under the program (an authorization of appropriations, not a direct appropriation).

3. Eligible entities are National Laboratories, institutions of higher education (including public colleges, community colleges, and Hispanic‑serving institutions), and consortia composed of those entities.

4. Agencies must submit a joint report to Congress not later than one year after enactment and annually thereafter through 2031, covering evolving threats, AI mitigation progress, and recommendations for further legislative or regulatory action.

5. The statutory definition of “AI” refers to the meaning provided in section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019, aligning terminology with existing federal law.

Section-by-Section Breakdown


Section 1

Short title

This single-line section names the statute the “AI Cyber Grid Protection Resilient Development Act of 2026.” It carries no substantive duties but frames the bill’s focus on AI, grid protection, and resilience.

Section 2(a)

Establishes the joint grant program

Directs the Director of CISA and the Secretary of Homeland Security to create a grant program that awards eligible entities funding to develop secure AI cyber-physical testbeds. The statutory purpose is twofold: to simulate grid-scale cyberattacks and to provide controlled settings for training AI models. Practically, joint administration implies the agencies will need to design application criteria, security standards for testbeds, and monitoring mechanisms to ensure research does not create new vulnerabilities.

Section 2(b)

Requires joint reporting to Congress

Mandates a joint report to Congress beginning one year after enactment and annually through 2031. Reports must document evolving threats, summarize progress on AI mitigations developed or tested in the program, and provide recommendations for legislative or regulatory follow-up. These reports convert technical progress into policy inputs, but they also create a recurring information burden the agencies must staff and coordinate around.

Section 2(c)

Authorizes funding to make grants

Authorizes $100 million for fiscal years 2026–2030, an average of $20 million per year, to fund awards. The provision authorizes appropriations rather than directly obligating funds, so actual grant levels will depend on subsequent appropriations decisions. The explicit multi‑year authorization signals Congress’s intent to sustain program activity across several funding cycles, which affects planning and commitments by prospective recipients.

Section 2(d)

Defines key terms and eligible recipients

Defines “AI” by cross-reference to the FY2019 NDAA and defines “eligible entity” to include National Laboratories and institutions of higher education (with explicit callouts for public colleges, community colleges, and Hispanic‑serving institutions), and allows consortia. Those choices expand participation beyond elite research universities to institutions with regional ties, but they also raise questions about capacity and the need for partnership models to deploy complex testbeds.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • National Laboratories — gain dedicated federal funding and a formal role to build and host large-scale, secure testbeds that align with their mission of national security research.
  • Colleges and community colleges (including Hispanic‑serving institutions) — receive access to grant funding and partnership opportunities that can build local cyber‑R&D capacity and workforce pipelines.
  • AI security researchers and vendors — get operationally realistic platforms to evaluate defenses, accelerate prototype validation, and de-risk deployments for utilities.
  • Grid operators and utilities — benefit indirectly from more mature, tested AI tools and clearer mitigation playbooks tailored to grid-scale threats.
  • Federal cybersecurity planners (CISA, DHS) — obtain a structured R&D-to-policy feedback loop and recurring evidence to inform national-level regulations and guidance.

Who Bears the Cost

  • Taxpayers — will fund the authorized $100 million if appropriated, and the value depends on program design and oversight to avoid sunk costs in unusable infrastructure.
  • DHS/CISA — must design, implement, monitor, and report on the program, adding staffing and coordination burdens without dedicated administrative funding spelled out in the text.
  • Smaller higher‑education institutions and community colleges — may need to form consortia or secure matching resources to meet the technical and security requirements of building and operating realistic testbeds.
  • National Laboratories and university recipients — assume operational costs, security compliance, and potential liability exposures tied to hosting adversarial testing and sensitive data.
  • Utilities and system owners participating in exercises — may face costs and operational constraints to engage safely with testbeds and to provide data or system models under strict safeguards.

Key Issues

The Core Tension

The central dilemma is between realism and restriction. The program must create sufficiently realistic, adversarial test environments to find and fix vulnerabilities, yet those same environments risk producing sensitive capabilities or exposing operational details. Policymakers must therefore balance openness for research and rapid defense development against tight controls that prevent misuse and protect critical infrastructure.

The bill leaves several implementation details open that will materially affect outcomes. It mandates secure, realistic testbeds but does not specify security standards, oversight mechanisms, or risk‑limit thresholds for dangerous experiments; those details will fall to CISA and DHS rulemaking, grant guidance, or interagency memoranda.

Similarly, the statute authorizes funding but does not appropriate it; program scale and continuity hinge on appropriations decisions and the agencies’ ability to allocate administrative resources for selection, monitoring, and enforcement.

Operational realism and safety are in tension. Achieving convincing, grid‑scale emulation requires detailed models and potentially sensitive operational data from utilities; protecting that information while enabling effective adversarial testing will require careful data governance, non‑disclosure arrangements, and technical isolation.
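
One concrete, if simplified, illustration of what that data governance could involve: sanitizing operational telemetry before it reaches a research enclave. The field names, keying scheme, and bucket sizes below are all hypothetical choices for illustration, not anything the bill prescribes.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # hypothetical per-program key, held by the data owner

def pseudonymize(asset_id: str) -> str:
    """Replace a real substation/feeder ID with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, asset_id.encode(), hashlib.sha256).hexdigest()[:12]

def sanitize(record: dict) -> dict:
    """Drop fields a utility would withhold; coarsen what researchers may keep."""
    return {
        "asset": pseudonymize(record["asset"]),
        "t_bucket": record["timestamp"] - record["timestamp"] % 300,  # 5-min buckets
        "freq_hz": round(record["freq_hz"], 2),  # shed precision that fingerprints devices
    }

print(sanitize({"asset": "SUB-4471-FDR-02", "timestamp": 1754902143, "freq_hz": 59.98123}))
```

Keyed pseudonyms let researchers correlate records for the same asset without learning which substation it is, while coarsened timestamps and rounded readings limit re-identification; this is exactly the kind of trade-off the non-disclosure arrangements and isolation requirements would have to pin down.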

The reference to a statutory AI definition ties the program to existing federal language but does not resolve standards for “safe” model training, model evaluation metrics, or handling dual‑use outputs. Finally, the bill does not address coordination with overlapping federal efforts (for example, Department of Energy programs, NIST standards work, or existing lab testbeds), leaving scope for duplication or gaps unless agencies harmonize roles.
