Codify — Article

AI Guardrails Act of 2026 restricts DoD AI use, limits autonomous lethal action

Sets explicit bans on AI for nuclear launch and domestic profiling, requires Secretary-level waivers, and ties autonomous-weapon exceptions to human-error parity and rapid congressional notice.

The Brief

The AI Guardrails Act of 2026 imposes targeted limits on the Department of Defense’s use of artificial intelligence. It bars AI from executing nuclear launches, forbids AI-driven monitoring/targeting of people in the United States without an individualized, articulable legal basis, and requires human judgment for lethal action by autonomous weapon systems; other autonomous uses must comply with DoD Directive 3000.09.

The bill creates a narrow waiver route: the Secretary of Defense, without delegation, may grant one-year waivers for lethal autonomous systems only after a written certification that extraordinary national-security circumstances exist and that the system’s error probability does not exceed the documented error rate of trained human operators. The Secretary must notify congressional defense committees within five days and provide detailed operational, testing, and mitigation information.

The measure aims to preserve operational flexibility while inserting measurable, oversight-oriented constraints into DoD AI deployment decisions.

At a Glance

What It Does

The bill prohibits specific AI uses—nuclear launch automation, domestic profiling without an individualized legal basis, and lethal autonomous action without adequate human judgment—and ties any exception to a Secretary-level waiver showing human-error parity. It also requires swift congressional notifications that include technical and operational test results.

Who It Affects

Primary actors affected are the Department of Defense’s acquisition and operational communities, prime contractors and AI system developers working on weapon or surveillance systems, combatant commanders proposing autonomy deployments, and congressional defense oversight offices that will receive timely waiver notifications.

Why It Matters

This is a procedural and technical constraint designed to lock in civil-liberty and safety guardrails while permitting exceptional operational use under strict certification and transparency rules. For program managers and vendors it creates compliance gates tied to testing, documentation, and political review that can alter procurement timelines and technical design choices.


What This Bill Actually Does

The Act sets out a short, targeted set of prohibitions. It draws a bright line around three classes of AI uses: any automation of nuclear weapon launching or detonation; AI-driven monitoring, tracking, profiling, or targeting of people or groups in the United States unless there is an individualized, articulable legal basis; and use of lethal force by autonomous weapon systems absent "appropriate levels" of human judgment and supervision.

The bill also anchors other autonomous-weapon activity to existing DoD policy (Directive 3000.09), making compliance with that directive a statutory touchstone for non-lethal or assisted-autonomy use.

Rather than an absolute ban on autonomous lethal systems, the bill gives the Secretary of Defense a narrow waiver power. The Secretary cannot delegate that power, and each waiver is limited to one year (renewable).

A waiver must be accompanied by a written certification that extraordinary national-security circumstances require the waiver and, critically, that the system’s likelihood of producing results inconsistent with commander intent does not exceed the documented error rate of trained human operators performing equivalent tasks under equivalent conditions. That comparison—tying machine performance to human error—pushes program offices to generate realistic developmental and operational test data and to maintain robust operator training and doctrine.

Notifications and transparency are central to the enforcement model.

The Secretary must notify the congressional defense committees within five days of issuing a waiver for formal development or fielding, or whenever a significant modification changes algorithms, mission sets, environments, target sets, or anticipated adversary countermeasures. Each notice must set out the rationale, system description, operational parameters and safeguards (including activation/deactivation procedures and post-deployment monitoring), testing results demonstrating error-rate parity with humans, intended timeframe and geography, and measures to minimize unintended engagements.

Notices are unclassified by default but may include classified annexes.

Finally, the bill adopts an existing statutory definition of "artificial intelligence" by reference to the National AI Initiative Act of 2020. That choice narrows definitional fights but imports any ambiguities from that earlier statute into this statute’s compliance regime.

Taken together, the Act does not prohibit DoD from using AI broadly, but it channels particularly risky uses through a combination of statutory bans, a high bar for waivers, documentation requirements, and rapid congressional oversight.

The Five Things You Need to Know

1. The bill bars any use of AI to execute the launching or detonation of a nuclear weapon—an absolute categorical prohibition.

2. It prohibits DoD from using AI to monitor, track, profile, or target individuals or groups in the United States without an individualized, articulable legal basis, and forbids using AI solely to monitor First Amendment–protected activity.

3. The bill requires "appropriate levels of human judgment and supervision" for any lethal force by autonomous weapon systems and references DoD Directive 3000.09 for other autonomy uses.

4. The Secretary of Defense, without delegation, may waive the lethal-autonomy prohibition for up to one year (renewable) only after certifying extraordinary national-security circumstances and that the system’s error probability does not exceed documented human operator error.

5. For each waiver the Secretary must notify congressional defense committees within five days and include a rationale, system description, operational safeguards, test results showing human-error parity, intended timeframe/geography, mitigation measures, and operator procedures (unclassified with an optional classified annex).

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1

Short title

This single-line section gives the Act its working name, the "AI Guardrails Act of 2026." Practically, this is the citation reference used in other instruments and briefing documents; it has no substantive effect on implementation or compliance.

Section 2(a)

Sense of Congress on AI and defense

This subsection states Congress’s view that the United States should adopt AI aggressively to maintain military preeminence while making those uses secure and reliable. That framing signals congressional intent to balance innovation and restraint, and it can shape judicial and administrative interpretation of the statute’s prohibitions and waiver provisions.

Section 2(b)

Three categorical limitations on DoD AI use

Subsection (b) contains the operative prohibitions. Clause (1) establishes an absolute ban on AI for executing or detonating nuclear weapons. Clause (2) restricts domestic monitoring/targeting, requiring an individualized, articulable legal basis regardless of data origin and specifically protecting First Amendment activities from sole-AI monitoring. Clause (3) restricts lethal autonomous weapon use absent sufficient human judgment and ties remaining autonomy activity to compliance with DoD Directive 3000.09. For program managers this creates a legal constraint layered over policy: some architectures and operational concepts will now be statutorily disallowed or require additional compliance work.

Section 2(c)

Secretary-level waiver procedure and notification requirements

This subsection gives the Secretary of Defense a non-delegable waiver authority for the lethal-autonomy prohibition, limited to one-year terms and renewable. Waiver issuance requires a written certification that extraordinary national-security circumstances exist and that the system’s probability of producing results inconsistent with commander intent does not exceed documented error rates for trained human operators in equivalent conditions. The Secretary must notify congressional defense committees within five days for waivers related to development, fielding, or substantial modifications, and each notification must include a list of specific elements: rationale, system description, performance and testing data, doctrine and training, operational timeframe and geography, mitigation measures, activation/deactivation procedures, and continuous monitoring plans. Notices are unclassified by default but may include classified annexes.

Section 2(d)

Definition of artificial intelligence

The Act adopts the definition of "artificial intelligence" from section 5002 of the National Artificial Intelligence Initiative Act of 2020 (15 U.S.C. 9401). That cross-reference avoids drafting a new statutory definition but imports the scope and ambiguities of the earlier definition into this statute, affecting which systems fall inside these prohibitions and waiver rules.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • U.S. civilians concerned about surveillance: The ban on AI-driven domestic monitoring without an individualized, articulable legal basis protects people from broad, automated profiling and places a legal barrier between DoD data use and First Amendment activity.
  • Service members and commanders seeking clear rules of engagement: By tying lethal-autonomy decisions to human judgment and documented test standards, the bill gives commanders a legal framework and forces investment in training and doctrine.
  • Congressional oversight offices: The five-day notification requirement with detailed technical annexes strengthens committees’ situational awareness and ability to scrutinize high-risk deployments.
  • Allied and partner states advocating norms: The statute crystallizes U.S. policy lines on nuclear automation and autonomous lethal action, which could support allied efforts to set international standards.

Who Bears the Cost

  • DoD acquisition and program offices: They must produce realistic testing evidence, operational parameters, and mitigation plans to secure waivers or demonstrate compliance, adding time and expense to development and fielding schedules.
  • Contractors and AI vendors: Companies building autonomy, targeting, or surveillance systems will face added compliance burdens—detailed testing, documentation, and potential redesigns to ensure human-in-the-loop controls and demonstrable error-rate parity.
  • Combatant commanders and operational planners: Limitations on autonomous lethal action reduce certain tactical options and may require alternative force-package designs or additional personnel to retain human judgment in the loop.
  • Classified research and classified program managers: Rapid notifications and required content (even if partly classified) increase coordination burdens and risk revealing program intent to oversight bodies or, via leaks, adversaries.
  • Testing and evaluation ranges and labs: The bill raises the bar for operational and developmental testing, increasing demand for realistic testbeds, instrumentation, and sampling to establish comparative human error rates.

Key Issues

The Core Tension

The central dilemma is balancing military effectiveness and speed of decision, which favor automation and delegated technical authority, against constitutional protections, humanitarian concerns, and the need for measurable safety guarantees. The bill tries to thread this needle with a high evidentiary bar for exceptions, but doing so forces technically fraught comparisons between human and machine performance and concentrates politically sensitive choices at the Secretary level.

The bill inserts quantifiable standards (error-rate parity, five-day notifications, one-year waiver limits) into domains that are technically messy and operationally dynamic. Measuring a system’s "probability of producing a result inconsistent with commander intent" and comparing it to the "documented error rate" of trained human operators requires carefully designed, realistic test regimes; absent agreed-upon methodologies, the comparison could become a source of dispute between program offices, testing authorities, and overseers.

That technical ambiguity could either slow deployments while methodologies are developed or create space for contested certifications that courts or Congress must later adjudicate.
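The parity comparison described above is, at bottom, a statistical question, and the bill specifies no methodology for answering it. A minimal sketch of one plausible approach—an exact one-sided binomial test with purely hypothetical numbers—shows why sample size alone can become a point of dispute:

```python
import math

def parity_demonstrated(machine_errors: int, machine_trials: int,
                        human_error_rate: float, alpha: float = 0.05) -> bool:
    """Exact one-sided binomial test, illustrative only.

    H0: the system's error probability is at least the documented human
        error rate (parity NOT shown).
    H1: the system's error probability is below the human error rate.

    Rejecting H0 at significance `alpha` would count as evidence of
    parity. All inputs here are hypothetical, not drawn from the bill.
    """
    # p-value: P(X <= machine_errors) if errors occurred at the human rate
    p_value = sum(
        math.comb(machine_trials, k)
        * human_error_rate ** k
        * (1 - human_error_rate) ** (machine_trials - k)
        for k in range(machine_errors + 1)
    )
    return p_value < alpha

# A small test campaign cannot demonstrate parity even with a perfect
# record, while a larger one can tolerate a few observed errors:
print(parity_demonstrated(0, 20, 0.02))    # False: too few trials
print(parity_demonstrated(2, 1000, 0.02))  # True
```

Under this (assumed) framing, a program office that ran only 20 engagements with zero errors still could not certify parity against a 2% human error rate, whereas 1,000 engagements with two errors could—exactly the kind of methodological gap program offices, testing authorities, and overseers would have to negotiate.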

The statute’s rule that domestic monitoring restrictions apply "regardless of the origin of the data used" complicates intelligence and dual-use data practices. DoD often ingests commercial or foreign-sourced datasets; the provision forces legal and compliance teams to trace data provenance and to apply constitutional constraints even when the data originated outside typical domestic collection channels.

Finally, the non-delegable waiver tied to extraordinary circumstances centralizes decision-making at the Secretary level; that design prevents delegation but also creates a potential operational bottleneck in crises and concentrates political accountability at the top—raising questions about how often waivers will be used and how Congress will respond to repeated renewals.
