The bill directs the Federal Acquisition Security Council to identify artificial intelligence products and services produced or developed by foreign adversaries, requires Office of Management and Budget publication of that list, and tasks executive agencies with excluding or removing listed AI from federal acquisition and use. Agencies may approve limited exceptions for research, testing, counterintelligence/counterterrorism, or to preserve mission‑critical functions, with notice to OMB and congressional oversight committees.
This proposal reorients federal procurement policy toward national‑security risk: it creates a government-maintained blacklist of AI tied to sovereign risk and places explicit exclusion obligations on agencies. For procurement officers, program managers, and vendors, the bill creates new compliance steps, certification pathways for removal from the list, and a concrete process for agency-level exceptions — all of which will affect sourcing, contracting, and vendor due diligence across the federal supply chain.
At a Glance
What It Does
The Federal Acquisition Security Council must compile and keep updated a list of AI produced or developed by foreign adversaries and OMB must publish it. Executive agencies must review acquisitions and consider excluding or removing listed AI, using authorities in 41 U.S.C. 4713, subject to a defined exceptions process.
Who It Affects
Federal procurement offices, program managers for agency IT/AI systems, the Federal Acquisition Security Council and OMB, contractors and vendors that develop or integrate AI, and foreign entities meeting the bill's foreign‑adversary and ownership tests.
Why It Matters
The bill embeds geopolitical risk into federal acquisition decisions and creates a formal mechanism for blacklisting AI tied to adversary states. That changes how agencies assess supplier trustworthiness and creates commercial incentives for suppliers to certify and document provenance of AI components.
What This Bill Actually Does
The bill sets up a government process to identify and limit federal use of AI tied to countries the statute treats as foreign adversaries. It charges the Federal Acquisition Security Council with building the authoritative inventory of suspect AI; OMB then posts that inventory publicly.
The council must refresh the list on a recurring schedule and may remove an item if its owner certifies that the product is not produced or developed by a foreign adversary and the council verifies that claim.
Once the list exists, each executive agency must review its contracts and systems to identify AI from entities on the list and move to exclude or remove those products from acquisition and use. The statute points agencies to procurement‑law tools (specifically 41 U.S.C. 4713) to implement exclusions and mitigation: that is, agencies must use existing authorities to bar procurements, terminate or modify contracts, or apply other acquisition remedies to reduce risk. The bill does not impose an absolute ban: agency heads may grant narrow exceptions but must put the decision in writing and notify OMB and the designated congressional committees.
The allowed exceptions are limited to scientifically valid research, evaluation/training/testing/analysis, counterterrorism or counterintelligence work, or situations where removing the AI would cripple mission‑critical functions. Finally, the statute defines key terms—importing existing federal definitions for "artificial intelligence," adopting the covered‑nation concept for "foreign adversary," and establishing a foreign‑adversary‑entity test that captures entities headquartered in adversary countries, subsidiaries with at least 20% foreign ownership, or actors under foreign control.

Operationally, the bill creates a lifecycle: list creation, public posting, agency review and exclusion, and a vendor-driven removal/certification path.
That lifecycle reallocates both decision authority (toward the Federal Acquisition Security Council and agency heads) and operational burden (to procurement, legal, and program offices) while leaving enforcement and oversight to existing procurement controls and congressional notice requirements.
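The lifecycle described above can be summarized as an ordered set of stages with a vendor-driven removal path. The following is a purely illustrative sketch, not anything the bill specifies; the stage names and transition rules are assumptions drawn from the sequence described in the text.

```python
from enum import Enum, auto

class ListedAIStage(Enum):
    """Illustrative stages for a listed AI product (labels are assumptions)."""
    IDENTIFIED_BY_COUNCIL = auto()   # Federal Acquisition Security Council adds item to the list
    PUBLISHED_BY_OMB = auto()        # OMB posts the list on a public website
    AGENCY_REVIEW = auto()           # agencies review acquisitions for listed AI
    EXCLUDED_OR_EXCEPTED = auto()    # exclusion under 41 U.S.C. 4713, or a written exception
    REMOVAL_CERTIFIED = auto()       # owner certification plus council review delists the item

# Simplified transition map: the owner-certification path can begin
# at any point after the item is listed.
TRANSITIONS = {
    ListedAIStage.IDENTIFIED_BY_COUNCIL: {ListedAIStage.PUBLISHED_BY_OMB,
                                          ListedAIStage.REMOVAL_CERTIFIED},
    ListedAIStage.PUBLISHED_BY_OMB: {ListedAIStage.AGENCY_REVIEW,
                                     ListedAIStage.REMOVAL_CERTIFIED},
    ListedAIStage.AGENCY_REVIEW: {ListedAIStage.EXCLUDED_OR_EXCEPTED,
                                  ListedAIStage.REMOVAL_CERTIFIED},
    ListedAIStage.EXCLUDED_OR_EXCEPTED: {ListedAIStage.REMOVAL_CERTIFIED},
    ListedAIStage.REMOVAL_CERTIFIED: set(),
}

def can_transition(a: ListedAIStage, b: ListedAIStage) -> bool:
    """Return True if stage b can directly follow stage a in this sketch."""
    return b in TRANSITIONS[a]
```

The point of the sketch is the shape of the process: every stage after listing feeds either forward into exclusion or sideways into the certification-and-delisting path.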
The Five Things You Need to Know
The Federal Acquisition Security Council must develop a list of AI produced or developed by foreign adversaries and OMB must publish it on a public website.
The bill sets discrete timetables: the council must develop the list within 60 days of enactment; OMB must publish it within 180 days; the council must update the list at least every 180 days.
Executive agencies must review and consider excluding or removing listed AI within 90 days of enactment and are directed to use the authorities in 41 U.S.C. 4713 to mitigate procurement risks.
A listed AI can be removed only if the owner submits a certification that it is not produced or developed by a foreign adversary and the council reviews and certifies that claim.
The bill defines a foreign‑adversary entity to include firms domiciled in an adversary country, entities with at least a 20% ownership stake by such interests, or persons subject to the direction or control of those actors.
Section-by-Section Breakdown
Short title
Provides the Act's name, the 'No Adversarial AI Act.' This is purely stylistic but signals the bill's focus on preventing the federal government from procuring or using AI linked to defined foreign adversaries.
Create, publish, and update a foreign‑adversary AI list
Subsection (a) tasks the Federal Acquisition Security Council with producing an inventory of AI "produced or developed by a foreign adversary." Subsection (b) requires OMB, coordinating with that council, to post the list publicly. Subsection (c) demands a recurring update cadence (at least every 180 days) and sets out a removal process: an owner can submit a certification and supporting information; the council must review and then certify removal before the item is delisted. Practically, this section centralizes provenance determinations in a security-focused council and creates a vendor-facing certification pathway for remediation.
Agency review and exclusion using procurement authorities
Section 3 requires heads of executive agencies to review acquisitions and consider the exclusion or removal of AI supplied by covered foreign adversary entities named on the list. The provision expressly points agencies to 41 U.S.C. 4713 — acquisition authorities that can exclude vendors or products, terminate acquisitions, or impose mitigations — effectively instructing contracting officers and program managers to treat listed AI as a procurement security risk and to use established legal tools to act on that risk.
Exceptions and notice to OMB and Congress
This subsection lets an agency head approve a written exception only after notifying the OMB Director and the two named congressional committees. The statute enumerates permissible exceptions—scientifically valid research, evaluation/training/testing/analysis, counterterrorism/counterintelligence activities, and protection of mission‑critical functions—thus limiting but not eliminating operational flexibility. The written‑notice requirement creates a public and congressional record that agencies must rely on when invoking exceptions.
Definitions that set scope of coverage
This subsection imports the federal definition of 'artificial intelligence' from the National AI Initiative Act, uses the 'covered nation' concept from Title 10 to define 'foreign adversary,' and defines 'foreign adversary entity' broadly to include entities domiciled in adversary countries, those with at least 20% foreign ownership by such actors, or entities under their direction or control. The definitions establish the legal boundaries for which AI products and suppliers fall under the list and the exclusion regime.
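As a purely illustrative sketch of the three-prong entity test described above (not statutory language), the logic could be modeled as a simple predicate. The field names, the fractional encoding of the 20% threshold, and the assumption that any single prong suffices are illustrative assumptions, not text from the bill.

```python
from dataclasses import dataclass

# "At least 20%" ownership prong, encoded as a fraction (assumption for illustration).
FOREIGN_OWNERSHIP_THRESHOLD = 0.20

@dataclass
class Entity:
    headquartered_in_adversary_country: bool
    adversary_ownership_share: float        # fraction of ownership held by adversary interests
    under_adversary_direction_or_control: bool

def is_foreign_adversary_entity(e: Entity) -> bool:
    """Illustrative three-prong test: meeting any one prong triggers coverage."""
    return (
        e.headquartered_in_adversary_country
        or e.adversary_ownership_share >= FOREIGN_OWNERSHIP_THRESHOLD
        or e.under_adversary_direction_or_control
    )

# A subsidiary with exactly 20% adversary ownership meets the second prong.
print(is_foreign_adversary_entity(Entity(False, 0.20, False)))  # prints True
```

Note that because the prongs are disjunctive, a vendor seeking delisting would, under this reading, need to document that none of the three conditions applies.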
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Federal security, counterintelligence, and acquisition officials — gain an explicit, centralized tool (a government-maintained list and statutory exclusion authority) to reduce supply‑chain risk from AI tied to adversary states.
- Intelligence and defense programs — receive a clearer statutory basis to limit foreign‑sourced AI in sensitive systems and to assert exceptions for operational testing, training, and counterintelligence work.
- Domestic AI vendors that can document provenance — have a disclosed pathway to remove products from the list via owner certification and council review, creating a compliance market advantage for transparent suppliers.
- Procurement and program managers responsible for high‑risk systems — obtain a binary signal (listed versus not listed) to prioritize mitigation and contracting decisions instead of inventing bespoke provenance rules.
Who Bears the Cost
- Executive agencies and their contracting offices — must perform inventory reviews, implement exclusions or contract modifications, and document exceptions, creating staffing, legal, and operational costs.
- Foreign firms and subsidiaries identified as foreign‑adversary entities — face de facto exclusion from federal contracts, reputational harm, and potential forced unwinding of existing integrations.
- Prime contractors and integrators that rely on multinational supply chains — must trace AI components, substitute products, or establish certification processes, potentially increasing program costs and delivery timelines.
- The Federal Acquisition Security Council and OMB — absorb administrative burdens to create, publish, update, and adjudicate removal petitions on a recurring schedule without additional funding specified in the text.
Key Issues
The Core Tension
The bill pits two legitimate priorities against each other: protecting federal systems from AI tied to hostile states, and preserving operational continuity, procurement flexibility, and open supplier markets. Achieving both requires precise, resource‑intensive provenance determinations and narrowly framed exceptions, yet the statute centralizes those decisions without prescribing the technical standards or funding needed to make them reliably.
The bill delegates high‑stakes, technical provenance judgments to the Federal Acquisition Security Council but provides little detail on the evidentiary standards or technical criteria the council must use to conclude an AI product is produced or developed by a foreign adversary. That gap risks uneven determinations, inconsistent vendor treatment, and potential legal challenges absent transparent standards and technical review capacity.
The vendor certification and council review path addresses remediation but depends on the council’s ability to evaluate claims — which may require classified or proprietary information and raise disclosure or evidentiary hurdles.
Operational tensions arise between the statute’s security aims and federal program continuity. The exceptions carve out important uses (research, testing, counterintelligence, mission‑critical continuity), but terms like 'mission critical' are fact‑specific and could swallow the prohibition if agencies invoke them broadly.
The directive to use 41 U.S.C. 4713 points agencies to procurement remedies but leaves integration with existing contracting rules, including bilateral contract clauses and subcontractor obligations, ambiguous — which will complicate immediate implementation. Finally, the public posting of the list improves transparency but may create commercial and diplomatic friction, particularly where affiliation and ownership are complex, or where adjudication would benefit from confidential review.