
NSA must develop guidance to secure advanced AI and its supply chain

Directs the NSA’s Artificial Intelligence Security Center to produce unclassified and classified guidance, consult industry and labs, and report progress — setting baseline practices for protecting high-risk AI.

The Brief

The Advanced Artificial Intelligence Security Readiness Act of 2025 requires the Director of the National Security Agency, through the agency’s Artificial Intelligence Security Center, to develop and disseminate security guidance focused on vulnerabilities in advanced AI technologies and the broader AI supply chain. The guidance must identify unique cyber threat vectors, highlight supply‑chain elements whose compromise would enable foreign actors to replicate or sabotage AI capabilities, and recommend protective, detective, response, and recovery measures.

This bill matters because it centralizes a federal effort to translate national‑security risk into concrete security practices for AI developers, cloud providers, research labs, and contractors. By producing both unclassified best practices and classified materials for briefings, the Act aims to bridge national security needs and private‑sector operational realities without creating new penalties or regulatory authorities in the text of the bill itself.

At a Glance

What It Does

Directs the NSA to create detailed security guidance addressing vulnerabilities unique to AI systems and AI supply chains, produce an unclassified set of best practices and a classified annex or briefing materials, and consult with industry, National Laboratories, and relevant federal agencies.

Who It Affects

Targets developers of advanced AI models, cloud and HPC providers that host training/inference environments, university and government research centers, defense contractors, and service providers that manage model artifacts such as weights and datasets.

Why It Matters

Establishes a single federal focal point for AI security practices tied to national security risks, which is likely to shape private‑sector operational standards and influence procurement, contracting, and cross‑agency coordination on protecting high‑risk AI capabilities.


What This Bill Actually Does

The bill instructs the NSA — via its Artificial Intelligence Security Center — to produce a practical set of security guidelines focused squarely on protecting advanced AI systems and the chain of components that make those systems possible. Rather than rebranding conventional IT security, the guidance must explain threat vectors and risks that are particular to AI: model theft, exploitation through public interfaces, side‑channel attacks, and risks introduced by data and training environments.

The guidance must both identify which supply‑chain elements are most valuable to an adversary and give operators protection playbooks: safeguarding model weights and artifacts, personnel vetting to reduce insider risk, network access controls, counterintelligence measures, and incident response tailored to AI compromises. The bill requires an unclassified, detailed best‑practice document suitable for broad dissemination and permits a classified annex and classified briefing materials for sensitive, provider‑specific security briefings.

To develop grounded guidance, the NSA must engage with prominent AI developers and researchers, review industry materials, interview subject‑matter experts, host workshops, and visit development facilities.

The agency must also leverage federal R&D assets — National Laboratories, university‑affiliated centers, and federally funded R&D centers — and consult relevant agencies such as Commerce (BIS), NIST’s AI center, DHS, and DOD. Those engagements are procedural requirements built into the bill’s design, not optional suggestions.

The Act formalizes reporting deadlines: an initial status report to congressional intelligence committees within 180 days describing progress and outstanding work, and a final report within 365 days that must include an unclassified version appropriate for private‑sector use and a publicly available version, and may include a classified annex.

The statute defines key terms — including “covered artificial intelligence technologies” (high‑risk systems described by capability domains), “AI supply chain,” “technology theft,” and “threat actors” — which will guide which systems and actors the guidance targets. The statute itself prescribes guidance development and dissemination but does not create enforcement tools or new statutory penalties for noncompliance.
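The statute prescribes topics, not implementations, but one recurring mitigation theme, protecting model weights at rest, is easy to make concrete. The sketch below is a minimal Python illustration assuming the third‑party cryptography package; the file names, key handling, and digest workflow are our own assumptions, not anything specified in the bill.

```python
import hashlib
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography


def seal_weights(weights_path: str, key: bytes) -> tuple[Path, str]:
    """Encrypt a weights file at rest and record its pre-encryption digest."""
    raw = Path(weights_path).read_bytes()
    digest = hashlib.sha256(raw).hexdigest()      # integrity baseline
    sealed = Path(weights_path + ".enc")
    sealed.write_bytes(Fernet(key).encrypt(raw))  # confidentiality at rest
    return sealed, digest


def load_weights(sealed_path: Path, key: bytes, expected_digest: str) -> bytes:
    """Decrypt, then refuse to load weights whose digest does not match."""
    raw = Fernet(key).decrypt(sealed_path.read_bytes())
    if hashlib.sha256(raw).hexdigest() != expected_digest:
        raise RuntimeError("model weights failed integrity check")
    return raw


if __name__ == "__main__":
    Path("model.bin").write_bytes(b"\x00" * 1024)  # stand-in for real weights
    key = Fernet.generate_key()                    # in practice: an HSM or KMS
    sealed, digest = seal_weights("model.bin", key)
    assert load_weights(sealed, key, digest)
```

In a real deployment the key would come from a hardware security module or key‑management service, and the expected digest would be pinned in a signed manifest; the point is only that “protecting model artifacts” decomposes into confidentiality at rest plus an integrity check at load time.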

The Five Things You Need to Know

1. The Director of the NSA must deliver an initial report to the congressional intelligence committees within 180 days of enactment and a final report within 365 days.

2. The guidance must include unclassified, detailed best practices and may include a classified annex plus classified materials specifically for security briefings to service providers.

3. Required mitigation topics include protection of model weights and artifacts, personnel vetting to address insider threats, network access controls, and counterintelligence/anti‑espionage measures.

4. The NSA must consult industry leaders, review publicly available industry documents, interview experts, host roundtables and site visits, and leverage National Laboratories and federally funded R&D centers.

5. The statute’s definition of “covered artificial intelligence technologies” targets systems that could match or exceed human expert performance in domains such as CBRN matters, cyber offense, model autonomy, persuasion, R&D, and self‑improvement.

Section-by-Section Breakdown


Section 1

Short title

A single line establishing the Act’s public name: the Advanced Artificial Intelligence Security Readiness Act of 2025. This is administrative, but it signals congressional intent to tie AI security work to a defined statutory program.

Section 2(a)

Mandate to develop AI security guidance

Directs the NSA Director, acting through the Artificial Intelligence Security Center (or a successor office), to develop and distribute guidance that identifies vulnerabilities in covered AI technologies and their supply chains, with emphasis on cyber risks and threats from foreign actors. Practically, this converts the Center into the federal focal point for articulating AI‑specific cybersecurity risk and recommending operational mitigations for both government and private operators.

Section 2(b)–(c)

Required content and format of guidance

Specifies that guidance must identify AI‑specific vulnerabilities and supply‑chain elements whose compromise would materially aid an adversary. It enumerates mitigation strategies—protecting model artifacts, insider threat controls, network access control, counterintelligence measures—and requires an unclassified set of detailed best practices while allowing a classified annex and classified briefing materials. For implementers, this means there will be both publicly shareable playbooks and sensitive briefings tailored to critical providers.
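The bill stops at naming these mitigation strategies; how an implementer realizes them is left open. As one purely hypothetical illustration of the insider‑threat and access‑control items (the principal names and allow‑list below are invented), a minimal Python gate that both restricts and audits reads of model artifacts might look like this:

```python
import logging
from datetime import datetime, timezone

# Hypothetical allow-list: principals cleared to read model-weight artifacts.
WEIGHT_READERS = {"training-pipeline", "eval-harness"}

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("artifact-audit")


def authorize_read(principal: str, artifact: str) -> bool:
    """Deny by default, and log every access attempt for later review."""
    allowed = principal in WEIGHT_READERS
    audit.info(
        "%s principal=%s artifact=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), principal, artifact, allowed,
    )
    return allowed


if __name__ == "__main__":
    assert authorize_read("training-pipeline", "model-weights-v3")
    assert not authorize_read("contractor-laptop", "model-weights-v3")
```

Real systems would back this with an identity provider and tamper‑evident log storage, but the pattern, deny by default and record every attempt, is the core of the insider‑threat controls the section enumerates.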

Section 2(d)

Stakeholder engagement and information sources

Requires active engagement with prominent AI developers and researchers (via document review, interviews, roundtables, and site visits) and mandates leveraging National Laboratories, university‑affiliated research centers, and federally funded R&D centers. It also instructs consultation with Commerce (BIS), NIST’s AI center, DHS, and DOD. This creates a structured process for input but gives the Director discretion over who qualifies as a ‘prominent’ stakeholder.

Section 2(e)

Reporting deadlines and public/unclassified deliverables

Requires an initial status report to the congressional intelligence committees within 180 days and a final report within 365 days. The final report must include an unclassified version suitable for private‑sector dissemination and a publicly available version, and may include a classified annex. These deliverables are concrete milestones and the mechanism by which Congress and industry will evaluate the Center’s outputs.

Section 2(f)

Key definitions to scope coverage

Sets statutory definitions for ‘artificial intelligence’ (by reference), ‘AI supply chain,’ ‘covered artificial intelligence technologies’ (high‑capability systems that would pose a grave national security threat if stolen), ‘technology theft,’ and ‘threat actors’ (nation‑states and highly resourced actors). These definitions frame which systems the guidance will target and how the Center should prioritize mitigations.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • National security and intelligence community — Gains a centralized, NSA‑led articulation of AI security risks and mitigations, improving coordinated threat assessments and operational protection across agencies.
  • Critical infrastructure and defense contractors — Will receive actionable, sector‑relevant guidance and classified briefings that can harden systems that rely on advanced AI or host sensitive model development.
  • AI security and managed‑security providers — Creates demand for services (model‑artifact protection, insider‑threat tooling, secure hosting) and clarifies where specialized security offerings are needed.
  • Research institutions and National Laboratories — Get prioritized engagement and access to federal expertise and possibly classified threat insights to inform secure research practices.
  • Congressional oversight committees — Obtain a structured reporting mechanism and unclassified products to evaluate AI security posture and federal coordination.

Who Bears the Cost

  • Private AI developers and cloud/HPC providers — Face costs to implement recommended protections (segmented environments, artifact encryption, personnel vetting) and to participate in briefings, site visits, and consultations.
  • Universities and small research labs — May need to adopt stricter personnel vetting and access controls, potentially clashing with open academic norms and increasing administrative burden.
  • NSA and participating federal agencies — Must allocate personnel and classified‑analysis capacity to develop, coordinate, brief, and maintain the guidance and annexes.
  • Small AI startups — May lack resources to implement recommended mitigations, creating competitive pressure or the need to purchase third‑party security services.
  • Cloud and managed service operators — Could face technical and contractual changes to support protected training/inference environments and model‑artifact custody.

Key Issues

The Core Tension

The central dilemma pits protecting national security by tightly securing high‑risk AI capabilities against preserving the openness, collaboration, and rapid innovation that underpin the AI ecosystem. Measures that harden AI against theft and sabotage can also restrict research partnerships, raise costs for smaller actors, and, if too much information is classified, reduce the public availability of best practices.

The bill creates a federal focal point for AI security without prescribing enforcement mechanisms, which produces both strength and ambiguity. The statute mandates guidance creation and sets reporting deadlines, but it does not itself impose compliance requirements, civil penalties, or procurement mandates.

That leaves open questions about how the guidance will translate into binding requirements (via contracts, executive orders, procurement rules, or future legislation) and whether industry will adopt recommended practices voluntarily or under pressure from government customers.

Key operational challenges include the tension between classification and information sharing: sensitive, provider‑specific threat insights belong in a classified annex or briefings, but effective private‑sector defenses also rely on timely, actionable unclassified guidance. Deciding what stays classified will determine how usable the public guidance is.

The statutory definition of “covered artificial intelligence technologies” is capability‑based and inherently vague — setting thresholds for a “grave national security threat” will require careful calibration and could privilege protection of a narrow set of systems while leaving the status of everyday commercial AI deployments ambiguous.

Finally, the bill’s emphasis on personnel vetting, site visits, and counterintelligence measures clashes with open research norms and international collaboration. Operationalizing insider‑threat mitigations and access controls in academic and commercial settings will create real costs and potential friction with innovation practices; similarly, smaller actors may lack the resources to comply with best practices, producing concentration risk where only large firms can meet the guidance’s expectations.
