Codify — Article

Advanced AI Security Readiness Act directs NSA to produce an AI Security Playbook

Directs the NSA to develop classified and unclassified guidance to defend advanced AI systems from technology theft—creating government playbook material that industry and agencies will use.

The Brief

The bill requires the Director of the National Security Agency, acting through the Artificial Intelligence Security Center, to develop an "AI Security Playbook" that identifies vulnerabilities in advanced AI systems and recommends strategies to detect, prevent, and respond to technology theft by nation-state and other highly resourced threat actors. The Playbook must assess which AI components (models, weights, architectures, core insights) present the greatest risk and describe the security levels that might compel substantial U.S. Government involvement.

Why it matters: the measure formalizes a federal, intelligence-driven effort to define how to protect commercially developed and government AI systems from high-end theft and espionage. It yields unclassified guidance for dissemination to industry, along with the option of classified annexes and hypothetical secure-government build plans, which could shape industry practices and future federal engagement with advanced AI development.

At a Glance

What It Does

The bill directs the NSA Director to produce a Playbook that catalogs AI-specific vulnerabilities, outlines detection and response strategies, and analyzes when and how the U.S. Government would need to take a central role in securing extremely advanced AI systems. The Playbook must include an unclassified portion for broad sharing and may include a classified annex with detailed methods and assessments.

Who It Affects

Prominent AI developers, advanced AI data center operators, cloud providers, defense contractors, and agencies responsible for national security will be the primary audiences for the Playbook and likely the targets of its recommended practices. Federally funded research centers and the intelligence community will be engaged to supply expertise and technical assessments.

Why It Matters

This bill creates an authoritative, intelligence-led set of expectations about how to protect high-risk AI capabilities from theft—potentially setting norms and operational priorities for both private-sector security investments and any future secure-government development efforts.


What This Bill Actually Does

The core obligation in the bill is straightforward: the NSA Director, through the agency’s AI Security Center (or its successor), must develop an "AI Security Playbook" focused on defending advanced AI technologies from unauthorized acquisition or replication by well-resourced threat actors. That Playbook is meant to be operational: it must identify unique AI-specific vulnerability classes, flag the components whose compromise would most accelerate an adversary’s capabilities, and set out strategies for detecting, preventing, and responding to attacks.

Rather than a single document type, the Playbook is explicitly bifurcated. It must include an unclassified portion suitable for broad dissemination—general guidelines and best practices private actors can use—and it may include a classified annex containing detailed methodologies and intelligence assessments.

The bill also instructs the Director to analyze the security threshold at which protecting certain AI systems would require substantial U.S. Government involvement and to describe how a hypothetical, highly secure government-controlled development environment might be constructed (for example, protections for model weights, insider-threat mitigation, hardened access controls, and contingency plans).

To build the Playbook, the Director must actively engage the private sector and research community: reviewing industry security documents, interviewing subject-matter experts, hosting roundtables and panels, visiting AI facilities, and collaborating with a federally funded research and development center with relevant AI-security experience. The bill carves those engagements out of the Federal Advisory Committee Act to keep the process administratively nimble.

Finally, the bill mandates reporting: an initial congressional update shortly after enactment and a comprehensive submission within a specified number of days that will include a public/unclassified version and may include a classified annex.

The Five Things You Need to Know

1

The bill requires the NSA Director, via the AI Security Center, to deliver an initial Playbook progress report to congressional intelligence committees within 90 days of enactment and a final Playbook report within 270 days.

2

The Playbook must identify AI components whose compromise most accelerates adversary capability—explicitly listing models, model weights, architectures, and core algorithmic insights as items of concern.

3

The Playbook must be provided in an unclassified form suitable for private-sector dissemination and may include a classified annex containing detailed methodologies and intelligence assessments.

4

The Director must engage prominent AI developers and researchers and collaborate with a federally funded research and development center; those engagements are exempted from FACA requirements.

5

The bill defines "covered AI technologies" by capability (e.g., CBRN-related performance, cyber offense, model autonomy, persuasion, and self-improvement) and includes a rule of construction that the Playbook itself does not authorize regulatory or enforcement action by the U.S. Government.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 2(a)

Creation of the AI Security Playbook

This subsection creates the central directive: the NSA Director must develop an "AI Security Playbook" to defend covered AI technologies from technology theft. Practically, it establishes the NSA's AI Security Center (or its successor) as the coordinator and places responsibility for drafting a cross-cutting security framework squarely with the intelligence community rather than any regulatory agency.

Section 2(b)

Required Elements of the Playbook

This subsection lists the Playbook’s required contents: identification of AI-specific vulnerabilities in data centers and among developers; an inventory of components (models, weights, architectures, core insights) that materially advance adversary capability if stolen; strategies for detection, prevention, and response; an assessment of security levels that would necessitate substantial U.S. Government involvement; and an analysis describing how a hypothetical secure-government build would operate (covering protocols from cybersecurity controls to insider-threat vetting). Those elements frame both technical focus areas and policy choices about federal intervention.

Section 2(c)

Form: Classified Annex and Unclassified Guidance

The bill requires two publication forms: an unclassified portion with general guidance for dissemination to relevant private-sector actors and a classified annex for detailed methodologies and intelligence analysis. That structure is designed to let sensitive tradecraft stay classified while still giving actionable, non-sensitive direction to industry.

Section 2(d)

Mandatory Engagement and Use of FFRDC

This subsection obliges the Director to consult with prominent AI companies and researchers through document reviews, interviews, roundtables, and facility visits, and to collaborate with a federally funded research and development center that has done AI-security work. Importantly, those consultative activities are explicitly not to be treated as advisory committees under FACA, reducing procedural overhead but also limiting formal transparency mechanisms.

Section 2(e)

Reporting Deadlines and Public Versions

The bill sets two report milestones: an initial congressional progress report shortly after enactment and a final Playbook report within a set period (90 and 270 days respectively). The final report must include an unclassified version suitable for private-sector dissemination and a publicly available version, and it may carry a classified annex. These reporting timelines impose near-term deadlines on the NSA for producing actionable material.

Section 2(f)

Rule of Construction — Not a Regulatory Mandate

This clause expressly states that the analysis required about when the federal government would need to become substantially involved in AI development does not by itself authorize any regulatory or enforcement action. That limits the Playbook’s legal bite: it is framed as guidance and assessment rather than a new statutory regime of controls.

Section 2(g)

Key Definitions

The bill defines "covered AI technologies" by capability thresholds (listing examples like CBRN-related proficiency, cyber offense, model autonomy, persuasion, and self-improvement), "technology theft" broadly to include cyber and insider routes, and "threat actors" as nation-states and highly resourced adversaries. Those definitions set the scope for what the Playbook must treat as high-risk.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • National Security Agencies: The NSA and intelligence community receive an authoritative, intelligence-driven framework that clarifies priorities for protecting high-risk AI capabilities and coordinates technical assessments.
  • Private-sector AI Developers and Operators: Firms get unclassified best practices and threat prioritization from an intelligence source, which can inform corporate security investments and risk management decisions.
  • Critical Infrastructure and Defense Contractors: Operators of sensitive systems gain clearer guidance about which AI components require heightened protection, reducing ambiguity when prioritizing security upgrades.
  • Federally Funded Research Centers and Security Vendors: FFRDCs and commercial security providers can leverage government collaboration and likely contract opportunities to operationalize the Playbook’s recommendations.

Who Bears the Cost

  • AI Companies and Cloud/Data Center Operators: Firms may need to invest in hardened infrastructure, new access controls, model-weight protections, and insider-threat programs to align with Playbook guidance—costs that fall disproportionately on developers of advanced models.
  • Small AI Startups: Startups face relative compliance and operational burdens to adopt high-grade protections or to participate in engagements, with limited resources compared to large incumbents.
  • NSA and Supporting Agencies: The agency will need staff, technical expertise, and potentially classified analytic capacity to produce and maintain the Playbook, and to model hypothetical secure-build environments.
  • Researchers and Open-Source Projects: The Playbook’s emphasis on protecting core algorithmic insights and model components could pressure open research norms, imposing indirect costs on collaborative and open-science communities.

Key Issues

The Core Tension

The bill confronts a central dilemma: protect exceptionally powerful AI capabilities from theft and misuse by concentrating secrecy, control, and potentially government-led development, or preserve the openness and collaboration that underpin rapid AI innovation—measures that can make those capabilities harder to secure. The Playbook approach tries to thread that needle, but its success depends on judgment calls about scope, classification, and incentives that have no clean technical or policy-only answers.

The bill creates useful clarity about an intelligence-led approach to AI security, but it leaves several operational questions unresolved. First, the definition of "covered AI technologies" is capability-based and therefore inherently fuzzy: determinations about what "poses a grave national security threat" will require continuous technical judgment and could shift as model capabilities advance. That ambiguity affects who must act and when.

Second, the Playbook's bifurcated publication model (unclassified guidance plus a classified annex) balances secrecy with dissemination, but it risks producing two tiers of protection: high-fidelity tradecraft that only the government and cleared contractors see, and coarser guidance for the broader market. Translating classified mitigation techniques into robust, publicly usable controls will be technically and politically difficult.

Implementation also raises trade-offs between security and innovation. The bill directs deep engagement with industry but exempts those engagements from FACA, which expedites consultation but reduces formal transparency and public accountability.

The required hypothetical secure-government build—covering vetting, compartmentalization, and model-weight protections—may be feasible in narrow cases but would be expensive and operationally complex at scale, raising questions about when federal involvement is proportionate.

Finally, because the bill explicitly refrains from granting regulatory or enforcement authority, the Playbook's recommendations will be persuasive rather than mandatory; their uptake will depend on industry willingness and incentives, leaving some high-risk areas potentially under-protected.
