The AI for America Act codifies a national strategy for artificial intelligence and creates an inclusive plan to guide how the federal government coordinates AI development, deployment, and oversight. By requiring an Action Plan to be submitted by OSTP by July 31, 2027, and updated biennially, the bill sets a formal timeline and milestones for leadership, workforce development, public-private collaboration, and security measures.
It also directs a proactive identification of regulatory barriers to AI adoption in healthcare, scientific research, transportation, and other sectors.
Within one year of enactment, the bill tasks OSTP and selected agency heads with identifying these barriers, creating a catalog that informs future regulatory reform and streamlined adoption. Separately, the bill directs NIST to prepare a report on measures to detect and prevent security risks and ideological bias in AI data, training methods, and decision outcomes, including internal review protocols, third-party audits, and public disclosure requirements.
The act defines AI using the definition from the National Artificial Intelligence Initiative Act of 2020, ensuring a consistent scope across agencies and programs.
At a Glance
What It Does
Section 2(a) requires an AI Action Plan with milestones, to be submitted by OSTP and updated at least biennially. Section 2(b) directs the identification of regulatory barriers to AI adoption in key sectors. Section 2(c) tasks NIST with a risk and bias report within one year. Section 2(d) provides the AI definition.
Who It Affects
The action plan and regulatory review affect OSTP, DOE, HHS, Transportation, NSF, NIST, and other federal agencies, plus private sector partners engaged in public-private AI initiatives.
Why It Matters
A formal, coordinated national AI strategy helps cement U.S. leadership while addressing security and bias concerns, and it creates a pathway for regulatory modernization where AI adoption lags due to barriers and uncertainty.
What This Bill Actually Does
The bill moves beyond high-level goals by imposing a structured, government-wide approach to AI policy. It requires a formal Action Plan, to be crafted by OSTP in coordination with major agencies, with clear milestones that laboratories, universities, and industry partners can align to.
The plan also emphasizes workforce development, public-private partnerships, and proactive attention to security risks and ideological bias in AI systems.
A second pillar identifies regulatory barriers that could impede AI adoption in healthcare, scientific research, transportation, and other sectors. The act calls for a one-year look-back to map these barriers, enabling lawmakers and regulators to target reforms and reduce unnecessary friction for responsible AI deployment.
Finally, the bill tasks the National Institute of Standards and Technology with producing a formal report on measures to detect and remediate bias and security risks in AI data, training, and outcomes. This includes standard-setting language around internal reviews, third-party audits, and disclosure, and ties the reporting to existing framework updates under the NIST Act.
The AI definition anchors scope across agencies, ensuring a common understanding as agencies implement the action plan and any future reforms.
The Five Things You Need to Know
The bill requires OSTP to submit an AI Action Plan by July 31, 2027, with measurable milestones.
The Action Plan must be updated at least every two years.
OSTP and agency heads identify regulatory barriers to AI adoption in healthcare, research, transportation, and other sectors within one year of enactment.
NIST must issue a report within one year on measures to detect and prevent AI security risks and ideological bias, including audits and disclosure requirements.
AI is defined consistently with the National AI Initiative Act of 2020 (15 U.S.C. 9401).
Section-by-Section Breakdown
Action Plan and milestones
The Director of the Office of Science and Technology Policy must submit an Action Plan to the House Science and Senate Commerce committees. The Plan should cover leadership, workforce development, public-private AI partnerships, and security against risks and ideological bias. It must include measurable milestones and be updated not less than every two years to reflect progress and new priorities.
Regulatory barriers identification
Within one year of enactment, the Director, in consultation with energy, health, transportation, and other agency heads, must identify regulatory barriers to AI adoption across healthcare, scientific research, transportation, and any other sectors deemed relevant. The goal is to surface obstacles that hinder AI deployment and inform targeted reform measures.
NIST risk and bias report
Within one year of enactment, the NIST Director shall report on measures to detect and prevent security risks and ideological bias in AI data, training methods, or decision outcomes. The report will discuss internal review protocols, third-party audits, and public disclosure requirements, and will reference agency criteria for assessing risks and corrective actions taken.
AI defined
The bill adopts the AI definition from section 5002 of the National Artificial Intelligence Initiative Act of 2020 (15 U.S.C. 9401), ensuring a consistent scope for policy development and implementation across agencies.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- OSTP and participating federal agencies gain a formal, accountable framework for AI policy and coordinated execution.
- AI developers and platform partners gain clearer milestones and collaboration opportunities through public-private partnerships.
- Healthcare providers and researchers seeking AI adoption benefit from a roadmap that reduces regulatory ambiguity.
- Universities and national labs involved in AI research benefit from governance and standards integration.
Who Bears the Cost
- Federal agencies will incur costs to develop, maintain, and publish the Action Plan and its updates.
- NIST and other standard-setting bodies will incur costs to develop and publish risk and bias measures, audit guidance, and disclosure protocols.
- AI vendors and industry participants may incur costs to align with risk, audit, and disclosure expectations if such measures are adopted broadly.
- Public and private entities in regulated sectors may face compliance-related costs to address identified barriers when reforms occur.
Key Issues
The Core Tension
The central tension is between pushing for bold national AI leadership and ensuring rigorous risk management, bias mitigation, and regulatory modernization without stifling innovation or creating new, uneven compliance burdens.
The bill’s approach relies on cross-agency coordination and formal reporting to drive AI policy. While that structure can accelerate leadership and transparency, it also creates potential implementation frictions if agencies differ in priorities or funding.
The identification of regulatory barriers is a useful diagnostic, but translating those findings into concrete reforms will require subsequent legislative and administrative action. The NIST risk and bias provisions set the stage for rigorous oversight, yet the success of those measures will hinge on how broadly the accompanying audits, disclosures, and corrective steps are adopted by industry and regulators.