Codify — Article

AI PLAN Act (H.R.2152) mandates strategy to counter AI-enabled financial crime

Directs Treasury, DHS, and Commerce to produce an interagency report and follow-on recommendations to defend U.S. financial systems against AI-driven fraud, deepfakes, synthetic identities, and misinformation.

The Brief

H.R.2152 requires the Department of the Treasury, the Department of Homeland Security, and the Department of Commerce to jointly produce a written strategy for defending U.S. financial markets, businesses, persons, and global supply chains from financial crimes enabled by artificial intelligence. The bill compels an initial submission within 180 days of enactment and then yearly updates, and it enumerates the types of AI-related threats that the strategy must address.

The bill matters because it turns a cross-cutting concern—AI-enhanced fraud and misinformation—into a formal interagency reporting and planning obligation. By forcing agencies to catalog existing defensive tools and quantify unmet needs, the measure creates a vehicle for Congress and agencies to prioritize investments, identify gaps in authority or capability, and produce legislative and operational recommendations to harden financial systems against emerging AI threats.

At a Glance

What It Does

The bill directs Treasury, DHS, and Commerce to submit a joint report to Congress within 180 days of enactment and then annually, describing interagency policies to defend against AI-enabled financial crime, providing inventories of available and needed resources, and addressing a specified list of AI risks. Within 90 days after each report, the same trio must deliver follow-on recommendations that include legislative proposals and best practices for government and industry.

Who It Affects

Primary actors are federal financial regulators and executive agencies (Treasury, DHS, and Commerce, plus consulted officials such as the Attorney General, the Fed Chair, the SEC Chair, the NIST Director, the USTR, and Commerce’s Under Secretary for Industry and Security). Secondary effects fall on banks, fintechs, payment processors, cybersecurity vendors, corporations with global supply chains, and law enforcement entities that investigate financial crime.

Why It Matters

This bill centralizes defensive planning for AI-driven financial threats and forces agencies to inventory both ready-to-deploy tools and capability shortfalls with budget estimates—information Congress rarely compiles in a single, actionable package. The required recommendations create a clear pathway for new legislation and operational changes targeted at AI-enabled fraud and misinformation that disrupt markets.

What This Bill Actually Does

The AI PLAN Act instructs three Cabinet-level agencies—Treasury, DHS, and Commerce—to lead a coordinated exercise: produce a joint report to Congress laying out how the U.S. will defend its financial systems from misuse of AI. The report is not meant to be a high-level overview; the bill asks for concrete outputs: descriptions of interagency policies and procedures, an inventory of immediately usable hardware, software, and other resources, and a separate inventory of the additional items, people, technologies, and estimated budgets needed to close gaps.

The statute narrows the threat landscape by listing particular risks that the report must consider: deepfakes, voice cloning, foreign election interference, synthetic identities, false flags or signals that could disrupt market operations, and more general digital fraud. The agencies must also consult a defined set of senior officials and regulators (including the U.S. Trade Representative, Attorney General, Federal Reserve Chair, NIST Director, Commerce’s Under Secretary for Industry and Security, and the SEC Chair) while preparing the report—so the product should reflect regulatory and national-security perspectives, not just Treasury’s view.

After each report lands with Congress, the three lead secretaries have 90 days to provide a companion package of recommendations: proposed statutory fixes or new authorities, plus best-practice guidance intended for American companies and government incident responders.

The statute sets an annual rhythm—an initial submission within roughly six months of enactment and yearly updates thereafter—so the deliverable becomes a recurring planning tool rather than a one-off study.

Notably, the bill focuses on defensive planning and prioritization: it inventories what’s already available, what’s missing (with estimated costs), and what legal or operational changes Congress might consider. The text does not itself create new prohibitions, funding streams, procurement authorities, or enforcement mechanisms; it creates information and a pathway for policy change through follow-on recommendations.

The Five Things You Need to Know

1. The lead requirement: Treasury, DHS, and Commerce must jointly submit a detailed report to Congress about defending U.S. financial systems from AI-enabled financial crime within 180 days of enactment, and then every year after.

2. Report content: the agencies must describe interagency policies and procedures, provide an itemized list of immediately usable resources (hardware, software, technologies), and provide a separate itemized list of needed resources, personnel, and budget estimates.

3. Threats enumerated: the law explicitly requires the report to address deepfakes, voice cloning, foreign election interference, synthetic identities, false flags/false signals that could disrupt markets, and general digital fraud.

4. Consultation requirement: the report must be developed in consultation with a specified group of officials—USTR, the Attorney General, the Fed Chair, the NIST Director, the Commerce Under Secretary for Industry and Security, and the SEC Chair—bringing regulators and national-security offices into the process.

5. Follow-on deliverable: within 90 days after each report the same three agencies must submit legislative recommendations and best-practice guidance for both government and private-sector actors to mitigate AI-driven financial crime.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1

Short title

Gives the Act the public name: the "Artificial Intelligence Practices, Logistics, Actions, and Necessities Act" (AI PLAN Act). This is purely formal but important for citations and how agencies will label any implementing documents.

Section 2(a)

Sense of Congress on AI-enabled financial crime

Declares that Congress views the use of AI by adversarial actors to commit financial crimes as a significant national and economic security risk. The language is hortatory—intended to signal priority and justify the reporting mandate—but it has no operative effects beyond framing the subsequent requirements.

Section 2(b)(1)(A)-(C)

Core report deliverables: policies, available resources, and needs

Establishes three concrete categories the joint report must include: (A) a description of interagency policies and procedures to defend U.S. financial markets, persons, businesses, and supply chains; (B) an itemized list of readily available resources (hardware, software, technologies) that can be deployed immediately; and (C) an itemized list of additional resources, personnel, technologies, and estimated budgets required to fill gaps. Practically, this will require agencies to map existing capabilities, identify gaps in procurement or staffing, and produce figures that could be used in budget requests or to prioritize grant programs.

Section 2(b)(2)

Specified risks the report must assess

Requires the report to address a targeted set of AI-enabled threats: deepfakes, voice cloning, foreign election interference, synthetic identities, false flags/false signals that disrupt markets, and overall digital fraud. By enumerating these categories, the bill narrows the analytic scope and forces agencies to tie inventories and policy recommendations to concrete threat vectors rather than generic AI risk statements.

Section 2(b)(3)

Required consultation with specified officials

Directs the lead agencies to consult a named roster of senior officials and regulators (USTR, Attorney General, Fed Chair, NIST Director, Commerce’s Under Secretary for Industry & Security, and SEC Chair) when preparing the report. This provision pulls in trade, law enforcement, regulatory supervision, standards, and export-control perspectives, increasing the likelihood the product will address cross-cutting policy levers (export controls, enforcement options, regulatory guidance) rather than only operational tools.

Section 2(c)

Follow-on recommendations to Congress and industry

Mandates that within 90 days after each report, the three secretaries submit recommendations that include suggested legislation to address identified risks and best practices for private and public incident response. Functionally, this turns the report into the informational foundation for specific policy proposals and operational guidance; it also creates a short, recurring window for Congress to receive refined legislative options tied to enumerated capability shortfalls.

Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Federal financial regulators and national-security offices — receive a consolidated, recurring intelligence and capability assessment that can inform supervision, enforcement priorities, and budget requests.
  • Banks, payment processors, and large financial institutions — obtain government-developed best practices and a clearer roadmap of defensive tools and expected industry standards for combating AI-enabled fraud.
  • Cybersecurity vendors and detection technology providers — stand to gain clearer demand signals and procurement opportunities as agencies inventory available solutions and identify capability gaps.
  • Law enforcement and fraud investigators — benefit from interagency coordination and inventories that can speed access to forensics tools, attribution capabilities, and cross-border cooperation when AI is used in scams or misinformation campaigns.
  • Corporations with global supply chains and investors — receive assessments and recommended mitigations that aim to reduce market-disrupting misinformation and false-signal events that can cause operational and financial shocks.

Who Bears the Cost

  • Treasury, DHS, and Commerce (plus consulted agencies) — must allocate staff time, expertise, and possibly contracting resources to produce the inventories, analyses, and follow-on recommendations on an annual cycle.
  • Federal agencies tasked with closing identified gaps — the bill requires budget estimates but contains no appropriation, leaving agencies to compete for limited resources in future budget cycles to fund the capabilities they identify.
  • Private-sector firms, especially smaller banks and fintechs — may face costs to implement recommended best practices or to procure defensive technologies identified as necessary, with limited insulation for compliance costs.
  • Cybersecurity contractors and service providers — while benefiting commercially, may need to scale quickly and invest in compliant offerings, absorbing upfront development and certification expenses.
  • Privacy and civil-liberties advocates and downstream users — could face indirect costs if recommendations prioritize surveillance or expansive data collection to detect AI-enabled fraud without clear privacy safeguards.

Key Issues

The Core Tension

The central dilemma is urgency versus means: the bill pushes agencies to produce timely, concrete plans and to quantify resource shortfalls, but it provides no funding or new authorities—forcing agencies to choose between rapid, possibly superficial reporting and deeper analysis that requires resources Congress has not committed to provide. That trade-off pits the need to act quickly against the reality that meaningful defense often requires new funding, clearer authorities, and international cooperation.

The bill creates an explicit information and planning obligation but stops short of funding or authority changes. That creates a familiar implementation problem: agencies will identify capability shortfalls and budget requests, but the statute does not provide resources or enforce a timeline for Congress to act.

The requirement to itemize "readily available" resources versus "needed" resources will produce useful procurement and gap analyses, but without accompanying appropriations or procurement authorities it may simply catalog unmet needs.

Operationally, the mandate intersects with existing programs and standards (FinCEN reporting regimes, SEC cyber rules, NIST AI work, export controls handled by Commerce and BIS). The text does not specify how its inventories and recommendations should dovetail with those regimes, so agencies may duplicate effort or produce inconsistent guidance.

The consultation list is broad and includes regulators with different missions, which increases legitimacy but also complicates consensus-building: trade, national-security, enforcement, and market-regulation priorities can pull in different directions. Finally, the statute is explicit about certain threat types but silent on how to handle classified intelligence, private-sector proprietary tools, or cross-border legal constraints—these practicalities could limit how candid or actionable the public report can be.
