The DETECT Act of 2025 directs the Comptroller General to produce a report evaluating the potential of artificial intelligence to assist the Internal Revenue Service in detecting tax fraud. The report must be submitted to the House Committee on Ways and Means and the Senate Committee on Finance not later than 180 days after enactment.
The act imposes only a reporting requirement, intended to inform congressional oversight and future policy considerations without authorizing or mandating any specific deployment of AI. The analysis should cover feasibility, data needs, governance considerations, risks, and implementation prerequisites that would affect future decisions about AI-enabled enforcement.
This is a governance and oversight step. By clarifying what AI capabilities could plausibly contribute to fraud detection and what constraints apply, the bill helps Congress understand whether further study, pilot programs, or policy actions would be warranted.
It does not itself provide funding, set standards, or prescribe a specific technology, but it sets the stage for informed decision-making about the IRS’s use of AI tools in tax enforcement.
At a Glance
What It Does
The Comptroller General must prepare and submit a report within 180 days of enactment analyzing the potential for AI to assist the IRS in detecting tax fraud. The report is directed to the House Ways and Means Committee and the Senate Finance Committee.
Who It Affects
Directly affects the Comptroller General (GAO), the IRS, and the two congressional committees named in the bill. Indirectly affects taxpayers and tax policy stakeholders who would be impacted by any future AI-enabled enforcement actions or oversight.
Why It Matters
This is an early, exploratory step to assess AI capabilities, data prerequisites, governance, and risk before any implementation decisions. It signals congressional oversight priorities and can inform future policy choices around technology use in tax enforcement.
What This Bill Actually Does
This bill creates a formal, data-driven moment for accountability around AI and tax enforcement. It requires the GAO to evaluate whether AI technologies could help the IRS detect tax fraud, identify what kinds of data would be needed, and assess governance and risk considerations.
The GAO must deliver its findings to the House Ways and Means Committee and the Senate Finance Committee within 180 days after enactment. The request is strictly for a report; it does not authorize funding or mandate immediate adoption of AI tools, nor does it prescribe how any future AI system would be used.
The purpose is to inform lawmakers about feasibility, trade-offs, and conditions for responsible use.
By focusing on the feasibility and governance of AI in tax enforcement, the bill aims to illuminate what is technically possible and what practical barriers might exist. The analysis should help policymakers decide whether further study, pilots, or policy action would be warranted, and it establishes a clear reporting track for oversight without pre-committing to specific technologies or outcomes.
The Five Things You Need to Know
The bill requires the Comptroller General to study AI's potential to help the IRS detect tax fraud.
The GAO must deliver the report within 180 days after enactment.
The report is due to the House Committee on Ways and Means and the Senate Committee on Finance.
This is a reporting requirement, not a funding authorization or a mandate to implement AI.
The DETECT Act is a short, focused step toward evaluating technology use in tax enforcement.
Section-by-Section Breakdown
Short title
This Act may be cited as the “Digital Evaluation for Tax Enforcement and Compliance Tracking Act of 2025” or the “DETECT Act of 2025.” The section establishes the act’s official name for reference in future legislative and oversight actions.
Report on potential for use of artificial intelligence to detect tax fraud
Section 2 directs the Comptroller General to prepare and submit a report not later than 180 days after enactment. The report must assess the potential of artificial intelligence to assist the Internal Revenue Service in detecting tax fraud and must be transmitted to the House Committee on Ways and Means and the Senate Committee on Finance. The scope is limited to evaluating feasibility, data needs, governance considerations, risks, and prerequisites for any future AI-enabled enforcement. The section authorizes no funding and does not mandate adoption of AI tools.
Who Benefits and Who Bears the Cost
Who Benefits
- GAO’s structured evaluation supports congressional oversight and adds clarity to how AI could be assessed in federal programs.
- IRS leadership and program stakeholders gain a preliminary, structured assessment of data, governance, and risk implications that could inform later decisions.
- House Ways and Means Committee staff and Senate Finance Committee staff receive an explicit, time-bound briefing on AI’s potential for fraud detection to guide oversight.
- Tax policy researchers and policy analysts benefit from a formal, auditable study framework that clarifies feasibility and constraints.
Who Bears the Cost
- GAO will incur staff time and resource use to conduct the analysis and prepare the report.
- IRS may need to provide data and access to systems or data sources as part of the GAO’s evaluation.
- House and Senate committee offices will allocate staff time to review and engage with the GAO’s findings.
- Other federal entities supporting data or governance discussions may incur minor coordination costs as needed.
Key Issues
The Core Tension
The central dilemma is whether to accelerate oversight by evaluating AI feasibility now, at the risk that an evaluative report is conflated with an expectation of future AI deployment or policy action. The study must also ensure that data governance and privacy considerations are adequately explored.
The bill creates a focused, early-stage evaluation rather than a programmatic mandate. It relies on GAO data collection and analysis, which requires interagency cooperation and access to information about potential AI capabilities.
The scope is inherently limited to a report, leaving open future decisions about funding, pilot programs, standards, or deployment. Privacy, data governance, model transparency, and accuracy considerations are raised as potential issues that the GAO may need to address in its assessment.
Implementers and policymakers should be mindful of data quality, potential biases in AI systems, and the risk that a study could overstate capabilities without concrete implementation plans.