
Algorithmic Accountability Act of 2025: FTC‑mandated impact assessments for AI systems

Requires large (and certain mid‑sized) entities to run, document, and report impact assessments for automated systems that make "critical decisions," plus a public repository and new FTC technology capacity.

The Brief

The Algorithmic Accountability Act of 2025 directs the Federal Trade Commission to write and enforce rules requiring covered entities to perform, document, and retain impact assessments for any automated decision system used to make “critical decisions” (education, employment, healthcare, housing, financial services, utilities, legal services, and similar areas). Covered entities must submit an initial summary report before deployment and annual summary reports thereafter; the FTC will publish aggregated analyses and a publicly searchable repository of a limited subset of those reports.

The bill creates concrete compliance obligations—data provenance, testing and benchmarking, differential‑impact analysis, stakeholder consultation, mitigation plans, retention rules, and reporting formats—and backs them with FTC enforcement (treated as unfair or deceptive acts) and state parens patriae authority. It also authorizes a Bureau of Technology inside the FTC and additional enforcement hires, and formalizes interagency and standards coordination.

The result is a cross‑sector baseline for auditing and transparency of algorithmic systems that materially affect consumers’ lives, with substantial operational, contractual, and disclosure implications for affected firms.

At a Glance

What It Does

Directs the FTC to adopt rules (within two years) that require covered entities to conduct ongoing impact assessments of automated decision systems used for "critical decisions," maintain documentation, file initial and annual machine‑readable summary reports, and enable a public repository of limited report data. It also establishes a Bureau of Technology at the FTC and enforcement authorities for violations.

Who It Affects

Applies to entities meeting financial or data thresholds (e.g., >$50M average annual gross receipts or custody of identifying information for >1,000,000 consumers), and to certain mid‑sized firms whose systems are used in covered entities’ critical decisions. Affected parties include platform operators, banks and lenders, healthcare providers, housing and employment service providers, model developers, and third‑party integrators.

Why It Matters

Creates an enforceable, sector‑agnostic standard for algorithmic impact assessment and reporting, pushing firms to build auditability, data provenance, and mitigation into product lifecycles. The public repository and FTC summaries will supply researchers, advocates, and regulators with comparative data while changing vendor contracting dynamics and compliance priorities across AI supply chains.


What This Bill Actually Does

The bill starts by defining key terms—an "automated decision system" (broadly, any computational process used as a basis for a decision), an "augmented critical decision process" (a process in which an automated decision system is used to make a critical decision), and a "critical decision" (categories such as education, employment, health care, housing, financial services, utilities, and legal services). Those definitions matter because they determine when the assessment and reporting obligations kick in and which systems are covered.

The core rulemaking duty falls to the FTC: within two years it must write regulations (in consultation with NIST, OSTP, NAII, standards bodies, industry, and civil society) that require covered entities to perform impact assessments before and after deploying relevant systems, keep detailed documentation for a specified period, and submit a machine‑readable initial summary report prior to deployment plus annual summary reports thereafter. The rules must specify assessment content and formats, set guidance on counting consumers and identifying third‑party recipients, and allow the FTC to tailor reporting requirements by category of critical decision or stage of development.

The statute lays out substantial content expectations for impact assessments: evaluate baseline (pre‑automation) processes when replacing them; document stakeholder consultation; record data provenance and quality; test and benchmark performance (including comparisons of test vs. deployed conditions); analyze differential performance across protected or other characteristics; document privacy and security safeguards; identify likely material negative impacts and mitigation options; and keep logs of development milestones and points of contact.

Covered entities must attempt to mitigate material negative impacts in a timely way and explain any unmitigated harms and the rationale.

To make findings useful beyond enforcement, the FTC must publish an annual public report synthesizing trends from submitted summaries and build a publicly accessible repository with a limited subset of information from summary reports (identity of reporting entity, category of critical decision, prohibited applications, data sources where possible, metrics used, and links to consumer recourse mechanisms). The bill protects full impact assessments from forced public disclosure—only the summary fields are required to be shared—while authorizing interagency sharing for standards and regulatory work.

Enforcement treats violations as unfair or deceptive acts under the FTC Act, allows FTC civil actions and remedies, and preserves state attorneys general's parens patriae authority.

The Five Things You Need to Know

1

Covered‑entity thresholds combine revenue and data criteria: a firm meets the primary threshold if it had >$50,000,000 average annual gross receipts (or >$250,000,000 equity value) over three tax years, or if it possesses identifying information for >1,000,000 consumers/households/devices; smaller entities fall in if they have >$5,000,000 in average annual gross receipts and their systems are used in augmented critical decision processes by covered entities.
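A minimal sketch of how those tests compose, with every field name hypothetical; the statute leaves key definitions (such as how consumers are counted, and what the $5,000,000 figure attaches to, read here as gross receipts) to FTC rulemaking:

```python
# Sketch of the covered-entity tests described above. All field names are
# hypothetical; the statute leaves key definitions (e.g., how to count
# consumers, what counts as "possession") to FTC rulemaking.
from dataclasses import dataclass

@dataclass
class Entity:
    avg_annual_gross_receipts: float      # 3-year average, USD
    equity_value: float                   # USD
    identified_consumers: int             # consumers/households/devices
    supplies_ads_to_covered_entity: bool  # mid-sized pathway

def meets_primary_threshold(e: Entity) -> bool:
    return (
        e.avg_annual_gross_receipts > 50_000_000
        or e.equity_value > 250_000_000
        or e.identified_consumers > 1_000_000
    )

def is_covered(e: Entity) -> bool:
    # Mid-sized pathway: >$5M (assumed: gross receipts) plus use of the
    # entity's system in a covered entity's augmented critical decision process.
    mid_sized = (
        e.avg_annual_gross_receipts > 5_000_000
        and e.supplies_ads_to_covered_entity
    )
    return meets_primary_threshold(e) or mid_sized
```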

2

Deadlines are staged: the FTC must promulgate regulations within 2 years of enactment; those regulations take effect 2 years after promulgation (so compliance obligations start after that second 2‑year window). The FTC must also stand up a public repository within 180 days after issuing regulations and update it quarterly.

3

Impact assessments must include: data provenance and quality, testing/benchmarking results, comparisons of test vs. deployed performance, differential‑impact analysis by protected or proxy characteristics, stakeholder consultation records, privacy/security evaluations, mitigation plans for likely material negative impacts, and documentation of infeasible assessment elements.

4

Summary reports: covered entities must submit an initial machine‑readable summary report prior to deployment and annual summary reports thereafter. The FTC will publish an annual synthesized report and maintain a searchable public repository showing a limited subset of report fields for consumer and researcher use.

5

Enforcement and resources: violations are treated as unfair or deceptive acts under the FTC Act; the bill authorizes a new Bureau of Technology within the FTC (headed by a Chief Technologist), requires hiring at least 50 technical staff within two years, authorizes 25 additional enforcement hires, and permits interagency information sharing with NIST, OSTP, and other regulators.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 2

Definitions that set scope

Section 2 supplies the Act's operative vocabulary. The statutory definitions are unusually broad: an "automated decision system" can be any computation used as a basis for judgment (not limited to specific AI architectures), while "critical decision" is framed by categories (education, employment, housing, health care, financial services, utilities, legal services) and left partly open for the Commission to refine by rule. Those choices make the statute flexible but require the FTC to draw clear regulatory lines in later rulemaking about edge cases and the boundaries between advisory tools and decision systems.

Section 3

FTC rulemaking, prohibitions, and covered‑entity mechanics

Section 3 makes it unlawful for covered entities to violate the regulations the FTC will promulgate, and bars third parties from knowingly assisting violations. It sets a two‑year deadline for rulemaking, requires consultation with NIST/OSTP/NAII and stakeholders, and mandates machine‑readable summary reporting. Mechanically, the section tasks the FTC with guidance on how to count consumers and what constitutes possession or control of identifying information, plus prioritization criteria for which systems get full assessments first. That shifts substantial definitional and operational work to the agency.

Section 4

Detailed impact‑assessment content requirements

Section 4 enumerates the concrete items an impact assessment must cover, to the extent possible: baseline analysis when replacing existing decision processes, stakeholder consultation documentation, data provenance and quality, privacy/security assessments, benchmarking and deployed‑vs‑test comparisons, differential performance analysis (race, sex, age, disability, socioeconomic proxies), mitigation plans for material negative impacts, training and education obligations for staff, and explicit documentation of assessment tasks that were infeasible and why. For compliance teams this is the operative checklist: it converts many best‑practice audit items into statutory expectations that will be evaluated during enforcement.
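To make the differential-performance item concrete, here is a minimal sketch of one common approach: comparing selection rates across groups and flagging large gaps. The bill does not prescribe a metric; the four-fifths-style ratio threshold and the field names below are illustrative assumptions, not statutory requirements:

```python
# Illustrative differential-performance check: selection rate by group.
# The statute does not mandate this metric; the 0.8 ratio threshold is a
# convention borrowed from employment-testing practice, used here only as
# an example of the kind of analysis an assessment might document.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, selected: bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparities(decisions, ratio_threshold=0.8):
    """Return groups whose rate falls below the threshold vs. the best group."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < ratio_threshold}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False), ("B", False)]
print(flag_disparities(sample))  # {'B': 0.375} -> group B flagged
```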

Section 5

What goes into the summary report to the FTC

Section 5 prescribes the summary report fields the FTC can require and permits additional voluntary disclosure. Required items include the covered entity’s contact info, the specific critical decision category, the system’s intended purpose, stakeholders consulted, testing methodologies and results (including differential‑impact findings), guardrails or prohibited applications, data‑source descriptions, consumer recourse mechanisms, remediation steps, and any assessment gaps claimed as infeasible. The FTC must define the report’s format and can insist on machine‑readable, standardized fields to enable aggregation and analysis.
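Because the FTC must define a machine-readable format, something like the following JSON-style structure conveys what a summary report might carry. The schema and every field name are hypothetical; the Commission will set the actual format:

```python
# Hypothetical summary-report payload. The FTC will define the real schema;
# the fields below simply mirror the statutory items listed above.
import json

summary_report = {
    "entity": {"name": "Example Lending Co.", "contact": "compliance@example.com"},
    "critical_decision_category": "financial_services",
    "intended_purpose": "Credit-line eligibility scoring",
    "stakeholders_consulted": ["consumer advocates", "internal model risk team"],
    "testing": {
        "methodology": "holdout benchmark vs. deployed traffic",
        "differential_impact_findings": "approval-rate gap of 4.2 pts by age band",
    },
    "prohibited_applications": ["employment screening"],
    "data_sources": ["credit bureau files", "application data"],
    "consumer_recourse_url": "https://example.com/appeal",
    "remediation_steps": ["threshold recalibration", "added human review"],
    "infeasible_elements": [
        {"item": "pre-automation baseline", "reason": "no legacy process existed"},
    ],
}

print(json.dumps(summary_report, indent=2))  # machine-readable output
```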

Section 6

Public reporting and a searchable repository

Section 6 directs the FTC to publish annual, machine‑readable syntheses of summary reports and to build a publicly accessible repository containing a limited subset of summary fields. The repository must be searchable, downloadable, follow federal UX/accessibility guidance, and update quarterly. The statute balances public visibility with commercial risk by allowing the Commission discretion over what summary fields are exposed, but it still requires disclosure of entity identity, decision category, data sources (to the extent possible), and consumer recourse links—information that will materially change public visibility into who automates important decisions.
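If the repository's quarterly download resembles the report structure sketched above, consuming it could be as simple as the following. The file name, fields, and JSON Lines format are assumptions; the bill requires only that the data be searchable, downloadable, and machine-readable:

```python
# Hypothetical consumer of the repository's machine-readable download.
# The bill leaves the file format to the FTC; JSON Lines is assumed here.
import json

def reports_by_category(path: str, category: str):
    """Yield repository rows matching one critical-decision category."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            if row.get("critical_decision_category") == category:
                yield row

# Example: list entities reporting housing-related systems.
# for row in reports_by_category("repository_2026Q3.jsonl", "housing"):
#     print(row["entity"]["name"], row.get("consumer_recourse_url"))
```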

Section 8

Staffing, a Bureau of Technology, and interagency sharing

Section 8 authorizes a Bureau of Technology within the FTC, led by a Chief Technologist, and empowers the Chair to hire technical staff (not less than 50 within two years) outside standard civil‑service constraints. It also authorizes 25 additional enforcement personnel and directs the FTC to negotiate information‑sharing agreements with other agencies. Practically, this provision recognizes the technical enforcement demands of the regime and attempts to resource the Commission, while also raising questions about hiring, expertise mix, and internal governance.

Section 9

Enforcement framework and state authority

Section 9 treats violations as unfair or deceptive acts under the FTC Act, giving the Commission the same remedies and procedures it has in other UDAP cases. It preserves state attorney general parens patriae suits, provides the FTC with intervention rights in state actions, and requires coordination mechanisms. That dual federal‑state enforcement model increases enforcement leverage but also creates potential coordination and forum complexity for regulated entities.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Consumers exposed to critical decisions — they gain mandated impact assessments, clearer notices, and links to contest or appeal mechanisms that could improve transparency and recourse when automated decisions materially affect housing, credit, healthcare, employment, education, or essential services.
  • Researchers, civil‑rights groups, and journalists — the standardized summary reports and public repository create repeatable data for cross‑sector study of differential impacts, bias patterns, and mitigation effectiveness, enabling independent scrutiny and advocacy.
  • Compliance, auditing, and assurance vendors — the law creates new demand for internal impact assessment programs, external audits, stakeholder‑engagement facilitation, and tooling (data lineage, benchmarking, explainability), opening professional services and product markets.

Who Bears the Cost

  • Large platforms, banks, healthcare systems, housing and employment services, and other covered entities — they must build or expand risk‑assessment pipelines, document data provenance, run testing and differential analyses, retain records for multiple years, and produce machine‑readable reports, generating ongoing operational and legal costs.
  • Model developers and third‑party vendors — covered entities may require suppliers to hand over training datasets, performance logs, and proprietary design details for impact assessments; vendors will face increased contractual obligations and may need to redesign products or impose licensing restrictions.
  • FTC and state enforcement apparatus — regulators must stand up technical review capacity, maintain the public repository, and manage interagency coordination; these are resource‑intensive tasks, and the bill authorizes funding but creates a heavy administrative load that can affect timelines and enforcement priorities.

Key Issues

The Core Tension

The central dilemma is balancing robust consumer protection—transparency, auditability, detection and mitigation of material harms—with preserving legitimate commercial confidentiality, security, and incentives for innovation. Requiring thorough assessments and public summaries helps identify and reduce harm, but it also forces firms to disclose operational and data details that are often proprietary or sensitive, and it runs into practical limits where data or attribution are unavailable; the Commission's rulemaking will need to resolve this trade‑off without creating perverse incentives or unworkable compliance burdens.

The bill resolves some accountability gaps by making assessments mandatory, but it creates several implementation tensions and open questions. First, many assessment tasks require data that covered entities do not control—models built by third parties, proprietary training datasets, or deployment details held by integrators—so the statutory requirement that covered entities document provenance and testing will often force complex contract renegotiations or compel disclosure of material that vendors claim as trade secrets.

The statute permits covered entities to document infeasibility, but the FTC will need clear, defensible standards for when an assessment element is truly infeasible versus inadequately pursued.

Second, the law asks for demographic and differential‑impact analyses while simultaneously recognizing privacy and legal limits on collecting sensitive attributes. The bill allows proxies (ZIP Code, etc.) and contemplates infeasibility exceptions, but proxy use raises statistical and fairness challenges (measurement error, ecological fallacy).

Regulators will need to decide acceptable methodologies for inferring protected attributes, what confidence thresholds suffice, and how to weigh trade‑offs between privacy and the ability to detect disparate impacts.

Third, the public repository and standardized machine‑readable reporting increase transparency but risk exposing commercially sensitive information or exploitable system details (attack surface, data sources). The Commission must balance consumer‑facing usefulness against creating a roadmap for gaming or reverse engineering.

Finally, practical enforcement will rest on the FTC’s ability to hire technical staff and develop standards; the statute authorizes hires and a Bureau of Technology, but the efficacy of enforcement depends on recruiting relevant expertise, producing rigorous guidance, and coordinating with other regulators without creating overlapping enforcement that confuses regulated entities.
