Codify — Article

Algorithmic Accountability Act of 2025 would require FTC impact assessments of AI systems

Creates an FTC-led regime for algorithmic impact assessments, public reporting, and a searchable repository—shaping compliance for large tech vendors and critical-decision systems.

The Brief

The bill directs the Federal Trade Commission to establish a regulatory framework requiring covered entities to study and report the consumer impacts of certain algorithms used in consequential or critical decisions. It mandates structured reporting, stakeholder consultation, mitigation plans for material harms, and a publicly accessible registry of key information about deployed systems.

For legal and compliance teams, the measure signals an operational shift: affected organizations will need to document development and testing, preserve records for a multi-year period, prepare machine-readable summary reports for the Commission, and anticipate enforcement actions under the FTC’s unfair-or-deceptive authority.

At a Glance

What It Does

The Act directs the FTC to write rules obligating regulated organizations to perform ongoing impact assessments of algorithms that affect consequential decisions, produce summary reports for the agency, and make a limited set of information available to the public. It also tasks the agency with building a searchable repository and standing up technical capacity to evaluate and enforce compliance.

Who It Affects

The requirement targets organizations that develop or deploy complex computational systems used to influence important outcomes—particularly larger firms meeting specified revenue, equity, or data-holdings thresholds—and smaller vendors whose products are routed into those larger entities’ decision flows.

Why It Matters

This creates a formal, cross-sector supervisory regime for algorithmic systems where none currently exists at the same scale, standardizing documentation, testing, and remediation expectations, and producing public data useful to researchers, regulators, and impacted communities.


What This Bill Actually Does

The Act defines a "covered algorithm" broadly to include machine-learning, natural language processing, and similarly complex computational processes when they produce, rank, recommend, decide, or facilitate human decisions tied to consequential outcomes. It also sets out categories of "critical decisions" (education, employment, housing, healthcare, financial services, utilities, legal services, family planning, and similar high-impact areas) that trigger special attention.

The statutory definitions will guide what systems and contexts fall into scope for the mandatory assessments.

Covered entities are identified through a tiered threshold structure that combines financial size, equity valuation, and the scope of identifying information held (the bill uses a 1,000,000-consumer/data-subject benchmark). The rules will also sweep in entities that develop systems for use by covered entities and entities that previously met thresholds within a multi-year look-back period.

The Commission must produce guidance on how to calculate these thresholds and how to count consumers, households, or devices when assessing covered-entity status.

The bill details the regulations the Commission must write: they must require pre-deployment and post-deployment impact assessments, retention of assessment documentation for a period beyond the algorithm’s deployment, stakeholder consultation (internal and external), privacy and security reviews, performance testing (including differential performance analysis across demographic or proxy groups), and documented mitigation steps for identified material negative impacts. Summary reports, submitted initially before deployment and annually thereafter, are to be machine-readable and follow a Commission-prescribed structure that captures the system’s intended purpose, testing methods and metrics, data sources, transparency and recourse mechanisms, and any remediation actions taken or declined.

The Commission must make a curated subset of the summary-report information available through a publicly accessible, searchable repository designed for discoverability and machine download.
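The bill leaves metric choice to the Commission’s rulemaking. Purely as an illustration, a differential performance analysis across demographic or proxy groups could start from per-group selection rates and the largest gap between them; all names and data below are hypothetical:

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group selection (favorable-decision) rates from (group, decided_positive) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        if positive:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def selection_rate_gap(rates):
    """Largest absolute gap in selection rate between any two groups
    (a simple demographic-parity-difference style metric)."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy decision log: (proxy group label, favorable decision?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)   # {"group_a": 0.75, "group_b": 0.25}
gap = selection_rate_gap(rates)      # 0.5
```

A real assessment would cover additional metrics (error rates, calibration) and statistical uncertainty; this sketch only shows the shape of the computation.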

The repository is intended to balance consumer information needs against commercial sensitivities; the law directs the agency to consider user accessibility and to include links to consumer recourse mechanisms. The bill also creates internal technical capacity at the FTC, a Bureau of Technology headed by a Chief Technologist with an initial staffing floor, to support rulemaking, review, and enforcement, and it authorizes coordination with NIST, OSTP, and other agencies on standards and guidance.

Enforcement proceeds through the FTC’s existing unfair-or-deceptive-practices machinery, and States may bring parens patriae actions.

The statute preserves nondisclosure of raw impact-assessment documents to the public while requiring the submission of summary reports to the Commission, and it mandates periodic review of the rules to adapt to changing technologies and practices.

The Five Things You Need to Know

1. The bill uses three size tests to define a covered entity: average annual gross receipts above $50 million (or $250 million equity value), possession of identifying information for over 1,000,000 consumers/households/devices, or a lower $5 million/$25 million tier for developers whose products will be used by larger covered entities.

2. The Commission must promulgate regulations in consultation with NIST, OSTP, and other stakeholders, and has a two-year window from enactment to issue those rules.

3. Covered entities must retain impact-assessment documentation for at least three years longer than the period the covered algorithm is deployed.

4. The Act establishes a Bureau of Technology inside the FTC, led by a Chief Technologist, and requires the agency to appoint at least 50 technical personnel within two years.

5. Violations are enforced under the FTC Act as unfair or deceptive acts or practices; States may pursue parens patriae suits, and the Commission can coordinate with other agencies.
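The three size tests above can be expressed as simple predicate logic. This is a sketch of the summary, not the statutory text; the function name is invented, and the FTC’s forthcoming guidance will govern how receipts and data subjects are actually counted:

```python
def is_covered_entity(gross_receipts, equity_value, data_subjects,
                      develops_for_covered_entity=False):
    """Sketch of the bill's tiered coverage tests (any one test suffices)."""
    # Test 1: financial size — receipts over $50M or equity over $250M.
    if gross_receipts > 50_000_000 or equity_value > 250_000_000:
        return True
    # Test 2: identifying information on over 1,000,000 consumers/households/devices.
    if data_subjects > 1_000_000:
        return True
    # Test 3: lower $5M/$25M tier for developers supplying larger covered entities.
    if develops_for_covered_entity and (
        gross_receipts > 5_000_000 or equity_value > 25_000_000
    ):
        return True
    return False
```

The multi-year look-back for entities that previously met a threshold, and the counting rules for households versus devices, are left to the Commission’s guidance and are not modeled here.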

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 2 (Definitions)

Scope-setting terms and the triggers for coverage

This section supplies the precise legal hooks the rest of the statute uses: it defines covered algorithms, covered entities, critical decisions, and what "deploy" and "develop" mean. Practically, these definitions determine perimeter (what systems fall under the rules) and jurisdictional reach (which organizations must comply). The bill’s inclusion of proxy data (ZIP codes, inferred demographics) and a catch-all for services the Commission later designates gives the agency flexibility but also places a burden on regulated parties to map their systems against a wide list of triggers.

Section 3 (Regulations and Prohibitions)

Mandate for FTC rulemaking and legal prohibition on noncompliance

Section 3 makes it unlawful to ignore the forthcoming rules and forbids third parties from materially assisting such noncompliance. It requires the Commission to consult specified federal technical authorities and a broad set of stakeholders when writing rules. The statute preserves procedural rulemaking under the Administrative Procedure Act, so the final requirements will emerge through notice-and-comment—meaning regulated entities should prepare for evolving obligations rather than a static checklist.

Section 4 (Impact Assessment Requirements)

A prescriptive template for what assessments must analyze

Section 4 lays out the granular content the Commission can require in an impact assessment: baseline comparisons, stakeholder consultation records, privacy and security analyses, testing and differential-performance metrics, dataset provenance, consumer notice and opt-out practices, and an explicit inventory of mitigation actions taken or declined. For compliance teams, this translates into cross-functional workstreams: legal, product, data science, privacy, and external affairs must collaborate to generate auditable artifacts that address each bulleted element.

Section 5 (Summary Reports)

What must be sent to the FTC and in what form

Rather than forcing public disclosure of full assessments, the bill requires a structured summary report for each new covered algorithm (pre-deployment) and annual summaries for deployed systems. The Commission will prescribe the required fields and machine-readable formats; regulators and auditors will rely on these summaries to triage risks, identify noncompliance, and populate the public repository. Expect template-driven reporting that can be parsed at scale, with optional fields for voluntarily submitted extra detail.
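Since the Commission has not yet prescribed the schema, the following is only a guess at the shape of a machine-readable summary report; every field name and value is hypothetical:

```python
import json

# Hypothetical field names — the Commission will prescribe the actual template.
summary_report = {
    "system_name": "example-credit-screening-model",
    "report_type": "pre_deployment",  # or "annual" for deployed systems
    "intended_purpose": "rank applicants for manual underwriting review",
    "data_sources": ["application_forms", "credit_bureau_files"],
    "testing": {
        "methods": ["holdout_evaluation", "differential_performance_analysis"],
        "metrics": {"auc": 0.81, "max_selection_rate_gap": 0.06},
    },
    "transparency_and_recourse": {
        "consumer_notice": True,
        "appeal_channel": "https://example.com/appeals",
    },
    "mitigations": [
        {"issue": "selection-rate gap for one proxy group",
         "action_taken": "reweighted training data", "declined": False}
    ],
    "voluntary_detail": None,  # optional free-form slot
}

# Serialize for machine-readable submission to the Commission.
payload = json.dumps(summary_report, indent=2)
```

A fixed template like this is what makes parsing at scale feasible: standardized fields for regulatory triage, with optional slots for voluntarily submitted extra detail.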

Section 6 (Reporting and Repository)

Public-facing registry and aggregated agency reporting

The FTC must publish an accessible, searchable repository with a limited subset of summary-report information designed for consumer education, research, and oversight. The statute expects quarterly updates, downloadable results, and a consumer-friendly interface. The agency must also issue annual, machine-readable reports synthesizing trends and anonymized takeaways; those reports will inform industry standards and future rule changes.

Section 8 (Resources and Bureau of Technology)

Building technical capacity inside the FTC

To implement the program, the Act creates an internal Bureau of Technology and allows the agency to hire technical staff outside standard civil-service constraints, with a required minimum staffing ramp. That structure is intended to give the FTC the in-house expertise to audit models, review technical evidence, run workshops, and support enforcement—shifting some technical burden from contractors back into agency operations.

Section 9 (Enforcement and State Authority)

Enforcement pathways and intergovernmental coordination

The bill makes violations subject to the FTC’s unfair-or-deceptive-practices enforcement framework and preserves state parens patriae litigation. It also instructs the agency to negotiate coordination agreements with other federal regulators. Practically, this creates overlapping enforcement channels and incentives for cooperation, but also potential disputes over which agency leads a multi-jurisdictional case.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Consumers subject to critical decisions — they gain standardized disclosures and a clear, searchable pointer to recourse mechanisms and documented remediation histories, which improves transparency and the ability to contest or appeal algorithmic decisions.
  • Civil-rights and consumer-advocacy groups — the repository and aggregated FTC reports create researchable data and audits that bolster systemic investigations into discrimination or harm.
  • Independent researchers and journalists — machine-readable summaries and a searchable registry lower the barrier to empirical study of deployed systems across sectors, enabling comparative analysis and public accountability.
  • Standards bodies and federal technical agencies (NIST, OSTP) — summary reports and a secure sharing channel provide empirical inputs to craft better technical guidance and interoperable testing practices.

Who Bears the Cost

  • Large technology firms and platforms meeting the covered-entity thresholds — they will incur engineering, data-governance, legal, and third-party-audit costs to build, document, and continuously evaluate impact assessments and to produce machine-readable reports.
  • Vendors and model developers that supply algorithms to covered entities — they may need to share proprietary details or rework contracts to enable covered entities to meet documentation and testing obligations, putting pressure on IP-heavy business models.
  • The Federal Trade Commission — implementing, operating, and enforcing the program requires new technical staff, a bureau, and systems for ingesting and publishing structured reports; the bill authorizes appropriations but creates an administrative scaling challenge.
  • Smaller firms that cross thresholds or whose products are routed into covered entities — compliance may be complex and costly for entities that lack mature privacy and testing infrastructures.

Key Issues

The Core Tension

The central dilemma pits consumer protection through mandated transparency against the preservation of privacy and commercial secrets: regulators need enough data and disclosure to detect and fix harms, but too much mandated disclosure or demographic collection risks violating privacy, undermining proprietary value, and chilling innovation. There is no simple way to maximize all three goals simultaneously.

The Act forces a practical trade-off between transparency and commercial confidentiality: by mandating structured public reporting and a searchable registry it increases public accountability, but the statute also leaves latitude to the Commission to decide which fields are public and which remain internal. That discretion is necessary to protect trade secrets, yet it creates uncertainty for regulated firms about what will eventually be disclosed.

Similarly, the nondisclosure carve-out for full impact-assessment documents preserves privacy and IP, but summary reports may omit technical nuances necessary for independent validation.

Assessing algorithmic fairness requires demographic signals; the bill recognizes that demographic attributes may be sensitive or unavailable and allows for proxy use and rationale documentation. That raises a second tension: better bias measurement often depends on collecting or inferring sensitive attributes, but doing so creates privacy and legal exposure.

Implementation will hinge on how the Commission balances the need for subgroup analysis against data-protection constraints and downstream legal limits on protected-class inference. Moreover, the regime depends heavily on self-generated assessments and summary reports, making the FTC’s audit capacity, coordination agreements with other agencies, and its ability to demand evidence crucial to closing gaps between self-reporting and reality.
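As a toy illustration of proxy-based subgroup analysis with a documented rationale, as the bill contemplates when self-reported demographics are unavailable (the mapping and labels below are invented; real proxy methods such as BISG are probabilistic and calibrated, not simple lookups):

```python
# Hypothetical geographic proxy: ZIP code mapped to an inferred group label.
ZIP_TO_PROXY_GROUP = {"94110": "group_x", "10027": "group_y"}

def proxy_group(zip_code):
    return ZIP_TO_PROXY_GROUP.get(zip_code, "unknown")

decisions = [
    {"zip": "94110", "approved": True},
    {"zip": "94110", "approved": False},
    {"zip": "10027", "approved": False},
    {"zip": "10027", "approved": False},
]

# Tally (total, approved) per inferred group.
tallies = {}
for d in decisions:
    g = proxy_group(d["zip"])
    n, k = tallies.get(g, (0, 0))
    tallies[g] = (n + 1, k + (1 if d["approved"] else 0))

selection_rate = {g: k / n for g, (n, k) in tallies.items()}

# The bill expects the rationale for proxy use to be documented alongside results.
rationale = "self-reported demographics unavailable; ZIP code used as geographic proxy"
```

The privacy exposure is visible even in this sketch: computing the metric requires building and retaining an inferred-attribute mapping, which is itself sensitive data.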
