The Eliminating Bias in Algorithmic Systems Act of 2026 requires any federal agency that uses, funds, procures, develops, or oversees complex computational systems to establish an office of civil rights staffed with experts and technologists focused on algorithmic bias, discrimination, and related harms. Each office must report to congressional oversight committees within one year of enactment and every two years thereafter on the state of relevant technology, mitigation steps, stakeholder engagement, and recommended legislative or administrative actions.
The bill defines covered algorithms by technique (machine learning, natural language processing, AI, and derivations) and by effect (those that can materially affect programs, economic opportunities, or rights). It also enumerates a broad list of protected characteristics—ranging from race and disability to biometric data and income level—and charges the Department of Justice Civil Rights Division with convening an interagency working group.
The statute authorizes appropriations but does not create new enforcement mechanisms or private remedies.
At a Glance
What It Does
The bill obliges covered agencies to create an office of civil rights that includes technical staff to identify and mitigate algorithmic harms, and to deliver a detailed report to congressional committees within one year and biennially after that. It also directs the DOJ Civil Rights Division to form an interagency working group composed of those agency offices.
Who It Affects
All federal agencies as defined in 44 U.S.C. §3502 that use, fund, procure, develop, or oversee advanced computational systems; vendors and contractors who supply such systems to agencies; and individuals and communities whose access to programs, economic opportunities, or rights can be affected by automated decision processes.
Why It Matters
The bill establishes a consistent federal expectation that algorithmic systems be reviewed through a civil‑rights lens, creating a new administrative layer for oversight, procurement reviews, and interagency coordination. That shift will influence agency risk management, contracting practices, and how civil‑rights harms from AI are documented and escalated.
What This Bill Actually Does
The Act starts by defining which federal bodies must comply: any agency under the general definition in title 44 that uses, funds, procures, participates in development of, or oversees algorithms with potentially material effects. That trigger is intentionally broad: rather than limiting coverage to specific technologies or mission areas, it reaches systems that touch programs, economic-regulatory decisions, and agency-protected rights.
A central definitional choice is the treatment of 'covered algorithm.' The bill ties coverage to computational techniques—machine learning, natural language processing, and other AI-style processes—and to the potential to 'materially affect' outcomes. In practice, that will require agencies to assess whether a given system can change eligibility, access, cost, terms, availability, or enforcement related to a program, market, or legal right.
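To make the two-prong test concrete, the sketch below shows one way an agency might screen a system against it. This is a minimal illustrative sketch, not anything the bill prescribes: the technique labels, effect categories, and function names are all assumptions.

```python
# Hypothetical sketch only: the labels and helper names below are
# illustrative assumptions, not statutory text.
from dataclasses import dataclass, field

# Prong 1: technique. The bill names ML, NLP, and AI "and derivations."
COVERED_TECHNIQUES = {
    "machine_learning",
    "natural_language_processing",
    "artificial_intelligence",
}

# Prong 2: effect. Illustrative proxies for "materially affect":
# eligibility, access, cost, terms, availability, or enforcement.
MATERIAL_EFFECT_CATEGORIES = {
    "eligibility", "access", "cost", "terms", "availability", "enforcement",
}

@dataclass
class SystemProfile:
    name: str
    techniques: set = field(default_factory=set)
    potential_effects: set = field(default_factory=set)

def is_covered_algorithm(system: SystemProfile) -> bool:
    """Both prongs must be met: technique AND potential material effect."""
    uses_covered_technique = bool(system.techniques & COVERED_TECHNIQUES)
    can_materially_affect = bool(system.potential_effects & MATERIAL_EFFECT_CATEGORIES)
    return uses_covered_technique and can_materially_affect

# Example: an ML-based benefits screener would plainly meet both prongs.
screener = SystemProfile(
    name="benefits_eligibility_model",
    techniques={"machine_learning"},
    potential_effects={"eligibility", "access"},
)
assert is_covered_algorithm(screener)
```

The structural point the sketch captures is that neither prong alone triggers coverage: a statistical model with no bearing on eligibility or access, or a consequential decision made without ML, NLP, or AI techniques, would fall outside the definition.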
The statute also lists protected characteristics at length (race, sex, disability, biometric data, income level, etc.), with one notable drafting point: it excludes 'ability to pay for a specific good or service' from the income-level bar.

For agencies that meet the test, the bill requires an internal office of civil rights that explicitly includes personnel with technical expertise. Those offices must produce a public-facing report to congressional committees not later than one year after enactment and every two years thereafter.
Each report must (1) survey the state of covered algorithms in the agency’s jurisdiction and the risks they create, (2) summarize mitigation steps already taken, (3) describe stakeholder engagement with industry, advocates, academics, workers, and affected communities, and (4) offer recommendations for legislative or administrative fixes as the office head deems appropriate.

Finally, the statute makes the DOJ Civil Rights Division responsible for standing up an interagency working group within one year. Every agency office created under the Act will be a member; the group’s role is coordination, shared learning, and cross-agency policy development.
The bill authorizes funds 'as may be necessary' but leaves appropriation decisions to Congress.
The Five Things You Need to Know
1. The bill defines a 'covered algorithm' as a computational process using machine learning, natural language processing, or AI techniques (or derivations of them) that can materially affect agency programs, economic opportunities, or rights.
2. Covered agencies must staff an office of civil rights that includes experts and technologists focused on bias, discrimination, and other algorithmic harms.
3. Offices must file a report to congressional committees within one year of enactment and every two years thereafter, covering the state of technology, mitigation steps, stakeholder engagement, and recommended legislative or administrative actions.
4. The protected‑characteristic list is extensive and explicit—examples include race, sex (including sexual orientation and gender identity), disability, biometric data, and income level (with a carve‑out excluding ability to pay for a specific good or service).
5. Within one year, the DOJ Civil Rights Division must convene an interagency working group composed of the newly created agency offices to coordinate policy and share best practices.
Section-by-Section Breakdown
Every bill we cover gets an analysis of its key sections.
Short title
Gives the Act the public name 'Eliminating Bias in Algorithmic Systems Act of 2026.' This is purely formal but signals the statute's focus on civil‑rights harms from algorithmic systems and frames later interpretive questions for agencies and courts.
Definitions — agency and covered agency
Section 2 adopts the standard statutory term 'agency' from 44 U.S.C. §3502 and then turns it into a threshold test: a 'covered agency' is any agency that uses, funds, procures, participates in development of, or oversees algorithms. That expansive phrasing means regulatory bodies and funding agencies—especially those involved in grants, loans, licensing, or program administration—will need to determine whether their activities bring systems within the Act’s scope.
Definitions — covered algorithm and protected characteristics
The bill ties coverage to two elements: technique (ML, NLP, AI, or derived processes) and effect (the capacity to materially affect programmatic outcomes, market access, or rights). The protected characteristics list is unusually broad, including biometric information and income level while explicitly excluding 'ability to pay' for a specific good or service; that phrasing will shape how agencies evaluate equal‑treatment concerns versus ordinary means‑tested determinations.
Offices of civil rights and required reports
Subsection (a) requires each covered agency to maintain an office of civil rights that employs both civil‑rights specialists and technologists focused on algorithmic harms. Subsection (b) prescribes the reporting cadence and the four substantive report elements—state of the field, mitigation steps, stakeholder engagement, and recommendations—placing documentation, transparency, and cross‑stakeholder consultation at the center of compliance rather than prescribing specific technical standards or audit protocols.
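As an illustration only, the reporting obligation can be read as a fixed cadence plus four required elements. The hypothetical sketch below restates it in that form; every name and constant is invented for clarity, not drawn from the statute.

```python
# Hypothetical sketch only: field names and constants are illustrative.
from dataclasses import dataclass

@dataclass
class BiennialCivilRightsReport:
    state_of_covered_algorithms: str  # (1) systems in scope and their risks
    mitigation_steps: str             # (2) steps the agency has already taken
    stakeholder_engagement: str       # (3) industry, advocates, academics,
                                      #     workers, affected communities
    recommendations: str              # (4) legislative or administrative fixes

# Cadence described in the bill: first report within one year of
# enactment, then every two years.
FIRST_REPORT_YEARS_AFTER_ENACTMENT = 1
REPORT_INTERVAL_YEARS = 2

def report_due_years(enactment_year: int, count: int = 3) -> list:
    """Return the first few calendar years in which reports are due."""
    first = enactment_year + FIRST_REPORT_YEARS_AFTER_ENACTMENT
    return [first + i * REPORT_INTERVAL_YEARS for i in range(count)]

# e.g., enactment in 2026 -> reports due in 2027, 2029, 2031
assert report_due_years(2026) == [2027, 2029, 2031]
```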
Interagency coordination and funding
The Assistant Attorney General for Civil Rights must convene an interagency working group within one year; each agency office created under the Act will participate. The group’s mandate is coordination and shared learning rather than enforcement. Subsection (d) authorizes 'such sums as may be necessary' for agencies to implement the Act, but funding still requires appropriation action by Congress, making implementation contingent on budgetary choices.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Individuals and communities subject to agency decisions (e.g., benefits applicants, licensing applicants, people evaluated by automated enforcement): they gain a clearer administrative entry point for identifying algorithmic harms and a formal process that requires agencies to document mitigation steps and stakeholder engagement.
- Civil‑rights organizations, consumer advocates, and researchers: they receive recurring reporting and an institutionalized interlocutor inside agencies, improving access to information, data on agency risk assessments, and opportunities to influence remediation or regulatory proposals.
- Agency program managers and legal teams: they gain structured internal capacity (technologists and civil‑rights specialists) to identify and manage compliance risks, which can reduce legal and reputational exposure if implemented well.
- Policymakers and Congress: the mandated reports create a regular, comparable evidence base across agencies that can inform legislative or administrative policy responses to algorithmic discrimination.
Who Bears the Cost
- Covered federal agencies: they must hire or reassign staff with technical expertise, stand up offices, and produce substantive reports on a recurring basis—an administrative and budgetary burden, particularly for smaller agencies.
- Vendors and contractors supplying algorithms to agencies: they will face added procurement scrutiny, potential contractual requirements to support audits or mitigation, and the time/cost of responding to agency inquiries.
- Taxpayers and appropriators: the Act authorizes funding but does not appropriate it; effective roll‑out will require new appropriations to cover staffing, technical assessments, and interagency coordination.
- Small AI firms and startups: while large vendors can absorb compliance processes, smaller companies may face disproportionate costs to meet agency documentation or transparency demands and to support mitigation work.
Key Issues
The Core Tension
The central dilemma is balancing a robust civil‑rights oversight posture against operational, legal, and innovation costs. The Act pushes agencies to detect and document algorithmic harms but avoids prescribing technical standards or enforcement tools; that choice preserves flexibility, yet it risks uneven implementation, weak deterrence, and a patchwork of agency practices that may frustrate the very equity goals the law seeks to promote.
The statute establishes institutional infrastructure and reporting expectations but leaves key operational questions open. 'Material effect' is a triggering phrase that will require agencies to adopt or develop criteria for impact assessment—without uniform metrics the result may be inconsistent scope decisions across agencies. Similarly, the bill mandates technical expertise inside civil‑rights offices but does not specify minimum qualifications, reporting lines, audit methodologies, or data‑access rules, leaving significant discretionary space for agencies to interpret their obligations.
Another implementation tension concerns transparency versus confidentiality. Agencies and contractors may claim trade secrets or national‑security exemptions when reporting on system design, training data, or mitigation steps, limiting the practical usefulness of reports to outside stakeholders.
The Act also creates a potential resourcing gap: it authorizes appropriations 'as may be necessary' but relies on the annual budget process to fund hiring, technical assessments, and interagency activities. Finally, the bill stops short of creating an enforcement pathway (no private right of action, no administrative sanction specified), so the primary lever is disclosure and political pressure rather than statutory penalties—an intentional design choice that could blunt deterrence for noncompliance or slow remediation in high‑risk contexts.