This Act requires every covered federal agency—those that use, fund, procure, or oversee algorithms—to establish an Office of Civil Rights staffed with experts in bias, discrimination, and harms from algorithmic systems. It defines what counts as a covered algorithm and what protected characteristics are, setting a clear scope for oversight.
The bill also mandates regular reporting on the state of the field, mitigation efforts, and stakeholder engagement, and it creates an interagency forum to coordinate action across agencies. Finally, it authorizes funding to implement these requirements.
The core aim is to institutionalize civil rights oversight of algorithmic systems across the federal government, ensuring that protections against bias and discrimination are integrated into how algorithms are developed, deployed, and evaluated. By requiring periodic reporting and cross-agency collaboration, the bill seeks to make algorithmic harms more visible to policymakers and the public and to inform future legislation or administrative action.
At a Glance
What It Does
The head of each covered agency must ensure the agency maintains an Office of Civil Rights focused on bias and harms in algorithmic systems, staffed with experts and technologists. The Act also requires a biennial report detailing algorithmic risks, mitigation steps, stakeholder engagement, and legislative recommendations.
Who It Affects
Federal agencies that use, fund, procure, or oversee algorithms; their contractors, grantees, and program offices; and the communities whose rights could be affected by algorithmic decisions.
Why It Matters
Establishes formal oversight for AI/ML systems within the federal government, setting a baseline for bias detection, mitigation, and accountability that informs policy and procurement decisions.
What This Bill Actually Does
The bill creates a formal Office of Civil Rights within each covered federal agency, staffed with experts in bias, discrimination, and harms from algorithmic systems. A covered algorithm is a computational process using machine learning, natural language processing, or other AI techniques that could affect rights, access, costs, or terms for individuals or communities.
The scope includes algorithms used in programs the agency funds or operates, algorithms the agency regulates, and algorithms whose development or use the agency advises on.
Within one year of enactment, and every two years after, the office must submit a comprehensive report to congressional committees detailing the current state of algorithmic technologies under the agency’s jurisdiction, steps taken to mitigate harms, and engagement with a broad group of stakeholders, including industry representatives, civil rights advocates, consumer protection groups, academics, workers’ organizations, and affected populations. The report should also contain recommendations for legislation or agency action to reduce bias linked to protected characteristics.
The bill also directs the creation of an interagency working group on covered algorithms, led by the Department of Justice’s Civil Rights Division, to coordinate across agencies. Finally, it authorizes funding to cover the costs of establishing and operating these offices and the interagency group.
The Five Things You Need to Know
The bill creates an Office of Civil Rights in every covered agency to address algorithmic bias and harms.
A covered algorithm is a computational process using ML, NLP, AI, or similar methods that can affect rights or access.
Biennial bias and harms reports must be submitted to Congress detailing risks, mitigations, and stakeholder engagement.
An interagency working group on covered algorithms will be established, with participation from the new civil rights offices.
Funding to support these offices and activities is authorized as necessary to carry out the Act.
Section-by-Section Breakdown
Short Title
The Act is titled the Eliminating Bias in Algorithmic Systems Act of 2026, signaling its general purpose: creating civil rights oversight of algorithmic systems across federal agencies.
Definitions
Key terms set the scope: an agency is defined as in 44 U.S.C. 3502; a covered agency uses, funds, procures, or oversees a covered algorithm; a covered algorithm is a computational process that uses ML, NLP, AI, or related techniques and can affect rights, access, or costs for individuals or groups based on protected characteristics. Protected characteristics include race, ethnicity, religion, sex, disability, income, age, and other federally protected attributes.
Civil Rights Offices
Each covered agency must ensure it maintains an Office of Civil Rights staffed with experts and technologists who focus on bias, discrimination, and other harms related to algorithmic systems. The office's mandate includes identifying risks tied to protected characteristics and developing mitigation strategies that align with civil rights laws and principles.
Bias, Discrimination, and Harms Reporting
Not later than one year after enactment and every two years thereafter, the office must submit a report to congressional committees with jurisdiction over the agency. The report details the state of the field, steps the agency has taken to mitigate harms from covered algorithms, and actions taken to engage with industry, civil rights advocates, consumer protection groups, academics, workers, and affected populations, plus recommendations for further action.
Interagency Working Group
The Assistant Attorney General in charge of the Civil Rights Division shall establish an interagency working group on covered algorithms, with each office of civil rights as a member. The group is intended to coordinate policy, research, and enforcement approaches across agencies to address cross-cutting algorithmic harms.
Authorization of Appropriations
The Act authorizes appropriations to covered agencies as necessary to carry out its provisions, ensuring the offices, reporting, and interagency coordination can be funded and sustained.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- The head of each covered agency gains dedicated civil rights expertise to oversee algorithms within their programs.
- Communities and individuals protected by federal civil rights laws who could be affected by algorithmic decisions benefit from more systematic monitoring and mitigation of bias.
- Civil rights advocates, consumer protection organizations, and civil society groups gain formal avenues for engagement and input into agency AI practices.
- Academic and industry experts can contribute to the risk assessment process through stakeholder engagement provisions.
- Agency procurement and program offices benefit from clearer expectations and standardized reporting around algorithmic risk.
Who Bears the Cost
- Covered agencies must budget for and staff new Offices of Civil Rights.
- The Department of Justice’s Civil Rights Division and the interagency working group will require administrative support and coordination resources.
- Administrative and reporting overhead increases for agencies, including data collection, analysis, and stakeholder engagement activities.
- Some vendors and contractors may face enhanced transparency and accountability requirements or heightened scrutiny in program-related algorithms.
Key Issues
The Core Tension
The central dilemma is balancing rigorous civil rights oversight of powerful algorithmic systems against the risk of imposing compliance burdens that could slow beneficial innovation or reduce agency agility. The definitions and scope must be precise enough to prevent loopholes, yet flexible enough to adapt as technology evolves; funding and staffing must scale with the breadth of coverage to avoid under-resourcing critical oversight.
The Act significantly elevates civil rights oversight of algorithmic systems by embedding dedicated offices within agencies and requiring regular reporting and cross-agency coordination. Practically, this will create new compliance and data-sharing requirements across disparate programs, which could slow procurement or deployment cycles if not managed carefully.
A key design question is whether the definitions of covered algorithms and protected characteristics are sufficiently precise to prevent scope creep or gaps in coverage. The reliance on biennial reporting means agencies must invest in data collection, risk assessment, and stakeholder outreach in a manner that is consistent and auditable, which could prove challenging for smaller programs or agencies with limited AI maturity.
Finally, while the Act authorizes appropriations, it does not guarantee funding levels, potentially creating uneven implementation across agencies depending on budget decisions.