The Public Authority Algorithmic and Automated Decision‑Making Systems Bill requires public authorities in England and Wales to complete prescribed Algorithmic Impact Assessments and Algorithmic Transparency Records before they deploy or procure algorithmic or automated decision systems. The bill sets publication deadlines, requires ongoing auditing, mandates employee training and logging, and blocks use where contractual or technical barriers prevent meaningful assessment.
The legislation also compels vendors to disclose evaluation results and submit systems for evaluation on request, creates a statutory path to independent dispute resolution for affected individuals, and leaves detailed form and enforcement to regulations the Secretary of State must draft and consult on. For compliance officers, procurement leads and public officials, the bill creates a new set of pre‑deployment obligations and ongoing recordkeeping and transparency duties that will need to be written into contracts, workflows and audit programmes.
At a Glance
What It Does
The bill obliges public authorities to run and publish Algorithmic Impact Assessments and Transparency Records for systems they develop or procure; keep operational logs for at least five years; train staff to challenge system outputs; and make vendor evaluation results available. It prohibits deployment where vendors or contracts prevent effective assessment and requires an independent dispute resolution service for challenges and redress.
Who It Affects
This targets public authorities (as defined in Schedule 2) deploying or procuring algorithmic or automated decision systems in England and Wales, suppliers and integrators to those authorities, and individuals subject to automated administrative decisions. Regulators, auditors and the Secretary of State will have rule‑making and oversight roles via regulations and statutory instruments.
Why It Matters
The bill shifts several accountability functions from opaque procurement practice into statutory duty: mandatory impact assessments, public registers and long retention of logs raise the bar for transparency and post‑hoc review. It also forces procurement teams to negotiate vendor cooperation on disclosure and external evaluation, which could reshape supplier contracts and commercial terms.
What This Bill Actually Does
The bill creates a compliance architecture around algorithmic decision‑making used by public authorities. It applies to systems a public body develops or procures and to systems in active development (excluding test‑only environments), but it explicitly exempts national security uses and simple formulaic calculators such as routine tax computations.
The Act comes into force six months after passage, and its obligations apply to covered systems from that same six‑month point.
Before an authority can use or buy a covered system, it must complete two distinct written artefacts: an Algorithmic Impact Assessment (AIA) and an Algorithmic Transparency Record (ATR). Regulations will set the required form of each.
The AIA must analyse benefits and risks, explain mitigation steps, require independent external scrutiny and include a mandatory bias assessment tied to the Equality Act and Human Rights Act. The ATR must describe the system, technical specifications, the rationale for use, how it informs administrative decisions, and human oversight arrangements.
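The prescribed form will only arrive with the regulations, but compliance teams can begin treating the ATR as structured data rather than free text. The sketch below is a minimal Python illustration of the fields the bill lists; the class name, field names and the 30‑day check are assumptions for illustration, not statutory wording.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AlgorithmicTransparencyRecord:
    # Illustrative only: no statutory schema exists yet, and the prescribed
    # form will come from Secretary of State regulations.
    system_name: str
    system_description: str          # what the system is and does
    technical_specifications: str    # model type, inputs, versions, dependencies
    rationale_for_use: str           # why the authority is deploying it
    role_in_decisions: str           # how outputs inform administrative decisions
    human_oversight: str             # who can override or challenge outputs
    completed_on: date
    published_on: date | None = None

    def publication_overdue(self, today: date) -> bool:
        """True if the 30-day publication window has passed without publication."""
        deadline = self.completed_on + timedelta(days=30)
        return self.published_on is None and today > deadline
```

A check like `publication_overdue` could feed an internal compliance dashboard; the 30‑day window itself is the one the bill specifies.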
Both documents must be published in accessible formats within 30 days of completion or of the final assessment results.

Operational requirements extend beyond paperwork. Public authorities must give public notice on a register that a decision will be made wholly or partly by algorithm, ensure employees who make final decisions have the authority and competence to override or challenge system outputs, provide meaningful personalised explanations to affected individuals, monitor outcomes for unintended consequences, validate data quality in line with the Data Protection Act 2018, and run regular audits as regulations prescribe.
All systems must log events to recognised standards, transmit logs to or hold them with the responsible authority, and retain them for a minimum of five years unless a shorter, pre‑justified period is necessary for privacy or security.

The bill also tackles vendor cooperation and discoverability: public authorities must not deploy systems where contractual or technical barriers prevent effective assessment or monitoring, and the text expressly contemplates requiring vendors to disclose evaluation results and submit systems and documentation to an AI Safety Institute on request. Finally, the Secretary of State must make the detailed AIA and ATR forms by statutory instrument after draft publication and consultation, and those regulations require affirmative approval by both Houses, which preserves parliamentary scrutiny of the delegated rules.
The Act leaves the precise remit, funding and powers of the independent dispute resolution service to further regulation or design, but it establishes that such a route must exist for challenges to algorithmic decisions.
The Five Things You Need to Know
The Act applies to algorithmic or automated decision systems a public authority develops or procures and to systems in development, subject to specific exemptions for national security and simple formulaic calculation tools.
Public authorities must complete and publish an Algorithmic Impact Assessment and an Algorithmic Transparency Record, each in a form prescribed by Secretary of State regulations, and update them when system scope or functionality changes.
All systems must implement logging that records operational events and, for decision‑support systems, whether the final human decision followed the system's recommendation; logs must be retained for at least five years unless a shorter period is pre‑justified.
The bill bars deployment where contractual, technical or IP constraints prevent effective assessment or monitoring, and it expects vendors to disclose evaluation results and submit systems for evaluation by an AI Safety Institute on request.
Regulations setting the form of impact assessments and transparency records are subject to draft publication, public consultation and affirmative approval by both Houses under the statutory instrument procedure.
Section-by-Section Breakdown
Scope, commencement and core exemptions
Section 1 sets the territorial and substantive reach: the Act extends to England and Wales, comes into force six months after passage, and applies to algorithmic and automated decision‑making systems developed or procured by public authorities from that same six‑month point. It contains two principal exclusions: national security uses and automated systems that merely compute and implement fully understood formulas (for example routine tax calculations). Practically, those exclusions will be litigated at the margins—what qualifies as a "mere calculation" or a national security use requires clear operational definitions in guidance and contracts.
Algorithmic Impact Assessments — form, content and parliamentary oversight
These provisions require public authorities to complete an AIA before deployment and to update it when a system changes. The Secretary of State must make regulations prescribing the AIA framework; the bill lists required elements including benefit‑risk analysis, privacy and safety risk assessment, mitigation steps, independent external scrutiny, and a mandatory bias assessment referencing the Equality Act and Human Rights Act. The regulations must be published in draft, consulted on, and then laid before both Houses; they are subject to affirmative resolution, giving Parliament a gatekeeping role over the detailed framework.
Transparency records, public notice and individual explanations
Public authorities must prepare an Algorithmic Transparency Record before use, publish it within 30 days, and update it on functional change. The ATR must explain the rationale for using the system, provide technical specifications, describe how automated outputs inform administrative decisions, and document human oversight. Authorities must also notify on a public register when decisions will be made wholly or partly via algorithm, and they must provide meaningful, personalised explanations to affected individuals as prescribed by regulation—an obligation that creates operational requirements for caseworkers and customer communications teams.
Governance, employee duties and training
The bill mandates that decision‑makers retain the authority and competence to challenge system outputs and requires training for employees involved in design, oversight and decision‑taking. Schedule 1 sets out interpretive principles—transparency, accountability, robustness, non‑discrimination and inclusive outcomes—that must inform training and governance. The practical result is a statutory expectation that authorities redesign internal sign‑off, escalation and oversight processes to ensure humans can supervise and, where appropriate, override automated recommendations.
Logging, retention and operational monitoring
All systems must include logging to recognised standards, transmit logs to the responsible authority, and store them for a minimum five‑year period unless a shorter, pre‑justified retention is necessary for privacy or security. For decision support systems, logs must record whether the human decision followed the algorithm's recommendation. This provision creates a durable audit trail suitable for ex post review, litigation and accountability but also raises data protection and storage cost issues that public bodies must address in project planning.
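The bill points to "recognised standards" without naming one, so any concrete schema is an assumption. A minimal Python sketch of a per‑decision log entry, and of the kind of ex post check the retained logs make possible, might look like this (field names are illustrative, not prescribed):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DecisionLogEntry:
    # Hypothetical schema: the bill requires logging to "recognised standards"
    # without naming one, so these field names are illustrative assumptions.
    system_id: str
    event_time: datetime
    case_reference: str
    system_recommendation: str
    final_human_decision: str
    followed_recommendation: bool   # required detail for decision-support systems
    decision_maker_id: str          # supports the authority-to-override duty

def override_rate(entries: list[DecisionLogEntry]) -> float:
    """Share of logged decisions where the human departed from the system's
    recommendation; the kind of review the retained audit trail enables."""
    if not entries:
        return 0.0
    overridden = sum(1 for e in entries if not e.followed_recommendation)
    return overridden / len(entries)
```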
Vendor disclosure, contractual barriers and external evaluation
Public authorities must not deploy systems where contractual, technical or IP constraints prevent effective assessment or monitoring. The statute explicitly suggests that authorities should require vendors to disclose evaluation results (including evaluations of foundation models used as components) and to submit systems and documentation to an AI Safety Institute for evaluation on request. This provision shifts responsibility for transparency upstream into procurement and contract negotiation and gives procurement teams leverage to insist on cooperation or to reject non‑compliant offers.
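In practice this is likely to become a pre‑deployment gate in the procurement workflow: if any barrier to assessment is recorded, deployment is blocked until the contract is renegotiated or the barrier removed. The sketch below is illustrative only; the barrier categories paraphrase the bill and are not a statutory test.

```python
# Illustrative pre-deployment gate, not a statutory test: the barrier categories
# below paraphrase the bill's bar on deploying systems where contractual,
# technical or IP constraints prevent effective assessment or monitoring.
ASSESSMENT_BARRIERS = (
    "contract_restricts_independent_evaluation",
    "technical_opacity_prevents_monitoring",
    "ip_terms_block_disclosure_of_evaluation_results",
)

def deployment_permitted(procurement_record: dict) -> tuple[bool, list[str]]:
    """Fail closed: any recorded barrier to effective assessment or monitoring
    blocks deployment until it is removed or renegotiated."""
    found = [b for b in ASSESSMENT_BARRIERS if procurement_record.get(b)]
    return (not found, found)

# Example: a contract clause forbidding independent evaluation blocks deployment.
ok, barriers = deployment_permitted({"contract_restricts_independent_evaluation": True})
assert not ok and barriers == ["contract_restricts_independent_evaluation"]
```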
Independent dispute resolution and redress
Section 14 requires the Secretary of State to ensure an independent dispute resolution service exists so individuals can challenge or obtain redress for algorithmic decisions. The bill sets the requirement but leaves the structure, powers, remit and funding of that service unspecified in primary text, meaning operational details will be crucial to whether the route delivers timely remedies or merely creates another procedural step.
Delegation to Secretary of State and key definitions
Multiple obligations (AIA form, ATR form, explanatory content, monitoring and audit requirements) are to be set by statutory instrument with affirmative procedure: draft publication, consultation, then both‑Houses approval. Schedule 2 supplies definitions—'public authority', 'decision support system', 'procure' and others—that gate coverage and will be central in contested cases about whether a given tool falls inside the Act.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Individuals subject to public administrative decisions — they gain statutory routes to meaningful explanations, public notice that algorithms are in use, and access to independent dispute resolution designed for algorithmic outputs.
- Civil society groups, researchers and oversight bodies — mandatory publication of Impact Assessments, Transparency Records and long‑lived logs create material for external scrutiny, auditing and research into public sector AI use.
- Procurement and compliance teams in public authorities — they receive a statutory framework that standardises pre‑deployment checks and reporting, which can reduce litigation risk and align internal processes with legal expectations.
Who Bears the Cost
- Public authorities (local councils, central government departments, arms‑length bodies) — they must resource AIAs/ATRs, staff training, logging infrastructure, audits and long‑term storage, and renegotiate vendor contracts to secure necessary disclosures.
- AI vendors and integrators supplying the public sector — they will face contractual demands to disclose evaluation results, submit systems for external review, and potentially alter IP practices, which could increase development and legal costs and affect commercial terms.
- The Secretary of State and the centre of government — responsible for drafting detailed regulations, consulting stakeholders, and establishing or accrediting an independent dispute resolution service and mechanisms (or funding an AI Safety Institute), all of which impose administrative and budgetary burdens.
Key Issues
The Core Tension
The central dilemma is the choice between accountability through transparency and the practical, commercial and security limits of that transparency: requiring full assessment, logging and external scrutiny protects individuals and public trust, but it collides with vendor IP, legitimate national security protections, and the technical limits of explaining complex models—forcing trade‑offs that regulations and contracts must resolve.
The bill advances transparency and auditability but relies heavily on delegated regulations for crucial detail: the prescribed forms for Impact Assessments and Transparency Records, the content of "meaningful" explanations, log formats and acceptable shorter retention periods. That reliance concentrates practical power in the Secretary of State and makes parliamentary approval of secondary legislation a decisive implementation step; uncertainty about those delegated instruments will complicate planning for procurement teams and suppliers until the regulations are settled.
Several implementation frictions are likely. First, the requirement to withhold deployment where contractual or IP barriers impede assessment elevates vendor cooperation to a gatekeeping function, but vendors may resist disclosure on legitimate commercial or security grounds.
Second, the bill mandates "independent external scrutiny" and submission to an AI Safety Institute on request, but it leaves the remit, resourcing and legal powers of such evaluators undefined, creating potential gaps between expectation and enforceability. Third, the demand for meaningful personalised explanations may be technically difficult for black‑box models where causal attribution is contested; compliance officers will need to translate legal requirements into practicable explanation templates without undermining the model’s utility.
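Because the required content of an explanation will be set by regulation, any template is an assumption; one practicable pattern is to separate what the system recommended, which factors mattered most, who made the final decision, and how to challenge it. A minimal sketch:

```python
# Hypothetical explanation template: the bill leaves the required content of a
# "meaningful personalised explanation" to regulations, so every field here is
# an assumption about what an authority might need to communicate.
EXPLANATION_TEMPLATE = """\
Your case ({case_reference}) was assessed with support from {system_name}.
The system's recommendation was: {recommendation}.
The factors that most influenced this recommendation were: {key_factors}.
A member of staff ({decision_maker}) reviewed the recommendation and made the
final decision: {final_decision}.
If you disagree with this decision, you can ask for it to be reviewed: {challenge_route}.
"""

def render_explanation(case_fields: dict) -> str:
    """Fill the template with case-specific values, including whichever factor
    attribution the authority's model and explanation method can support."""
    return EXPLANATION_TEMPLATE.format(**case_fields)
```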
Finally, tension with data protection must be managed: five‑year log retention is useful for accountability but may conflict with data‑minimisation and privacy requirements unless authorities pre‑define retention justifications and embed safeguards. The Act avoids setting criminal penalties or a dedicated enforcement regulator, so much of enforcement will depend on judicial review, procurement rejection, contractual remedies, and the yet‑to‑be‑designed dispute resolution service, which could limit the immediacy of corrective action in practice.