The bill requires the Secretary of State to establish an AI Authority by regulations and gives that body a coordinator role: aligning regulators, gap‑analysing existing law, monitoring risks, accrediting auditors and promoting international interoperability. It also sets statutory principles for regulation and empowers the Secretary of State to create regulatory sandboxes, mandate designated AI officers in businesses, require records and assurances about third‑party training data and IP, force labelling and consent for AI products, and permit independent audits accredited by the Authority.
For compliance officers, product teams and legal counsel the bill matters because it converts general AI governance goals into enforceable regulatory tools. It imposes new organisational obligations (a designated AI officer, record‑keeping and auditability), creates routes for innovation (regulatory sandboxes and testbeds), and centralises oversight in an Authority whose remit and powers will be set by statutory instruments — some of which require affirmative approval by both Houses of Parliament.
At a Glance
What It Does
The Secretary of State must create an AI Authority with functions including coordinating regulators, conducting gap analyses, accrediting independent AI auditors and supporting sandboxes. The bill establishes regulatory principles for safety, transparency, fairness, inclusivity and proportionality, and requires regulations to mandate designated AI officers, training‑data and IP disclosure, health labelling and accredited third‑party audits.
Who It Affects
Any UK business that develops, deploys or uses AI; firms that train models using third‑party data or IP; consumer‑facing companies required to label AI products; regulators running sandboxes; and third‑party auditors who must seek accreditation from the Authority.
Why It Matters
The bill centralises oversight and creates statutory hooks for compliance across sectors, converting policy aims into obligations that touch product development, procurement, data practices and governance. It also changes how regulators and innovators will interact by formalising sandboxes and an accreditation market for auditors.
What This Bill Actually Does
The bill starts by obliging the Secretary of State to set up an AI Authority through regulations. The Authority’s job is largely coordination: it must make sure existing regulators consider AI in their work, align regulator approaches, identify regulatory gaps, review related legislation (product safety, privacy, consumer protection), monitor risks across the economy, do horizon‑scanning with industry, support testbeds and sandboxes, accredit independent AI auditors, run public education and promote international interoperability.
The Secretary of State keeps the power to change the Authority’s functions or to dissolve it by further regulation.
Separately, the bill sets out a compact set of principles the Authority must have regard to. Those principles include safety, security and robustness; transparency and explainability; fairness and non‑discrimination; accountability and governance; and contestability and redress.
The bill also explicitly places duties on businesses in the principles: be transparent about AI use, test AI thoroughly and comply with existing laws (data protection, IP). It adds inclusivity requirements (design for older people, disabled people and lower socio‑economic groups) and a requirement that AI‑generated data be findable, accessible, interoperable and reusable (FAIR).

On operational tools, the bill requires the Secretary of State to make regulations establishing regulatory sandboxes in collaboration with relevant regulators.
The bill defines a sandbox as a small‑scale market test open to authorised firms, unauthorised firms that need authorisation, and technology partners; it requires clear objectives, consumer protection safeguards and, where activities are regulated, authorisation or registration with the relevant regulator before testing begins. The Authority is also tasked with supporting these sandboxes.

The bill forces concrete corporate governance changes by requiring regulations that mandate a designated AI officer in businesses that develop, deploy or use AI.
That officer must ensure safe, ethical, unbiased and non‑discriminatory AI use and, so far as reasonably practicable, ensure the data used are unbiased. Complementing that, the Secretary of State must make regulations requiring any person involved in training AI to supply the Authority with a record of all third‑party data and IP used and to assure the Authority that they have informed consent and meet IP obligations.
Firms supplying AI products must provide clear health warnings and give customers opportunities to give or withhold consent, and businesses must permit independent third‑party audits accredited by the Authority.

Finally, the bill sets out how these rules will be made and overseen: regulations are statutory instruments; those implementing the Authority or the core principles (sections 1 and 2) require affirmative approval by both Houses, while other regulations are subject to annulment by either House. Regulations may create offences, fees and fines, and regulations applying to devolved nations must be laid before the relevant devolved legislature.
The Five Things You Need to Know
The Secretary of State must create an ‘AI Authority’ by regulations and may later amend its functions or dissolve it by regulation.
The bill requires regulations mandating a designated AI officer for any business that develops, deploys or uses AI, with duties to ensure safe, ethical and unbiased AI and to seek to ensure unbiased training data.
Persons involved in training AI must provide the AI Authority with a record of all third‑party data and IP used and must assure the Authority they used such material with informed consent and in compliance with IP law.
Regulatory sandboxes must be open to authorised firms, unauthorised firms that need authorisation, and technology partners, require clear objectives and small‑scale tests, and demand authorisation for regulated activities before testing.
Regulations under the Act are statutory instruments; those creating the AI Authority or setting the principles require affirmative approval by both Houses of Parliament, while other regulations are subject to annulment.
Section-by-Section Breakdown
Create an AI Authority and set its coordinating functions
Section 1 directs the Secretary of State to establish an AI Authority by regulations and lists its functions: aligning regulators, carrying out gap analyses, reviewing existing laws for AI‑readiness, monitoring risks and supporting testbeds. Practically, this centralises oversight in a single body that does not itself regulate products but coordinates regulatory responses and builds infrastructure (auditor accreditation, public engagement and international alignment). The Secretary of State’s power to amend these functions or dissolve the Authority by further regulation is a built‑in flexibility that also concentrates executive control over scope and lifespan.
Statutory principles for AI regulation and business conduct
Section 2 sets the normative framework the Authority must consider: safety, transparency, fairness, accountability, contestability and proportionality. It goes beyond abstract principles by specifying business duties (transparency, testing, legal compliance) and inclusivity expectations (design for older, disabled and lower socio‑economic groups). It also requires AI‑generated data to be FAIR (findable, accessible, interoperable, reusable). These principles will guide regulator decisions and any secondary regulations, but the Secretary of State retains the power to amend them by regulation.
Regulatory sandboxes — structure and eligibility
Section 3 requires the Authority to work with regulators to build sandboxes. The bill defines a sandbox as a small‑scale test with consumer safeguards, open to authorised firms, unauthorised firms that need authorisation, and technology partners. Crucially, it requires firms testing regulated activities to be authorised or registered with the relevant regulator before testing begins. This provision formalises experimentation routes across sectors while embedding pre‑test compliance where activities touch regulated domains.
Designated AI officer: mandatory corporate role
Section 4 mandates that regulations require businesses that develop, deploy or use AI to appoint a designated AI officer whose duties include ensuring safe, ethical and non‑discriminatory AI use and, so far as reasonably practicable, ensuring training data are unbiased. The bill leaves the detail (who counts as a business, thresholds, qualifications, reporting lines and penalties for non‑compliance) to subordinate regulations but fixes the obligation as a statutory responsibility to be implemented across the economy.
Training data and IP disclosure, health warnings and accredited audits
Section 5 imposes three linked obligations by regulation: (a) anyone involved in training AI must submit records of third‑party data and IP used and assure the Authority of informed consent and IP compliance; (b) suppliers of AI products or services must provide clear health warnings, labelling and obtain or offer informed consent; and (c) businesses must allow audits by independent third parties accredited by the AI Authority. These measures aim to improve traceability, consumer awareness and external verification but will require detailed rules on scope, confidentiality safeguards and what constitutes adequate assurance.
Public engagement programme
Section 6 requires the Authority to implement a long‑term public engagement programme and to consult on effective engagement frameworks, with reference to international comparators. This embeds a domestic mandate for ongoing dialogue with citizens and stakeholders, which the Authority must operationalise through outreach, consultation and education — functions that will demand resources and measurable delivery plans.
Definitions, regulatory procedure, territorial extent and commencement
Sections 7–9 define ‘AI’ (including generative AI and systems that perceive environments, interpret data and make recommendations), set out that regulations are statutory instruments (noting that core regulations need affirmative approval while others are subject to annulment), permit regulations to create offences and penalties, require laying of devolved‑nation‑applicable instruments before their legislatures, and specify that the Act extends to the whole UK and commences on passage. These procedural rules shape parliamentary scrutiny, enforcement design and interaction with devolved competence.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Consumers and service users — clearer labelling, health warnings and a right to consent aim to increase transparency and allow people to avoid unwanted AI interventions.
- Startups and innovators engaged in safe experimentation — sandboxes and testbeds create regulated spaces to trial products with regulator guidance, potentially lowering time‑to‑market where authorisation rules are navigable.
- Regulators — the AI Authority’s coordination, gap analysis and horizon‑scanning give regulators a framework and shared intelligence to address AI across sectors.
- Independent auditors and assurance providers — the Authority’s accreditation programme creates a new market for third‑party auditing services and formal recognition for practitioners.
- Disadvantaged groups — the statutory emphasis on inclusivity and non‑discrimination creates a legal lever for designing AI to meet needs of older people, disabled people and lower socio‑economic groups.
Who Bears the Cost
- Businesses that develop, deploy or use AI — they must appoint a designated AI officer, keep records on third‑party training data and IP, provide labelling and consent mechanisms, and submit to accredited audits, imposing governance and compliance costs.
- Small and medium enterprises and startups — the administrative burden (officer appointment, documentation, audits) may be proportionally heavier for smaller organisations, even if sandboxes provide testing routes.
- Regulators and the AI Authority — coordinating activity, accrediting auditors, running public engagement and maintaining sandboxes will require funding, new capabilities and inter‑agency processes.
- Organisations supplying proprietary training data and IP — mandatory disclosure and assurances to the Authority could expose commercially sensitive datasets and increase litigation or licensing scrutiny.
- Devolved administrations — regulators in Scotland, Wales and Northern Ireland must be engaged for instruments affecting them, increasing coordination complexity and potential delay.
Key Issues
The Core Tension
The bill’s central dilemma is reconciling robust consumer protection and public trust with a flexible, innovation‑friendly regime. It formalises governance, disclosure and audit requirements that strengthen accountability while simultaneously creating sandboxes and executive powers intended to speed innovation. Yet the bill’s broad scope, and its dependence on secondary regulations, risks either over‑burdening firms or leaving protections toothless if the delegated rules are weak.
The bill converts broad governance goals into regulation but leaves many high‑stakes details to secondary instruments. Key implementation questions are unresolved: what threshold defines ‘any business which develops, deploys or uses AI’?
Without thresholds or size‑based exemptions, the obligation to appoint a designated AI officer and to provide training‑data records could sweep very widely and impose disproportionate costs. The bill requires companies to assure informed consent and IP compliance for third‑party data, but it does not specify standards of proof, confidentiality protections for commercially sensitive datasets, or how trade secrets and security concerns will be balanced against disclosure to the Authority.
Another tension is executive flexibility versus parliamentary oversight. The Secretary of State may amend core functions and principles and even dissolve the Authority by regulation, while the bill subjects only some regulations (those creating the Authority or the principles) to affirmative parliamentary approval and leaves others to negative procedures.
That design allows rapid updates but concentrates discretionary power in ministers. Enforcement is similarly under‑specified: the bill allows regulations to create offences and fines but does not set thresholds, reporting requirements, audit standards, or appeal routes for accreditation decisions, leaving the technical and legal architecture for enforcement to future instruments.