Codify — Article

VET AI Act directs NIST to publish voluntary internal and external AI assurance guidelines

Requires NIST to develop and update consensus-driven assurance specifications, and directs the Commerce Secretary to create an advisory committee and commission a capacity study—shaping how developers, deployers, and assurers demonstrate AI trustworthiness.

The Brief

This bill tasks the Director of the National Institute of Standards and Technology with producing voluntary technical guidelines and specifications for ‘‘internal’’ and ‘‘external’’ artificial intelligence assurance. The guidelines must be consensus-driven, address privacy, dataset quality, governance, documentation and testing, and be published publicly; NIST must review and update them at least every two years.

The Act also directs the Secretary of Commerce to establish an Artificial Intelligence Assurance Qualifications Advisory Committee to recommend qualifications and potential accreditation approaches for assurance providers, and to run a study of the sector that conducts AI assurances and report findings to Congress. Though the guidance is explicitly voluntary and non-prescriptive, it is structured to align with international standards and to provide a foundational conformity-assessment model for the private sector and federal agencies.

At a Glance

What It Does

Directs NIST to craft publicly available, voluntary technical specifications for conducting internal and external AI assurances, including minimum content areas and recommended methods, and to review and update the guidance at least every two years. Separately, the Commerce Secretary must form an advisory committee to recommend qualifications for assurers and commission a sector study on assurance capacity.

Who It Affects

AI developers and deployers who will use the guidelines to scope and document internal testing and to contract external evaluations; third-party assurance firms that may seek to meet the recommended qualifications; standards organizations, accreditation bodies, and federal procurement officers evaluating assurance claims.

Why It Matters

Creates a U.S.-centric blueprint for how organizations should validate and verify AI systems without imposing new statutory mandates—potentially becoming a de facto market requirement. It also targets shortcomings in assurance capacity by directing a federal study and pushing for alignment with voluntary consensus and international standards.


What This Bill Actually Does

The bill defines two assurance types: internal assurances are independent evaluations done by the organization that owns or operates an AI system but structured to reduce conflicts of interest; external assurances are independent evaluations by nonaffiliated third parties that must demonstrate technical competence and financial independence. NIST must incorporate these definitions into guidance that clarifies when and how each assurance type is appropriate given a system’s use-case and risk profile.

NIST’s guidance must enumerate specific content areas for assurance activities—privacy safeguards, harm-mitigation methods, dataset quality, documentation and provenance, and governance controls—and provide practical methodologies, from testing protocols to the minimum information to be shared with evaluators. The Director must solicit public comment, hold workshops, and publish drafts and the final framework on NIST’s website, ensuring the guidance ultimately aligns with existing voluntary consensus and international standards where feasible.

After publication, the Commerce Secretary creates an advisory committee with up to 20 members representing academia, industry (both developers and deployers), assurers, civil rights and consumer groups, labor, and accreditation bodies.

That committee will study existing licensure and certification models and recommend what qualifications, independence safeguards, and accountability measures assurers should meet. The committee must deliver its recommendations within a year and then disband.

Concurrently, the Secretary must study the market for assurance providers: personnel and technical capabilities, safeguards for confidential or proprietary data during evaluations, demand and supply dynamics, and whether existing NIST-accredited labs could perform external assurances.

The Secretary’s study must culminate in a report to Congress with recommendations to strengthen capacity and preserve confidentiality while enabling meaningful assurances.

Throughout, the statute keeps obligations voluntary and forbids prescribing specific technologies or commercial products; it also directs NIST to recommend methods to protect sensitive data shared during assurance engagements. The overall design intends to standardize assurance practices without creating statutory mandates, while informing potential future accreditation or procurement preferences through advisory recommendations and capacity analysis.

The Five Things You Need to Know

1

NIST must publish voluntary technical guidelines and specifications for internal and external AI assurance within 1 year of enactment and review them at least every 2 years.

2

The guidance must cover at minimum: consumer privacy safeguards, methods to assess and mitigate harms, dataset quality, documentation/provenance, and governance/process controls.

3

Within 90 days after NIST publishes the guidance, the Secretary of Commerce must form an Artificial Intelligence Assurance Qualifications Advisory Committee (up to 20 members) to recommend qualifications for assurers; that committee must report within 1 year and then terminate.

4

The Secretary must begin a study within 90 days of publication to evaluate the capabilities, safeguards, market demand, and feasibility of leveraging NIST-accredited facilities for external AI assurances, and deliver a report to Congress within 1 year.

5

The statute defines ‘‘nonaffiliated third party’’ with specific independence requirements (no common ownership, demonstrable financial independence, no shared employees) plus technical expertise criteria for performing external assurances.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 1

Short title

Establishes the bill’s formal name as the ‘‘Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act,’’ which signals the bill’s focus on assurance and evaluation rather than prescriptive regulatory mandates.

Section 2

Purposes

Lists three objectives: develop consensus-driven voluntary assurance guidance, use assurance to supplement trust-building and governance of AI, and advance the goals of NIST’s AI Risk Management Framework and any successor frameworks. The purposes anchor the subsequent requirements and emphasize voluntary, evidence-based practices tied to existing NIST work.

Section 3

Key definitions

Provides working definitions for core terms used throughout the Act—artificial intelligence system, deployer, developer, internal and external assurance, nonaffiliated third party, Director, and Secretary. The nonaffiliated third-party definition contains operational criteria (no common ownership, financial independence, no shared employees) and competency expectations, which will matter in any future conformity or accreditation conversations.

Section 4

NIST voluntary assurance technical guidelines and specifications

Requires NIST to develop, publish, and periodically update voluntary technical guidelines that identify consensus-driven standards and provide practical methodologies for internal and external assurance. The statute prescribes substantive content areas (privacy, harms mitigation, dataset quality, documentation and provenance, governance), recommends approaches for scoping assurance frequency and scope relative to risk, and directs stakeholder outreach with public comment and workshops. Importantly, the provision forbids prescribing specific products or technologies and requires recommendations to protect sensitive data exchanged during assurance activities.

Section 5

Advisory Committee on qualifications for assurers

Directs the Secretary to create an advisory committee of up to 20 members within 90 days of NIST publishing the guidance, with representation across higher education, developers and deployers, assurance professionals, civil rights and consumer groups, labor, accreditation bodies, and public health and safety. The committee’s task is to review licensure, certification, and accreditation case studies; recommend qualification, independence, and accountability criteria for assurance providers; and assess whether existing programs can satisfy those criteria. The committee must report within one year and then terminate, producing a bounded set of recommendations for the Secretary and Congress.

Section 6

Study and report on entities that conduct assurances

Requires the Secretary to commence a study within 90 days of the guidance’s publication to assess the assurance sector’s readiness—personnel, tools, computing and physical infrastructure, confidentiality safeguards, market demand, and capacity to follow the NIST guidance. The study also must evaluate whether existing NIST-accredited facilities could feasibly conduct external assurances. Results and recommendations to improve availability and capability must be reported to Congress within one year.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Developers of AI systems — Receive a clear, consensus-based blueprint for internal testing, documentation practices, and what external evaluators may expect, which reduces uncertainty when marketing systems or responding to procurement requirements.
  • Deployers and large purchasers (private and public) — Gain a standardized reference to assess vendor assurance claims and to structure contractual requirements or procurement preferences around demonstrated assurance practices.
  • Third‑party assurance firms and labs — Obtain a definitional basis and potential pathway to offer accredited assurance services aligned to NIST guidance, creating new market opportunities.
  • Standards organizations and accreditation bodies — Can map the NIST guidance to existing voluntary consensus standards and identify gaps for new conformity assessment programs.
  • Civil rights and consumer advocacy groups — Benefit from explicit coverage of privacy, harm mitigation, and documentation in the guidance, which provides advocacy levers and clearer expectations for accountability.

Who Bears the Cost

  • AI developers and deployers (especially startups and small firms) — Will face direct compliance costs to implement internal assurance processes, prepare documentation, and fund external evaluations if they choose to pursue them.
  • Third‑party assurance providers — Must invest in specialized personnel, tooling, and infrastructure to meet recommended qualifications and to maintain confidentiality controls, increasing operating costs.
  • NIST and Commerce (federal agencies) — Will bear administrative and operational burdens to draft, update, convene stakeholders, run the advisory committee, and carry out the mandated study and reporting tasks.
  • Procurement offices and public agencies — May need to revise solicitation templates, evaluation criteria, or contract oversight functions to incorporate voluntary assurance practices, creating short‑term administrative costs.
  • Organizations with proprietary models — Risk exposing sensitive IP during external assurance engagements and may need to pay for enhanced confidentiality safeguards or limit information shared, increasing transaction complexity.

Key Issues

The Core Tension

The central dilemma is whether a voluntary, consensus-driven assurance framework can maximize public trust and interoperability without becoming a de facto regulatory standard that imposes significant compliance costs and competitive disadvantages—i.e., standardization and accountability versus market burden and confidentiality risks.

The Act intentionally keeps NIST’s output voluntary and non-prescriptive, but voluntary guidance can become a de facto requirement when adopted in procurement, investor due diligence, or litigation. The bill does not create enforcement mechanisms, leaving open how federal agencies or courts will treat compliance with the guidance when assessing claims of due care or conformity.

This gap may produce uneven uptake: large organizations and government contractors may follow the guidance to manage risk, while smaller entities may be unable to absorb upfront assurance costs.

Operationally, the Act assumes the existence of a sufficient market of independent assurers, yet the mandated study specifically asks whether that capacity exists and whether existing NIST‑accredited labs could pivot to AI assurance. If capacity falls short, the guidance risks creating demand that outstrips supply, raising costs and delays for evaluations.

Another tension lies between confidentiality and public accountability: the statute requires methods to protect proprietary and personal data shared during assurance, but also contemplates disclosure of assurance results and corrective actions. Balancing robust, transparent reporting with protection of trade secrets and privacy will be a practical and legal challenge during implementation.
