Codify — Article

Content Origin Protection Act: provenance and deepfake standards

Establishes public-private standards for content provenance, watermarking, and enforcement to deter tampering and protect creators.

The Brief

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025 would require transparency about where digital content comes from and how it was created. It sets up standards for labeling and detecting synthetic media and establishes a framework for watermarking and provenance information.

The bill also creates a pathway for enforcement through the FTC, state attorneys general, and private rights of action. It aims to protect artists and journalists whose works are used to train AI, while guiding developers and platforms toward responsible deployment of advanced content-generation technologies.

At a Glance

What It Does

Establishes a public-private partnership under the Under Secretary of Commerce to develop consensus-based standards for content provenance information, watermarking, and detection of synthetic and synthetically-modified content, and to promote best practices.

Who It Affects

Covers large digital platforms (revenue over $50M or 25M monthly users) and tools used to create or modify content, along with content creators, publishers, and AI researchers.

Why It Matters

Creates a baseline for authenticity in digital media, strengthens rights for content owners, and positions the U.S. to lead in AI transparency and safety standards.


What This Bill Actually Does

The bill defines key terms such as content provenance information, covered content, and deepfakes to set a common language for transparency in digital media. It directs the Under Secretary of Commerce for Standards and Technology to build a public-private partnership that develops voluntary, consensus-based standards for watermarking, provenance data, and detection tools.

It also calls for grand challenges and collaboration with DARPA and NSF to improve detection capabilities and cybersecurity related to provenance tooling. The act then introduces a two-year rollout for provenance requirements: tools that primarily generate synthetic content must offer provenance labeling, and platforms must prevent removal of or tampering with provenance data, with limited exceptions for security research.

Enforcement is multi-layered—FTC oversight of overall compliance, state attorneys general, and private rights of action—creating a robust framework to deter misuse and protect owners of copyrighted content. The measure also includes public education campaigns to raise awareness about synthetic content and provenance technologies.

The Five Things You Need to Know

1

The bill authorizes a public-private partnership under the Under Secretary to develop consensus-based content provenance standards and best practices.

2

Two years after enactment, tools that create synthetic content or substantially modify content must enable provenance labeling and make provenance machine-readable where technically feasible.

3

It prohibits the use of covered content that carries provenance data for AI training or synthetic-content generation without the owner's consent and compliant terms.

4

Enforcement is shared among the FTC, state attorneys general, and private parties, with cross-state and interagency cooperation.

5

The definition of covered platforms ties liability to large, commercially active platforms (>$50M revenue or 25M MAU) to incentivize widespread adoption of provenance standards.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 2

Sense of Congress

This section frames the bill’s purpose: to address gaps in visibility into how AI systems work, what data trains them, and the absence of consensus-based standards for provenance. It argues that improving transparency and standards will enable safer deployment of AI, protect rights holders, and position the United States as a leader in developing responsible AI ecosystems.

Section 3

Definitions

Section 3 provides precise meanings for terms used throughout the bill—artificial intelligence, content provenance information, deepfakes, synthetic content, covered platform, and related concepts. These definitions unify interpretation across all provisions and set the boundaries for who and what is regulated.

Section 4

Standards Development

The Under Secretary of Commerce for Standards and Technology must lead a public-private partnership to craft consensus standards for watermarks, provenance metadata, and detection tools. The section also calls for collaboration with copyright authorities and for prize-based challenges with DARPA and NSF to spur innovation in detection and defense against tampering.

Section 5

R&D and Public Education

This section tasks the Under Secretary with funding and guiding research into measurement science, detection technologies, and security measures for content provenance. It also requires a nationwide public education campaign about synthetic content, watermarking, and provenance information to raise awareness and comprehension among content creators and platform operators.

Section 6

Provenance Requirements and Removal

Two parallel streams govern provenance: (1) for synthetic and synthetically-modified content, and (2) for covered content. Tools and platforms must offer and preserve provenance data where feasible, and it is unlawful to remove or tamper with provenance information in most cases. A limited security-research exception allows removal or tampering for defensive testing.
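To make the idea of machine-readable provenance and tamper detection concrete, here is a minimal illustrative sketch in Python. This is not the bill's mandated format or any existing standard's schema; the record fields and function names are hypothetical, and real systems would use signed manifests rather than a bare hash.

```python
import hashlib

def make_provenance_record(content: bytes, creator: str, tool: str) -> dict:
    # Bind the record to the content with a cryptographic hash,
    # so any later modification of the content is detectable.
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "generation_tool": tool,
        "synthetic": True,
    }

def provenance_intact(content: bytes, record: dict) -> bool:
    # Re-hash the content and compare against the stored digest.
    return hashlib.sha256(content).hexdigest() == record["content_sha256"]

image = b"\x89PNG...example bytes"
record = make_provenance_record(image, creator="Example Studio", tool="gen-model-x")

print(provenance_intact(image, record))         # True: content unchanged
print(provenance_intact(image + b"!", record))  # False: tampering detected
```

Even this toy version shows why the bill pairs labeling duties on toolmakers with anti-tampering duties on platforms: the record is only useful if it travels with the content and survives intact.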

Section 7

Enforcement

The FTC oversees enforcement with authorities paralleling FTC Act provisions. State attorneys general may sue on behalf of residents, and private parties can seek relief, damages, and attorney’s fees. The section also details venue, service, and the interplay between federal and state actions.

Section 8

Rule of Construction

This section clarifies that the Act does not diminish copyright rights under existing law, preserving statutory privileges and remedies outside the Act’s provisions.

Section 9

Severability

If any provision is found unenforceable, the remaining provisions stay in effect, ensuring the act remains partially operative rather than fully voided.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Artists, photographers, and other content creators whose works are used to train AI—provenance standards help protect their rights and deter unauthorized use.
  • Journalists and publishers relying on authenticity and trust, who benefit from clear labeling and defense against misleading synthetic content.
  • Copyright owners and rights holders who gain stronger control and leverage against improper use of their works in AI training or content generation.
  • AI researchers and standards bodies who gain clearer guidelines and measurable targets for detection tools and provenance verification.
  • General public and consumers who benefit from increased assurance about the authenticity of digital content.

Who Bears the Cost

  • Large covered platforms (social networks, video sites, and search engines) will face compliance obligations and potential liability for provenance mismanagement.
  • Developers and vendors of AI content-generation tools must integrate provenance-labeling capabilities and security measures, increasing their development costs.
  • Small content creators and niche platforms may experience relative burdens in adoption and interoperability with emerging standards.
  • Regulatory and enforcement resources at both federal and state levels will require funding and administrative overhead.

Key Issues

The Core Tension

Balancing rapid AI innovation and market growth with robust provenance, labeling, and protection for creators and rights holders remains the central dilemma. The more prescriptive the standards, the greater the potential constraint on developers and platforms; yet without these standards, the risk of deepfakes and opaque AI training data grows, potentially eroding trust and fair competition.

The bill presents a deliberate strategy to seed standards and tooling that can scale with advancing AI capabilities, while imposing concrete requirements on provenance labeling and protection. Real-world implementation will depend on interoperable standards, the availability of machine-readable provenance data, and the ability of platforms and toolmakers to integrate these features without stifling innovation.

The security-research exception recognizes a necessary carve-out for experiments aimed at strengthening defenses but may invite debate about how broadly to interpret it. The reliance on civil enforcement—by the FTC, states, and private parties—ensures multiple leverage points but also creates potential for fragmented litigation or inconsistent remedies across jurisdictions.
