Codify — Article

QUIET Act: AI disclosures for robocalls and penalties

Requires upfront AI-use disclosures in robocalls and texts, with doubled penalties for AI impersonation.

The Brief

The QUIET Act amends the Communications Act of 1934 to require disclosures when robocalls use artificial intelligence to imitate a human. It also introduces enhanced penalties for AI-driven impersonation, doubling the maximum forfeiture and criminal fines for violations.

The bill defines AI-enabled robocalls and text messages and sets the scope of enforcement, which applies to violations occurring after enactment.

At a Glance

What It Does

Adds a new subsection to 47 U.S.C. § 227 requiring robocalls and texts that use AI to disclose that AI is being used at the beginning of the call or message. It also defines key terms and clarifies what counts as a robocall or text message.

Who It Affects

Telecom carriers, SMS/AI messaging platforms, and entities that deploy AI to generate calls or messages, along with recipients who receive these communications.

Why It Matters

Establishes transparency for AI-driven communications and provides a framework for enforcement through enhanced penalties.


What This Bill Actually Does

The QUIET Act changes how robocalls and AI-generated messages must behave when they reach consumers. It adds a new requirement under the Communications Act that if a robocall or text message uses artificial intelligence to emulate a human, the caller must disclose that AI is being used at the start of the call or message.

This transparency measure targets deceptive practices where AI makes it hard for a recipient to tell whether they are speaking with a machine or a real person.

To make this work, the bill expands or clarifies definitions. It defines robocalls in terms of the equipment used to make calls or send texts, including AI-generated voices, and it excludes calls that require substantial human intervention.

It also broadens the concept of a text message to cover SMS, MMS, and RCS, while excluding real-time two-way voice or video communications from the text-message category.

The QUIET Act also strengthens enforcement for AI impersonation. If a violation involves AI impersonation intended to defraud, cause harm, or obtain something of value, penalties are doubled: both the forfeiture cap under the civil provisions and the fines under the relevant criminal provisions.

The increased penalties apply to violations after enactment, signaling a harsher stance against AI-driven deception while creating a clearer compliance path for industry.
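As an illustration only (the bill imposes legal duties, not a software specification), a messaging platform's compliance logic might resemble the following sketch. The function names, the disclosure wording, and the $10,000 base penalty cap are all hypothetical assumptions, not values taken from the bill text:

```python
# Hypothetical compliance sketch for the QUIET Act's two core rules:
# (1) disclose AI use at the start of a call or message, and
# (2) double the applicable penalty cap for AI-impersonation violations.
# All names and figures here are illustrative, not statutory.

AI_DISCLOSURE = "This message was generated using artificial intelligence."


def prepare_outbound_message(body: str, uses_ai: bool) -> str:
    """Prepend the AI-use disclosure when AI is used to emulate a human."""
    if uses_ai:
        return f"{AI_DISCLOSURE} {body}"
    return body


def max_penalty(base_cap: int, ai_impersonation: bool) -> int:
    """Double the cap when the violation involves AI impersonation
    intended to defraud, cause harm, or obtain value."""
    return base_cap * 2 if ai_impersonation else base_cap
```

The key design point the bill implies is that the disclosure must come first, before any substantive content, rather than being appended or buried mid-message.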

The Five Things You Need to Know

1. The bill adds a new disclosure requirement at the start of robocalls and texts when AI is used to emulate a human.

2. It expands definitions to include AI-generated voices and messages within robocalls and texts.

3. It excludes real-time two-way voice or video communications from the text-message category.

4. Penalties for AI impersonation violations are doubled (civil forfeitures and criminal fines).

5. The enhanced penalties apply to violations occurring after enactment.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 2

AI disclosure requirements for robocalls and texts

Section 2 adds a new subsection to 47 U.S.C. § 227 requiring that if a robocall uses artificial intelligence to emulate a human, the caller must disclose AI usage at the beginning of the call or text message. The provision is designed to ensure recipients know when they are interacting with AI, rather than a real person. This creates a standard for transparency in automated communications.

Section 2

Definitions of robocall and text message

The bill defines ‘robocall’ to include calls or texts produced using equipment (hardware, software, or both) that may involve an AI-generated voice or message, while excluding calls that require substantial human intervention. It also defines ‘text message’ to cover SMS, MMS, and RCS messages, and explicitly excludes real-time two-way voice or video communication from the text-message category.

Section 3

Enhanced penalties for AI impersonation

Section 3 adds a new subsection (l) to § 227 providing enhanced penalties for violations involving AI impersonation. When the AI-driven call or text is used to impersonate an individual or entity with the intent to defraud, cause harm, or obtain value, the maximum forfeiture and the maximum criminal fine are doubled compared with the existing penalties. The amendments apply to violations occurring after enactment.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • Smartphone users and other recipients of robocalls/texts who gain upfront awareness that AI is involved in the communication.
  • Telecommunications carriers and messaging platforms that must implement a clear disclosure framework, creating predictable compliance expectations.
  • Regulators such as the FCC and FTC, which gain stronger tools to deter AI-driven deception and to pursue enforcement.
  • Law enforcement and consumer protection agencies that benefit from clearer evidentiary grounds in impersonation cases.

Who Bears the Cost

  • AI-enabled robocall operators and text-message senders must build disclosure workflows and labeling into their systems.
  • Telecoms and messaging platforms incur costs to update their infrastructure, interfaces, and enforcement tooling to support the disclosures.
  • Small businesses that deploy AI-based outreach may face higher compliance costs as they adapt to the new rules.
  • Regulators may need additional resources to monitor compliance and enforce the enhanced penalties.

Key Issues

The Core Tension

The central tension is between preventing deception through AI disclosures and preserving practical, scalable communication for legitimate uses, while keeping enforcement feasible and fair across diverse platforms and providers.

The QUIET Act introduces a straightforward transparency mechanism but raises several policy and implementation questions. Key tensions include defining precisely when AI usage triggers a disclosure, especially for complex AI systems that involve multiple stages or hybrid human-AI interactions.

The scope of “robocall” and “text message” could be challenged by evolving communications channels (e.g., new messaging formats or platforms) that do not neatly fit current definitions. There is also the risk of chilling legitimate AI-assisted communications if disclosures are overly broad or technically opaque, potentially reducing legitimate outreach to consumers.

Enforcement challenges include determining how to verify AI use, monitor disclosures, and allocate resources across federal and state jurisdictions when cross-border or cross-platform communications occur. Finally, the act raises questions about the balance between consumer protection and innovation, particularly for smaller providers who may struggle to implement rapid changes.
