The Artificial Intelligence Scam Prevention Act creates new guardrails to curb fraud enabled by artificial intelligence. It makes impersonation of government, business, or official entities unlawful and bans AI-generated replication of a person’s image or voice to defraud; it also extends the Federal Trade Commission’s enforcement tools to cover these AI-enabled schemes.
The bill further updates telecommunications and telemarketing regimes to address AI-driven deception in text messages and video conferencing, and it establishes a joint FTC-FCC advisory group to coordinate preventive measures and public education. Finally, it creates enhanced reporting requirements and outreach efforts to raise awareness among consumers, with a five-year sunset for the advisory group.
At a Glance
What It Does
It prohibits AI-enabled impersonation that defrauds, extends FTC enforcement to these acts, and expands telecom regulations to cover AI-driven text messages and video calls. It also requires disclosure when AI emulates a human in telemarketing and establishes a joint FTC-FCC advisory group to coordinate prevention efforts.
Who It Affects
Regulated entities in marketing, telecom providers, payment and retail services, and government and law-enforcement partners. It directly touches consumer-facing industries—telemarketing, text messaging platforms, and gift-card/wire-transfer ecosystems—along with seniors and other vulnerable consumers who are frequent scam targets.
Why It Matters
AI-enabled scams have broadened in reach and sophistication. The bill creates enforceable rules, improves visibility into scam activity, and establishes a cross-agency forum to align industry practices and public awareness, reducing consumer harm and improving response capacity.
What This Bill Actually Does
First, the bill establishes a clear prohibition on impersonation that leverages artificial intelligence to defraud. It bars deceptive acts that impersonate government or official entities and prohibits AI-generated replication of an individual’s image or voice for fraud.
It also makes it unlawful to assist others in committing these acts. The Federal Trade Commission enforces these provisions with the same reach and penalties as other unfair or deceptive acts under the FTC Act, ensuring consistent consumer-protection standards across tech-enabled scams.
The bill also preserves existing authorities under the FTC Act and related laws, signaling that AI-specific fraud should fit within established consumer-protection regimes.
Second, the text messaging and video conferencing provisions expand the regulatory lens on how modern scams operate. The act revises definitions to explicitly include text messages and video conferences, and it requires disclosures when AI is used to emulate a human in a call or text.
The telecom and telemarketing framework is updated to ensure that messages and calls using AI are not misleading, with new disclosure obligations intended to help consumers identify when AI is at work in a communication.
Third, the bill creates a cross-cutting approach to information sharing, public education, and industry training. It amends federal communications and consumer-protection regimes to require public-facing information portals, complaint-logging processes, and targeted outreach to seniors and their families.
The bill also pilots an Artificial Intelligence Scams Advisory Group that will bring together FTC and FCC leadership with representatives from retail, telecommunications, wire-transfer services, and consumer-advocacy groups to develop best practices and model materials for preventing scams.
Together, these elements aim to reduce the incidence of AI-facilitated fraud, set clearer expectations for industry, and improve the public's ability to recognize and report scams. The measure relies on enforcement, disclosure, and education as its core levers to protect consumers while coordinating across regulatory and industry boundaries.
The Five Things You Need to Know
Section 3 bars impersonation of government, business, or official entities and AI-based replication of a person's image or voice to defraud; liability extends to those who assist others in these acts.
FTC enforcement is updated so violations are treated as unfair or deceptive acts under the FTC Act, with full powers and immunities preserved.
Text messaging and video conferencing rules are updated to include AI-related definitions and mandatory disclosures when AI emulates a human in a call or message.
A joint Artificial Intelligence Scams Advisory Group (FTC and FCC co-chairs) will develop educational materials, guidance, and best practices for retailers, payment services, and telecom providers.
FTC and FCC must report on AI-enabled scams, update public information portals, and implement robo-call and caller ID regulations within set timelines, with a five-year sunset for the advisory group.
Section-by-Section Breakdown
Findings
The bill cites the scale of AI-enabled scams and frames AI as both an opportunity and a risk. It highlights the FTC’s commitment to detecting impersonation fraud and notes that scammers can replicate voice and image with minimal samples. The findings set the stage for a policy response that prioritizes enforcement, consumer protection, and updated laws to address AI-enabled deception.
Impersonation and AI-Enabled Fraud Prevention
This section makes impersonating government, business, or officials unlawful and prohibits AI-assisted replication of a person’s image or voice to commit fraud. It directs the FTC to enforce these provisions with existing authorities, preserving powers and immunities, and it extends accountability to those who aid others in such acts. The intent is to close gaps where AI makes impersonation easier and more convincing.
Text Messaging and Video Conferencing
The act expands definitions under the Telemarketing and Consumer Fraud and Abuse Prevention Act and the Communications Act to explicitly cover text messaging and video conferencing. It requires disclosures when AI is used to emulate a human in a call or text, and it introduces text-message-specific definitions to ensure clarity around SMS, MMS, and RCS. The changes are designed to curb deception in modern communication channels while preserving legitimate services.
Reporting, Advisory Group, and Robo-Call Provisions
Section 5 creates an FTC-FCC advisory group to coordinate prevention efforts and develop education materials, sets reporting duties to Congress, and assigns timelines for agency rulemaking around robo-calls, caller ID, and related disclosures. It also adds reporting requirements to track AI-enabled scams and to inform public and private sector coordination. A sunset of five years is attached to the advisory group.
Who Benefits and Who Bears the Cost
Every bill creates winners and losers. Here's who stands to gain and who bears the cost.
Who Benefits
- Consumers, especially seniors and other vulnerable groups, gain clearer protection and better access to information about AI-enabled scams.
- The Federal Trade Commission and the Federal Communications Commission gain clearer enforcement authority and a formal mechanism to coordinate policy and outreach.
- Retailers, payment services, and wire-transfer companies receive model educational materials and guidance to prevent in-store and online scams at points of sale.
- Telecommunications providers benefit from updated definitions and regulatory guidance that support blocking and signaling AI-driven impersonation attempts.
- Law enforcement and public safety agencies improve information sharing and consumer-education efforts through public portals and Sentinel Network integration.
Who Bears the Cost
- Telemarketing and text-messaging entities must adapt to new definitions and disclosures, updating systems and training staff.
- Small businesses and merchants in gift-card and wire-transfer ecosystems incur compliance costs and training obligations to prevent scams.
- Telecom and messaging platforms may incur costs implementing enhanced caller ID, blocking, and disclosure mechanisms, as well as regulatory compliance overhead.
- Federal agencies (FTC and FCC) will need to allocate resources for administration of the Advisory Group, rulemaking, and ongoing reporting.
- Industry stakeholders may need to invest in educational materials and outreach campaigns to align with model practices developed by the Advisory Group.
Key Issues
The Core Tension
The central tension is balancing effective protection from AI-enabled scams with the risk of imposing burdens on legitimate communications and innovation. Strong enforcement and clear disclosures can deter scammers but may also slow legitimate AI-enabled services, especially for smaller businesses that lack large compliance teams.
The bill introduces a comprehensive overlay of AI-related rules on existing consumer-protection and telecom regimes, which could broaden the scope of regulated activities and impose new compliance burdens. The reliance on public-facing portals and advisory materials assumes timely funding and effective dissemination across diverse sectors, including small businesses and retailers.
There is a risk that rapid rulemaking and broad definitions could capture legitimate AI-assisted communications or create ambiguity in rapidly evolving AI technologies, potentially creating enforcement gaps or overreach. Finally, the cross-agency coordination required by the Advisory Group will depend on sustained collaboration among FTC, FCC, and industry stakeholders, which may be uneven across regions and sectors.