SB2367 (AI Accountability and Personal Data Protection Act) creates a new federal tort making any person who “appropriates, uses, collects, processes, sells, or otherwise exploits” an individual’s covered data without that person’s express, prior consent liable in court. The bill defines covered data very broadly — including inferred or derived data, many identifiers, behavioral and biometric data, and even copyrighted content generated by or about an individual — and explicitly treats training and outputs of generative AI as covered activities.
The bill gives individuals a private right of action with remedies that include actual damages, a statutory floor ($1,000) or treble profits (whichever is greater), punitive damages, injunctive relief, and attorney’s fees. It also invalidates predispute arbitration agreements and class/collective-action waivers for claims under the statute and requires clear, separate disclosure and affirmative acknowledgement when data will be shared with third parties.
Finally, the Act is framed as a federal minimum standard and does not preempt state laws that provide greater protections.
At a Glance
What It Does
Creates a federal tort making unauthorized appropriation or exploitation of an individual’s covered data actionable; covers training and outputs of generative AI and imposes statutory remedies, including treble profits or a $1,000 floor, punitive damages, injunctive relief, and fees. It also renders predispute arbitration agreements and joint-action waivers unenforceable for claims under the Act.
Who It Affects
AI developers and providers, data brokers, digital platforms that collect large datasets, advertisers, and any entity that trains models or derives profiles from personal data. It also affects businesses that rely on third-party datasets or terms-of-service consent models and defense counsel and plaintiffs’ firms handling data-liability litigation.
Why It Matters
The bill shifts significant litigation and compliance risk onto entities that use or monetize personal and derived data, narrows the circumstances in which consent is valid, and forces explicit third-party disclosure. It therefore creates a new nationwide baseline for data-control claims and could change how companies source training data and structure user agreements.
What This Bill Actually Does
SB2367 establishes a standalone federal tort: if a person appropriates, uses, collects, processes, sells, or otherwise exploits an individual’s “covered data” without that person’s express, prior consent, the individual can sue. The draft intentionally uses expansive language for “covered data,” capturing not only direct identifiers (names, device IDs, IP addresses, geolocation, biometrics) but also behavioral signals, inferred or derived attributes, and copyrighted material that an individual generated.
The bill expressly counts both the use of covered data to train generative AI systems and content generated by those systems that pertains to an individual as activities covered by the statute.
On remedies, the bill gives plaintiffs a powerful toolkit: recover actual damages or a statutory alternative equal to the greater of treble the defendant’s profits from exploiting that individual’s covered data or $1,000; seek punitive damages; pursue injunctive relief to stop ongoing misuse; and obtain attorney’s fees and costs. Defendants can raise an affirmative defense if they can prove the individual gave express, prior consent, but the statute bars consent obtained through coercion or deception and rejects consent tied to mandatory use of a product when the data exploitation goes beyond what is reasonably necessary for that product or service. SB2367 also removes predispute arbitration agreements and predispute joint-action waivers from the toolkit companies commonly use to limit consumer litigation.
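As a back-of-the-envelope illustration of the remedy structure just described (a sketch of the arithmetic only, not legal advice or an authoritative reading of the bill, and assuming the plaintiff elects whichever measure yields more), the election between actual damages and the statutory alternative can be modeled as:

```python
def statutory_recovery(actual_damages: float, defendant_profits: float) -> float:
    """Model of the remedy election described above: actual damages, or a
    statutory alternative equal to the greater of treble the defendant's
    profits from the individual's covered data or $1,000. Punitive damages,
    injunctive relief, and attorney's fees are separate and not modeled."""
    STATUTORY_FLOOR = 1_000
    statutory_alternative = max(3 * defendant_profits, STATUTORY_FLOOR)
    return max(actual_damages, statutory_alternative)

# A defendant earned $2,500 from one individual's data, but the individual
# can prove only $200 in actual damages: treble profits dominate.
print(statutory_recovery(200, 2_500))   # 7500
# Negligible monetization and no provable damages: the $1,000 floor applies.
print(statutory_recovery(0, 100))       # 1000
```

The floor is what makes small-harm claims financially viable: even where neither damages nor profits can be proven at scale, recovery never drops below $1,000 per plaintiff.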
Claims under this Act must be adjudicated in court; courts (not arbitrators) decide whether the Act applies to a dispute. The bill further requires that any third party authorized to receive or exploit covered data be separately and specifically disclosed to the individual at the time consent is requested; a simple hyperlink or passive notice in a privacy policy does not suffice.
Finally, the Act is written as a minimum federal standard and explicitly preserves any state law that offers greater protection or remedies.
The Five Things You Need to Know
The statute defines “covered data” to include inferred or derived information and copyrighted content created by an individual, meaning outputs that replicate or substantially derive from a person’s material can trigger liability.
A prevailing plaintiff can recover the greater of actual damages, treble the profits tied to that individual’s data, or a $1,000 statutory floor, in addition to punitive damages, injunctive relief, and attorney’s fees.
Consent is only an affirmative defense if it is express and prior; the bill voids consent obtained by coercion, deception, or as a condition of service when data use exceeds what’s reasonably necessary.
Predispute arbitration agreements and predispute joint-action waivers are unenforceable for claims under the Act, and a court must decide the Act’s applicability rather than an arbitrator.
The bill requires separate, prominent disclosure and affirmative acknowledgement of any third parties who will receive or exploit covered data; embedding that disclosure in a privacy policy or hyperlink is not enough.
Section-by-Section Breakdown
Short title
Identifies the Act as the "AI Accountability and Personal Data Protection Act." This is a labeling provision only, but it signals congressional intent to tie the measure to AI accountability and personal-data control themes that appear in the operative text.
Definitions — broad scope of covered data and AI terms
Sets the operative vocabulary. “Covered data” is defined broadly to reach direct identifiers, device and advertising IDs, geolocation, biometrics, behavioral signals, inferred or derived attributes used for profiling, and copyrighted materials generated by individuals. The definition explicitly includes data used to train generative AI and outputs of generative systems that pertain to an individual. The bill imports the federal definition of “artificial intelligence” from the National AI Initiative Act and defines “generative artificial intelligence system” as systems that produce novel media from prompts. These definitions determine the statute’s reach — particularly the inclusion of derived or inferred data and copyrighted material — and will shape both compliance programs and litigation strategies.
Federal tort and private right of action; remedies
Creates liability for using covered data without express, prior consent and authorizes individuals to sue in state or federal court. The remedies are structured to be both compensatory and punitive: plaintiffs may recover actual damages or a statutory alternative (treble profits tied to the individual’s data or $1,000, whichever is greater), punitive damages, injunctive relief, and attorney’s fees. The treble-profits option links remedies to defendants’ monetization, potentially creating high exposure where targeted data drives revenue. Injunctive relief adds prospective compliance leverage for plaintiffs.
Consent rules and arbitration carve-out
Sets an affirmative defense for defendants that can prove express, prior consent but invalidates consent obtained through coercion, deception, or conditional acceptance where the data use exceeds what is necessary to provide the service. The section also strips the Federal Arbitration Act’s ability to compel arbitration for claims under this Act: predispute arbitration agreements and predispute joint-action waivers are unenforceable, courts (not arbitrators) decide applicability, and certain collective-bargaining arbitration clauses remain unaffected. That court-control clause shifts forum and procedure expectations for defendants and plaintiffs alike.
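The consent rules in this section amount to a checklist, which can be sketched as a pre-flight validity test. This is a hypothetical compliance sketch: the `ConsentRecord` shape and its field names are invented for illustration and do not come from the bill text.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentRecord:
    """Hypothetical record of one individual's consent; fields mirror the
    conditions described in this section, not statutory language."""
    express: bool                  # affirmative, unambiguous consent
    obtained_at: datetime          # must predate the data use
    coerced: bool
    deceptive: bool
    conditioned_on_service: bool   # consent was required to use the product
    use_exceeds_necessity: bool    # exploitation beyond what the service needs

def consent_is_valid_defense(c: ConsentRecord, use_at: datetime) -> bool:
    """True only if the consent could support the affirmative defense as
    described: express and prior, not coerced or deceptive, and not a
    condition of service where the data use exceeds necessity."""
    if not c.express or c.obtained_at >= use_at:
        return False
    if c.coerced or c.deceptive:
        return False
    if c.conditioned_on_service and c.use_exceeds_necessity:
        return False
    return True
```

Note the last rule: conditioning consent on use of a product is not itself disqualifying; it voids the defense only when the data use also exceeds what the service reasonably requires.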
Third‑party disclosure and affirmative acknowledgement
Requires that any third party who will receive or exploit covered data be specifically and clearly disclosed to the individual at the time consent is sought. The disclosure must be presented separately and cannot be satisfied by burying the information in a privacy policy or by providing only a hyperlink. This provision changes standard consent mechanics: companies must implement discrete, affirmative notices and capture an acknowledgement that the individual actually saw and understood which third parties will use their data.
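One way to operationalize those consent mechanics is a capture record that fails unless every recipient was disclosed separately and affirmatively acknowledged. Again, this is an illustrative sketch with invented names, not a structure the bill prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class ThirdPartyDisclosure:
    """One named recipient of covered data, as disclosed to the individual."""
    name: str                    # each third party named specifically
    purpose: str                 # what it will do with the covered data
    acknowledged: bool = False   # individual affirmatively acknowledged it

@dataclass
class ConsentCapture:
    presented_separately: bool   # not buried in a privacy policy or hyperlink
    disclosures: list[ThirdPartyDisclosure] = field(default_factory=list)

    def satisfies_disclosure_rule(self) -> bool:
        """Every third party must be separately disclosed and affirmatively
        acknowledged; passive notice alone fails the check."""
        return self.presented_separately and all(
            d.acknowledged for d in self.disclosures
        )
```

The point of the sketch is the `all(...)` condition: a single unacknowledged recipient, or a disclosure folded into a general privacy policy, invalidates the entire capture.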
Relationship to state law — minimum federal standard
States that the Act does not preempt state laws and is intended as a minimum baseline; states may provide greater rights or remedies. This preserves state common-law claims and statutory regimes and invites parallel litigation strategies invoking both the federal tort and more protective state rules.
Who Benefits and Who Bears the Cost
Who Benefits
- Individual consumers whose data is used to train models or to generate content that imitates them — they gain an enforceable federal right and a suite of remedies tied to defendants’ profits, making claims financially meaningful even for small harms.
- Creators and copyright holders whose user-generated works are used to train AI or reproduced by models — the inclusion of copyrighted material within ‘covered data’ gives them a distinct pathway to seek redress when their works are exploited without consent.
- Plaintiffs’ and consumer protection attorneys — the statutory damages floor, treble-profits option, fee-shifting, and elimination of arbitration waivers make consumer and data-centric litigation more commercially viable.
- State regulators and advocates — the Act’s non-preemption language allows states to coordinate enforcement and to rely on stronger state standards alongside the federal tort.
Who Bears the Cost
- AI developers, model trainers, and platform operators that rely on large, mixed-origin datasets — they face increased compliance costs to obtain express, prior consent and possible high damages where monetization is linked to an individual’s data.
- Data brokers and advertising platforms that trade in behavioral and inferred profiles — the need for clear third‑party disclosures and affirmative consent will disrupt current notice-and-choice models and could eliminate some revenue streams.
- Small and medium enterprises that embed third-party AI services or use off-the-shelf models — these businesses will need contractual protections and assurances from vendors, potentially paying higher licensing fees or conducting new risk assessments.
- Insurance carriers and corporate legal budgets — insurers underwriting cyber, privacy, or media-liability risks may see increased claims exposures, and companies may face higher defense and indemnity costs in federal and state courts.
Key Issues
The Core Tension
The central tension is between individual control over personal and derived data — enforced by a strict express-consent rule and strong remedies — and the societal and commercial value of large, diverse datasets for AI development. Protecting individuals’ data rights through granular, consent-driven rules reduces privacy harms, but it risks fragmenting training datasets, raising compliance costs, and slowing innovation that depends on broad data access.
The bill leaves several hard implementation questions unresolved. Causation and proof will be thorny: plaintiffs must connect a particular company’s use of their covered data to specific monetization or to a generative output that “pertains” to them.
The statute’s broad sweep over derived and inferred data raises evidentiary challenges — how do you show an inference is ‘about’ a person in the actionable sense? The treble-profits remedy ties liability to monetization, but allocating profits to an individual’s data in complex, aggregated-ad-revenue ecosystems will require novel accounting or expert proof, inviting protracted discovery and disputes.
The interplay with copyright and fair use is unsettled. Including copyrighted material that an individual generated brings traditional IP claims and privacy claims into the same lane; courts will need to sort when copyright remedies, fair-use defenses, or this tort govern.
The absence of explicit public-interest or research exceptions may chill academic and noncommercial model training unless narrow consent mechanisms or exemptions are later added. Cross-border data flows and multinational providers create jurisdictional complexity: the Act applies to commerce “in or affecting interstate or foreign commerce,” but enforcing it against foreign entities or handling data collected under other legal regimes will complicate compliance and contracting.