Codify — Article

RISE Act grants conditional civil immunity to AI developers

Conditional safe harbor requires model cards, model specifications, and clear limitations disclosures for AI used by learned professionals.

The Brief

The Responsible Innovation and Safe Expertise Act of 2025 creates a conditional safe harbor from civil liability for certain errors by artificial intelligence products when used by learned professionals. To qualify, developers must publicly release a model card and a model specification before deployment, with redactions allowed only for trade-secret information and accompanied by a contemporaneous justification.

The bill also requires clear documentation of known limitations, failure modes, and appropriate domains of use.

Immunity applies only to acts or omissions that do not amount to recklessness or willful misconduct, and the act preempts state-law claims arising from immunized errors. It also imposes a duty to update model cards, specifications, and related documentation within 30 days of deploying a new version or discovering a material new failure mode, with liability consequences if the update is not made and harm follows.

The effective date is December 1, 2025, and the measure targets professional services delivered with AI tools, aiming to balance innovation with accountability.

At a Glance

What It Does

Before deployment, developers must publish a model card and a model specification, including any stated limitations, and provide documentation of known failure modes and suitable use cases. Immunity from civil liability for errors arises when the AI is used by a learned professional in providing services.

Who It Affects

AI developers, learned professionals who use AI tools (licensed doctors, lawyers, engineers, and similar), and the clients who receive professional services relying on AI-backed recommendations.

Why It Matters

Establishes a transparent baseline for AI use in professional settings, clarifying liability boundaries while encouraging responsible innovation through defined disclosures.


What This Bill Actually Does

This bill creates a safety framework around AI in professional services. It requires developers to publish model cards and model specifications before releasing an AI product, along with disclosures about what the AI can and cannot do.

It also allows redactions for trade secrets, provided the developer supplies a contemporaneous justification, and requires clear information about limitations and appropriate domains of use. If a professional uses the AI and relies on it for client work, the developer may be immune from civil liability for errors, so long as the developer did not act recklessly or engage in willful misconduct.

A key feature is the duty to keep disclosures up to date. If a new version is deployed or a material new failure mode is discovered, the model card, model specification, and related documentation must be updated within 30 days.

Immunity is lost if the required updates are not made within that window and harm follows. The measure also preempts state-law claims for immunized errors, though exceptions exist for fraud, knowing misrepresentation, and conduct outside professional use.

The act takes effect on December 1, 2025, and applies to acts or omissions occurring on or after that date. Taken together, the bill seeks to reduce uncertainty around AI errors in professional contexts by mandating transparency and ongoing governance, while offering a liability shield to developers who adhere to these requirements.

The Five Things You Need to Know

1

Immunity is conditional: it shields AI developers from liability for errors that arise when their products are used by learned professionals.

2

Predeployment model cards and model specifications must be publicly released; redactions allowed with justification.

3

Developers must provide clear documentation of known limitations and appropriate domains of use.

4

Immunity excludes recklessness or willful misconduct and does not apply to fraud or conduct outside professional use; state-law claims are preempted for immunized errors.

5

Effective date is December 1, 2025, with a 30-day update window for new versions or new failure modes.

Section-by-Section Breakdown

Every bill we cover gets an analysis of its key sections.

Section 3

Definitions of AI-related terms

This section defines key terms such as artificial intelligence, client, developer, error, learned professional, model card, and model specification. It creates the threshold vocabulary the bill uses across all provisions, grounding the immunity concept in concrete roles and artifacts.

Section 4(a)

Safe harbor eligibility prerequisites

To qualify for immunity, a developer must publicly release a model card and a model specification before deployment. Redactions are permitted for trade secrets, provided there is contemporaneous justification. The developer must also supply clear documentation of known limitations and appropriate domains of use.

Section 4(b)

Scope of immunity

Immunity covers acts or omissions that do not amount to recklessness or willful misconduct by the developer. It creates a protective shield only when the specified predeployment disclosures and limitations are in place and properly applied in professional contexts.

Section 4(c)

Duty to update

If a new version is deployed or a material new failure mode is discovered, the model card, model specification, and accompanying documentation must be updated within 30 days. Failure to update that proximately causes harm can nullify immunity for subsequent errors.

Section 4(d)

Preemption of state liability

Express preemption applies to state-law claims for immunized errors. However, the bill preserves immunity-related protections while carving out exceptions for fraud, knowing misrepresentation, or conduct outside the professional use of the AI product.

Section 5

Preservation of other immunities

Nothing in this act alters other immunities under federal or state law that are unrelated to the immunity established under section 4.

Section 6

Effective date and applicability

The act takes effect on December 1, 2025, and applies to acts or omissions occurring on or after that date, aligning implementation with the broader governance and disclosure requirements established in the bill.


Who Benefits and Who Bears the Cost

Every bill creates winners and losers. Here's who stands to gain and who bears the cost.

Who Benefits

  • AI developers who publicly disclose model cards and specifications gain a clearer, legally protected operating environment when their products are used by learned professionals.
  • Licensed learned professionals who rely on AI will have more transparent tools and defined boundaries for risk.
  • Clients receiving AI-assisted professional services benefit from clearer expectations about the responsible use and limitations of AI tools.
  • Professional services firms implementing AI governance gain standardized disclosure requirements that reduce ambiguity in liability.
  • Regulators and policymakers gain a framework balancing innovation with accountability through transparency requirements.

Who Bears the Cost

  • Developers who choose to publish model cards and specifications may incur costs for disclosure, documentation, and ongoing updates.
  • Organizations must allocate resources to maintain up-to-date model cards, specifications, and limitation docs, especially after new AI versions.
  • Compliance teams in professional services firms may incur training and governance costs to ensure proper use and disclosure of AI tools.
  • Clients may face higher costs if services require AI-enabled tools with mandated disclosures and domains of use.
  • Regulated industries may bear the burden of integrating the update cadence and monitoring obligations into existing workflows.

Key Issues

The Core Tension

The central dilemma is balancing rapid, beneficial AI deployment in professional services with the need to prevent harm, without stifling innovation or disclosing sensitive trade secrets. The bill trades a broad immunity shield for developers against litigation risk in professional use, but the precise boundaries of “non-reckless” behavior and the sufficiency of disclosures remain areas of potential ambiguity and dispute.

The bill presents a deliberate tension between encouraging AI innovation and ensuring accountability through transparency. The mandatory disclosures (model cards and model specifications) and the duty to update create a governance framework that reduces information asymmetry between developers and professional users.

However, publishing model details and maintaining updates could reveal trade secrets or sensitive configurations, raising concerns about competitiveness and security. The immunity provisions rely on non-reckless behavior, which leaves unresolved questions about what constitutes reasonable care across rapidly evolving AI systems and diverse professional contexts.

The preemption of state-law claims for immunized errors also concentrates liability within a federal framework, potentially reducing state-level remedies for some harmed parties, while exceptions for fraud or out-of-scope conduct preserve some accountability channels. Finally, the December 1, 2025 effective date provides a transition window that may require firms to adjust existing AI deployments and governance structures.
