TwinLadder Weekly

Issue #5 | April 2025


Texas Passes First Major AI Regulation: What Lawyers Must Know

TRAIGA brings real enforcement teeth. Up to $200,000 per violation. Here's what changes on January 1, 2026.


On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) into law. It takes effect January 1, 2026.

This isn't a toothless guidance document. TRAIGA has enforcement mechanisms, civil penalties, and specific prohibited practices. If you practice in Texas—or have clients who do—this matters.

What TRAIGA Does (And Doesn't Do)

Unlike the EU AI Act or Colorado's approach, Texas didn't create a "high-risk AI" classification system. Instead, TRAIGA takes a simpler, more targeted approach:

  1. Prohibits specific harmful AI uses
  2. Requires disclosure for government and healthcare AI
  3. Creates enforcement with real penalties
  4. Establishes a regulatory sandbox for innovation

The absence of a risk-tiering framework is notable. You won't find categories like "high-risk" or "limited-risk" AI systems here. Texas focused on what AI can't do rather than how risky it might be.

The Prohibited Practices

TRAIGA explicitly prohibits AI systems designed to:

  • Harm another person: AI designed to cause physical, psychological, or financial harm
  • Engage in criminal activity: AI built for illegal purposes
  • Infringe constitutional rights: AI that restricts speech, assembly, due process, etc.
  • Unlawfully discriminate: AI that discriminates against protected classes
  • Manipulate human behavior: deceptive AI that exploits cognitive vulnerabilities
  • Assign social scores (government only): government entities can't use AI for social credit systems
  • Capture biometric data without consent: facial recognition, voice prints, etc. without permission

Key distinction: The standard is intentional design. Disparate impact alone isn't sufficient—prosecutors must show intent to discriminate, harm, or infringe.

Who's Covered

TRAIGA applies to anyone who:

  • Promotes, advertises, or conducts business in Texas
  • Produces products or services used by Texas residents
  • Develops or deploys AI systems in Texas

Translation: If you have Texas clients or Texas-based operations, you're covered.

The law distinguishes between:

  • Developers: Those who create AI systems
  • Deployers: Those who use AI systems in their operations
  • Government entities: Additional requirements for state agencies

The Penalties

Here's where TRAIGA gets serious:

  • Curable violation: $10,000 - $12,000 per violation
  • Breach of cure commitment: $10,000 - $12,000 per breach
  • Uncurable violation: $80,000 - $200,000 per violation
  • Continuing violation: $2,000 - $40,000 per day

The 60-day cure period matters. If the Texas Attorney General notifies you of a violation, you have 60 days to fix it. Fail to cure—or make false representations about fixing it—and penalties escalate dramatically.

No private right of action: Only the Attorney General can enforce TRAIGA. Clients can't sue you directly under this statute, but they can file complaints that trigger AG investigations.
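For budgeting exposure, the penalty ranges above can be turned into a quick back-of-the-envelope calculator. This is a sketch: the dollar figures come from the statute's ranges as listed above, but the function and structure are illustrative, not anything TRAIGA prescribes.

```python
# Back-of-the-envelope TRAIGA penalty exposure.
# Dollar ranges are the statutory figures listed above;
# names and structure are illustrative only.

PENALTY_RANGES = {
    "curable": (10_000, 12_000),       # per violation (if not cured in 60 days)
    "cure_breach": (10_000, 12_000),   # per breach of a cure commitment
    "uncurable": (80_000, 200_000),    # per violation
    "continuing": (2_000, 40_000),     # per day
}

def exposure(kind: str, count: int = 1) -> tuple:
    """Return (min, max) dollar exposure for `count` violations or days."""
    lo, hi = PENALTY_RANGES[kind]
    return (lo * count, hi * count)

# Ten uncurable violations:
print(exposure("uncurable", 10))    # (800000, 2000000)
# A continuing violation running 30 days:
print(exposure("continuing", 30))   # (60000, 1200000)
```

The point of the exercise: uncurable and continuing violations stack quickly, which is why the 60-day cure window matters so much.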

The Safe Harbors

TRAIGA provides meaningful defenses:

1. NIST compliance: Substantial compliance with the NIST AI Risk Management Framework (or a similar recognized framework) creates an affirmative defense against enforcement.

2. Third-party misuse: You're not liable if someone else misuses your AI system in ways TRAIGA prohibits, as long as you didn't design it for prohibited purposes.

3. Good-faith testing: Discovering violations through internal testing or good-faith audits won't trigger liability if you address them.

4. Sector-specific exemptions: Financial institutions complying with federal and state banking laws, insurance entities subject to anti-discrimination statutes, and HIPAA-covered healthcare uses have certain exemptions.

The Regulatory Sandbox

This is TRAIGA's most innovative feature. The Texas Department of Information Resources will administer a sandbox program that lets companies:

  • Test AI systems without full regulatory compliance
  • Operate temporarily exempt from certain licensing requirements
  • Run pilot programs for up to 36 months
  • Get regulatory feedback before full deployment

For legal tech companies and law firms experimenting with AI, this could provide a safer testing environment—though the sandbox rules aren't yet finalized.

What This Means for Legal AI Tools

For Contract Review AI

The discrimination prohibition matters. If your AI tool makes decisions about contracts—particularly employment, lending, or housing-related agreements—ensure it's not producing discriminatory outcomes. The intent standard provides some protection, but "we didn't know" won't work if the discriminatory design was foreseeable.

For Legal Research AI

The harm and manipulation provisions are less directly applicable, but watch the disclosure requirements. If you're deploying AI that interacts with clients (chatbots, intake systems), disclosure may be required depending on context.

For AI-Assisted Decision Making

Any AI used to make consequential decisions about people—hiring, lending, housing—needs review. The protected-class discrimination prohibition is broad.

The Federal Preemption Question

Here's the uncertainty: On December 11, 2025, President Trump signed an executive order proposing a federal AI policy framework that could preempt inconsistent state laws.

The King & Spalding analysis notes: "The future of TRAIGA is clouded by the possibility of federal preemption on state AI regulations, which could limit or nullify its effect."

Practical advice: Comply with TRAIGA now. If federal preemption happens, you'll be ahead of any federal framework. If it doesn't, you're already compliant.


Tool Review: AI Governance Platforms

With TRAIGA and similar laws coming, governance tools are becoming essential

What's Emerging

A new category of tools is developing to help organizations manage AI compliance:

  • AI inventory systems: track all AI in use across the organization (e.g., Credo AI, TruEra)
  • Bias detection: test AI outputs for discriminatory patterns (e.g., IBM AI Fairness 360, Fiddler)
  • Audit trail platforms: document AI decisions for compliance (e.g., OneTrust AI Governance)
  • Policy management: maintain and enforce AI use policies (e.g., Securiti.ai)

The Honest Assessment

These tools are nascent. Most are enterprise-priced and designed for large organizations. For mid-market firms, the more practical approach:

  1. Inventory manually: List every AI tool in use
  2. Document purposes: What each tool does, what decisions it influences
  3. Review vendor compliance: Check vendor NIST compliance claims
  4. Build internal policies: Don't wait for perfect tools

Rating: Too early to recommend specific governance platforms for mid-market. Focus on manual compliance processes for now.
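The four manual steps above don't require a platform; a plain spreadsheet or CSV file is enough to start. A minimal sketch of what such an inventory record might look like (the tool name and field names here are hypothetical, not prescribed by TRAIGA or any framework):

```python
# Minimal manual AI inventory: a plain CSV, no enterprise platform needed.
# Tool name and field names below are illustrative examples only.
import csv

FIELDS = ["tool", "purpose", "decisions_influenced",
          "vendor_nist_claim", "reviewed_on"]

inventory = [
    {"tool": "ExampleDraft AI",  # hypothetical tool name
     "purpose": "first-draft contract clauses",
     "decisions_influenced": "none (all output human-reviewed)",
     "vendor_nist_claim": "vendor claims NIST AI RMF alignment; not yet verified",
     "reviewed_on": "2025-04-01"},
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(inventory)
```

One row per tool, updated whenever a practice group adopts something new, gets you most of the compliance value of the enterprise platforms at zero cost.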


What's Working: Compliance Success Stories

Success Story #1: The Proactive Inventory

Firm type: 40-lawyer Texas firm
Challenge: Unknown AI exposure across practice groups

Approach: Before TRAIGA passed, conducted firm-wide AI audit. Discovered 12 different AI tools in use—some unknown to management.

Result: "We found three tools with questionable data handling practices. Replaced them before the law took effect. The audit took 20 hours but probably saved us from a compliance nightmare."

Key insight: You can't govern what you don't know exists. Inventory first.


Success Story #2: The NIST Framework Adopter

Firm type: Legal tech company (contract review AI)
Challenge: Prepare for multi-state AI regulation

Approach: Mapped their AI development process to NIST AI Risk Management Framework. Documented everything.

Result: "When TRAIGA passed, we already had the affirmative defense documentation. Colorado compliance was similar. One framework, multiple jurisdictions."

Key insight: NIST compliance creates safe harbors in multiple states. Worth the investment.


Hard Cases: Where TRAIGA Creates Uncertainty

Hard Case #1: The Disparate Impact Dilemma

Scenario: Contract review AI flags more agreements from minority-owned businesses for manual review—not by design, but due to training data patterns.

Problem: TRAIGA requires intent for discrimination liability. But is continuing to use AI that you know produces disparate impact tantamount to intent?

Uncertainty: The statute says disparate impact alone isn't sufficient. But continued use after discovering disparate impact might look like intentional discrimination.

Practical advice: Monitor for disparate impact. Document remediation efforts. Don't assume the intent standard protects willful ignorance.


Hard Case #2: The Multi-State Compliance Maze

Scenario: National firm uses same AI tools across all offices. Texas has TRAIGA. Colorado has the Colorado AI Act. California has its own rules coming.

Problem: Different frameworks, different requirements, different timelines. Do you need separate compliance programs per state?

Uncertainty: NIST compliance creates safe harbors in both Texas and Colorado, but frameworks aren't identical.

Practical advice: Build to the highest common denominator. NIST compliance + documentation + bias monitoring covers most state requirements.


Hard Case #3: The Third-Party Tool Question

Scenario: You use Harvey, LegalOn, or Lexis AI. If those tools violate TRAIGA, who's liable?

Problem: TRAIGA says you're not liable for third-party misuse if you didn't design it for prohibited purposes. But you're a deployer, not just a user.

Uncertainty: Does deployer liability attach if you knew (or should have known) the tool had compliance issues?

Practical advice: Vendor due diligence matters more now. Ask about NIST compliance. Get representations in contracts. Document your evaluation process.


Reliability Corner

The State-by-State Patchwork

  • Texas (TRAIGA, effective January 2026): prohibited practices + AG enforcement
  • Colorado (AI Act, effective February 2026): high-risk AI disclosures
  • California (SB 574, pending): mandatory AI review for lawyers
  • Utah (AI Policy Act): consumer disclosure requirements
  • ~25 states (ethics guidance only): verification and competence standards

This Month's Ethics Update

Oregon's Formal Opinion 2025-205 addressed AI billing practices directly: "If the use of AI results in significant time savings, lawyers may not engage in billing practices that duplicate charges or falsely inflate billable hours."

Translation: If AI cuts your research time from 3 hours to 30 minutes, you can't bill 3 hours.


Workflow of the Month: AI Disclosure Decision Tree

Use this when determining whether—and how—to disclose AI use.

AI DISCLOSURE DECISION TREE
━━━━━━━━━━━━━━━━━━━━━━━━━━━

START: Did you use AI on this matter?
│
├─ NO → Document in file notes. No disclosure required.
│
└─ YES → What type of AI use?
    │
    ├─ LEGAL RESEARCH
    │   └─ Were all citations verified?
    │       ├─ YES → Document verification in file.
    │       │        Check local court rules for filing disclosure.
    │       └─ NO → STOP. Verify before proceeding.
    │
    ├─ DOCUMENT DRAFTING
    │   └─ Is this a court filing?
    │       ├─ YES → Check local rules:
    │       │        □ Pennsylvania: Disclosure required
    │       │        □ Federal: Varies by district
    │       │        □ State: Check local requirements
    │       │        Document AI use in file regardless.
    │       └─ NO → Document in file. Consider client disclosure.
    │
    ├─ CONTRACT REVIEW / ANALYSIS
    │   └─ Was human review performed?
    │       ├─ YES → Document AI + human review process.
    │       │        Disclose per client agreement.
    │       └─ NO → STOP. Human review required.
    │
    └─ CLIENT-FACING (chatbots, intake)
        └─ Does AI interact directly with client?
            ├─ YES → Disclosure required (especially in TX for
            │        government/healthcare contexts).
            │        Obtain consent where required.
            └─ NO → Document in file notes.

TEXAS-SPECIFIC (TRAIGA):
□ Government entity use: Disclosure to consumer required
□ Healthcare AI: Disclosure to patient required
□ Biometric data: Consent required before capture

DOCUMENTATION CHECKLIST:
□ AI tool(s) used: _________________________
□ Purpose: _______________________________
□ Human review by: ________________________
□ Verification steps: _______________________
□ Disclosure made: YES / NO / N/A
□ Client consent: YES / NO / N/A
□ Date: _________________________________
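Firms that want this logic inside their intake or matter-management tooling can mirror the tree in a few lines. A sketch only: the category names and parameters are hypothetical, the outcomes track the flowchart above, and none of this constitutes legal advice.

```python
# Executable version of the AI disclosure decision tree above.
# Category names and parameters are illustrative; outcomes mirror the flowchart.

def disclosure_action(use_type, *, citations_verified=False,
                      court_filing=False, human_review=False,
                      direct_interaction=False):
    """Return the recommended action string for a given AI use."""
    if use_type == "none":
        return "Document in file notes. No disclosure required."
    if use_type == "research":
        if not citations_verified:
            return "STOP: verify all citations before proceeding."
        return ("Document verification in file; "
                "check local court rules for filing disclosure.")
    if use_type == "drafting":
        if court_filing:
            return ("Check local rules (e.g., Pennsylvania requires disclosure); "
                    "document AI use in file regardless.")
        return "Document in file; consider client disclosure."
    if use_type == "contract_review":
        if not human_review:
            return "STOP: human review required."
        return "Document AI + human review process; disclose per client agreement."
    if use_type == "client_facing":
        if direct_interaction:
            return ("Disclosure required (especially in TX for government/"
                    "healthcare contexts); obtain consent where required.")
        return "Document in file notes."
    raise ValueError(f"unknown use type: {use_type}")
```

Encoding the tree this way also gives you a natural place to log the documentation-checklist fields alongside each answer.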

Time investment: 2-5 minutes per matter
Why it matters: Disclosure requirements are expanding. Build the habit now.


Quick Hits

Regulatory News:

  • TRAIGA signed June 22, 2025, effective January 1, 2026
  • Texas AI Council created to oversee sandbox program
  • Federal preemption executive order creates uncertainty

State Bar Updates:

  • ~50% of states have now issued AI guidance
  • New York requires 2 CLE credits in AI competency by Q3 2025
  • Pennsylvania mandates disclosure in all court submissions

Coming Next Issue:

  • AI in Small Claims: The £7.50 Legal Letter—UK's Garfield.Law gets SRA approval

Ask the Community

TRAIGA raises practical questions we're researching:

  1. How are you handling AI vendor due diligence? Do you ask about NIST compliance?
  2. What does your AI inventory look like? How many tools does your firm actually use?
  3. Multi-state firms: How are you approaching state-by-state compliance variations?
  4. Would you use a TRAIGA compliance checklist template?

Reply to share. Anonymized contributions welcome.



Helping lawyers build AI capability through honest education.

