TwinLadder Weekly

Issue #4 | March 2025


The AI Playbook Gap: 95% of Legal Teams Are Flying Blind

Most firms rushed into AI without documenting how to use it. Here's how to build the governance framework you should have started with.


Last issue, we compared contract review AI tools. The verdict: they work well—after you've built playbooks.

But here's the uncomfortable truth: 95% of legal teams have playbook gaps. 54% have no playbooks at all.

That's not a typo. According to LegalOn's 2025 survey of legal professionals, the vast majority of teams adopted AI tools without first documenting their review standards.

They're flying blind.

The Numbers That Should Alarm You

LegalOn's survey paints a stark picture:

Status                          Percentage
No playbook at all              54%
Basic clause libraries only     42%
Comprehensive coverage          5%
Total with gaps                 95%

Meanwhile, AI adoption is accelerating. Corporate legal AI adoption more than doubled in just one year—from 23% in 2024 to 54% in 2025.

Here's the math: more teams using AI, fewer teams governing how they use it.

That's a recipe for inconsistency, risk exposure, and the kind of mistakes that end up in sanctions reports.

Why Playbooks Matter More With AI

Before AI, inconsistent review was a quality problem. Associate A might flag a liability cap; Associate B might miss it. Annoying, but fixable through supervision.

With AI, inconsistent standards get amplified. The tool does exactly what you tell it—but if you haven't defined what "acceptable" means, you're automating chaos.

Consider:

  • Without a playbook: AI reviews contract, flags 47 issues, nobody knows which matter
  • With a playbook: AI reviews contract against your documented standards, escalates 5 issues that actually violate your position

The tool is only as good as the standards you feed it.

What a Playbook Actually Contains

A contract review playbook isn't a legal memo or a policy document. It's a practical rulebook that captures how your team thinks.

Core Components

1. Position Statements

What's your firm's position on key terms? Document it explicitly.

LIMITATION OF LIABILITY
- Acceptable: Mutual cap at 2x annual contract value
- Acceptable with escalation: Mutual cap at 1x annual contract value
- Unacceptable: Uncapped liability
- Fallback language: [specific clause text]

2. Red Lines vs. Yellow Lines

Which terms are non-negotiable (red lines) vs. negotiable within limits (yellow lines)?

RED LINES (Never accept):
- Unlimited indemnification for IP claims
- Governing law outside our operating jurisdictions
- Automatic renewal without 90-day notice

YELLOW LINES (Negotiate within parameters):
- Payment terms (target: Net 30, acceptable: Net 45)
- Warranty periods (target: 12 months, acceptable: 6 months)

3. Escalation Triggers

When does an AI-flagged issue need human review? When does it need partner approval?

4. Fallback Language

Pre-approved alternative language when you push back on a term. Don't make your team draft from scratch every time.

The "Identify → Check → Act" Framework

Plume.law recommends a three-step framework that translates legal judgment into consistent rules:

Identify: What clause or concept are we evaluating?
Check: What's our acceptable range? What triggers escalation?
Act: What's the response? Approve, reject, or propose an alternative?

This framework works because it breaks down "it depends" judgment calls into structured decisions an AI (or junior associate) can consistently follow.
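To make the structure concrete, here is a minimal sketch of Identify → Check → Act as data plus one evaluation function. Everything in it is hypothetical: the clause name, the thresholds, and the fallback text are invented for illustration, not taken from any vendor's product or API; a real playbook would encode your firm's documented positions.

```python
# A minimal sketch of the Identify -> Check -> Act framework.
# All clause names, thresholds, and fallback text are invented examples.

PLAYBOOK = {
    "limitation_of_liability": {                       # Identify: the clause being evaluated
        "acceptable": lambda cap: cap >= 2.0,          # cap as a multiple of annual contract value
        "escalate":   lambda cap: 1.0 <= cap < 2.0,    # yellow line: needs human review
        "fallback":   "Mutual cap at 2x annual contract value.",
    },
}

def review_clause(clause: str, value: float) -> str:
    """Check the extracted value against the playbook and return an action."""
    rule = PLAYBOOK.get(clause)
    if rule is None:
        return "escalate: no playbook rule"            # unknown clause -> human review
    if rule["acceptable"](value):
        return "approve"                               # Act: within acceptable range
    if rule["escalate"](value):
        return "escalate: partner approval required"   # Act: yellow line
    return f"reject: propose fallback -> {rule['fallback']}"  # Act: red line

print(review_clause("limitation_of_liability", 2.5))   # approve
print(review_clause("limitation_of_liability", 1.5))   # escalate
print(review_clause("limitation_of_liability", 0.0))   # reject with fallback
```

Note the default at the top of the function: anything the playbook doesn't cover escalates to a human. That is the framework's safety valve, whether the reviewer is an AI tool or a junior associate.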

Building Your First Playbook: A Practical Approach

Start Small

Don't try to document every contract type. Start with:

  • High-volume contracts (NDAs, vendor agreements, standard service contracts)
  • Heavily templated agreements where you can standardize quickly
  • Known time sinks that always end up on your desk

For most firms, that means starting with NDAs. You probably review hundreds annually. Your positions are likely consistent. The risk is low if the AI makes a mistake.

The 4-Week Playbook Sprint

Week 1: Inventory

  • List your 10 most common contract types
  • Rank by volume and risk
  • Select 2-3 for initial playbook development

Week 2: Document Current Practice

  • Interview senior lawyers: "What do you look for in an NDA?"
  • Review your last 20 redlined NDAs for patterns
  • Identify where judgment varies vs. where it's consistent

Week 3: Codify Standards

  • Write position statements for key clauses
  • Define red lines and yellow lines
  • Create fallback language library

Week 4: Pilot and Iterate

  • Run AI against 10 contracts using your playbook
  • Compare AI flags to human review
  • Adjust rules based on false positives/negatives
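The Week 4 scoring step can be sketched in a few lines: compare the AI's flags against the human "gold" decisions on the pilot contracts, then count false positives and false negatives. The contract IDs below are made up for illustration.

```python
# Sketch of the pilot comparison: AI flags vs. human review on 10 contracts.
# Contract IDs are hypothetical.

ai_flagged    = {"c01", "c03", "c04", "c07"}   # issues the AI raised
human_flagged = {"c01", "c04", "c07", "c09"}   # issues human reviewers raised

false_positives = ai_flagged - human_flagged   # flagged but shouldn't be
false_negatives = human_flagged - ai_flagged   # missed but should flag

total_contracts = 10
# Contracts where AI and humans agreed (flagged or not flagged):
correct = total_contracts - len(false_positives) - len(false_negatives)
accuracy = correct / total_contracts

print(f"False positives: {len(false_positives)} -> tighten rules")
print(f"False negatives: {len(false_negatives)} -> add rules")
print(f"Accuracy: {accuracy:.0%}")
```

In this toy run, one false positive means a rule is too broad and one false negative means a rule is missing; each maps directly to an "adjust rules" action in the checklist.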

The Living Document Rule

A playbook isn't a one-time project. It's a living document that evolves with:

  • New contract types you encounter
  • Changed risk tolerance
  • Lessons from AI misses
  • Updated legal requirements

Bloomberg Law advises: treat playbook maintenance like code maintenance. Schedule quarterly reviews. Update after significant deals. Learn from exceptions.


Tool Review: LegalOn (Playbook Focus)

Continuing our tool review series with a focus on playbook functionality

What It Is

LegalOn's core value proposition is playbook-driven contract review. Unlike tools that just flag generic issues, LegalOn lets you encode your standards and review against them.

Playbook Capabilities

What Users Love:

  • 50+ pre-built expert playbooks provide Day 1 value
  • "My Playbooks" feature allows full customization
  • Clear structure for acceptable/unacceptable/fallback positions
  • Playbook rules surface directly during review

What Users Caution:

  • Pre-built playbooks are starting points, not solutions
  • Customization requires significant upfront investment
  • Learning curve for playbook configuration
  • Most value requires sustained playbook development

The Honest Assessment

LegalOn is built for teams willing to invest in playbook infrastructure. The 95% of teams without playbooks? They'll get some value from the pre-built templates—but they won't unlock the real efficiency gains.

Rating: 4.5/5 for playbook-focused teams. 3/5 if you expect plug-and-play without documentation effort.

Pricing Reality

Estimated $300-500/user/month. The ROI depends entirely on whether you'll actually build and maintain playbooks. If you won't, the investment doesn't make sense.


What's Working: Playbook Success Stories

Success Story #1: The NDA Factory 2.0

Firm type: 25-lawyer corporate boutique
Tool: LegalOn with custom playbook
Investment: 40 hours to build NDA playbook

Before: 3 different partners, 3 different approaches to NDA review. 2-hour turnaround per NDA.

After: Single documented standard. AI handles first-pass against playbook. Partners review only escalations.

Result: "Same-day NDA turnaround. But the real win is consistency—clients know what to expect from us."

Key insight: They spent 40 hours building the playbook. They estimate it saves 5 hours weekly. Payback in 8 weeks.


Success Story #2: The Governance-First Team

Firm type: 50-lawyer regional firm
Tool: None (yet)
Investment: 60 hours building playbook documentation before purchasing any AI tool

Approach: Instead of buying first, they documented their contract review standards first. Spent three months codifying partner knowledge into structured playbooks.

Result: "When we finally demoed AI tools, we knew exactly what we needed. We chose a tool that matched our documented workflow, not the other way around."

Key insight: Building the playbook first meant they could evaluate tools against their actual needs—not vendor demos.


Hard Cases: When Playbooks Fail

Hard Case #1: The Exception That Became the Rule

Scenario: Partner approved a non-standard indemnification clause as a one-time exception. It wasn't documented.

Problem: AI continued applying the standard playbook. Junior associates didn't know about the exception. Client complained about inconsistency on the next deal.

User report: "Our playbook was right, but we had 18 months of undocumented exceptions floating around in people's heads."

Lesson: Exception management is part of playbook governance. Document every deviation. Review exceptions quarterly.
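One way to keep deviations out of people's heads is a structured exception record with a built-in review date. A minimal sketch, with every field name and value invented for illustration:

```python
# Hypothetical exception log: each deviation from the playbook gets a
# record with an owner and a review-by date, so quarterly review is a query.
from dataclasses import dataclass
from datetime import date

@dataclass
class PlaybookException:
    clause: str        # which playbook rule was deviated from
    client: str
    approved_by: str
    granted: date
    rationale: str
    review_by: date    # at review, the exception is re-approved or retired

log: list[PlaybookException] = []
log.append(PlaybookException(
    clause="indemnification",
    client="Acme Corp",
    approved_by="Partner X",
    granted=date(2025, 3, 1),
    rationale="One-time carve-out for legacy IP claim exposure.",
    review_by=date(2025, 6, 1),
))

# Quarterly review: surface exceptions due for re-approval.
today = date(2025, 6, 15)
due = [e for e in log if e.review_by <= today]
print(f"{len(due)} exception(s) due for review")
```

The format matters less than the habit: a spreadsheet with the same columns works just as well, as long as every exception has a rationale and a review date.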


Hard Case #2: The Stale Playbook

Scenario: Firm built comprehensive playbook in 2024. Never updated it.

Problem: New data protection requirements emerged. Old playbook didn't address them. AI approved contracts that should have been flagged.

User report: "We had a playbook—it was just 18 months out of date. Might as well have had nothing."

Lesson: Playbooks without maintenance schedules become liabilities. Assign ownership. Schedule reviews.


Hard Case #3: The Too-Rigid Rulebook

Scenario: Team built extremely detailed playbook covering every possible scenario.

Problem: Real contracts don't fit neat categories. Associates started ignoring playbook because it didn't match reality. AI flagged everything as exceptions.

User report: "We over-engineered it. The playbook became more work than just reviewing contracts manually."

Lesson: Start simple. Add complexity only where it adds value. Perfect is the enemy of useful.


Reliability Corner

The Governance-AI Gap

Metric                               Percentage   Source
Legal teams using AI                 54%          ACC/Everlaw 2025
Teams with comprehensive playbooks   5%           LegalOn 2025
Teams with any playbook              46%          LegalOn 2025
Organizations with AI policies       48%          Industry surveys

The gap: More than half of legal teams are using AI. Fewer than half have documented governance. Only 5% have comprehensive coverage.

This Month's Reminder

Generic AI models hallucinate legal information in up to 69% of queries. Purpose-built legal AI tools perform better, but only if you've defined what "correct" looks like.

Without a playbook, you don't know when the AI gets it wrong.


Workflow of the Month: The 4-Week Playbook Starter Kit

Use this checklist to build your first contract playbook.

PLAYBOOK STARTER KIT
━━━━━━━━━━━━━━━━━━━━

WEEK 1: INVENTORY
□ List your 10 most common contract types
□ Rank by: Volume (how many?) | Risk (how bad if wrong?) | Time (how long to review?)
□ Select 2-3 for initial playbook development
□ Identify the "owner" for each contract type

Top 3 contract types to playbook:
1. _________________________________
2. _________________________________
3. _________________________________

WEEK 2: DOCUMENT CURRENT PRACTICE
□ Interview senior lawyers (30 min each)
  - "What do you always check in a [contract type]?"
  - "What makes you reject a [contract type]?"
  - "What's your fallback position on [key clause]?"
□ Review last 20 redlined contracts for patterns
□ Note where practice varies vs. where it's consistent
□ Identify undocumented exceptions in use

Key findings: ___________________________

WEEK 3: CODIFY STANDARDS
For each key clause, document:

□ Clause: _______________________________
  Acceptable: ___________________________
  Acceptable with escalation: ____________
  Unacceptable (red line): _______________
  Fallback language: _____________________

□ Clause: _______________________________
  Acceptable: ___________________________
  Acceptable with escalation: ____________
  Unacceptable (red line): _______________
  Fallback language: _____________________

[Repeat for all key clauses]

WEEK 4: PILOT AND ITERATE
□ Select 10 contracts for pilot review
□ Run AI review against playbook (or manual review using playbook)
□ Compare results to historical decisions
□ Document false positives (flagged but shouldn't be)
□ Document false negatives (missed but should flag)
□ Adjust rules based on findings
□ Schedule first playbook review (30 days out)

Pilot results:
- False positives: _____ (adjust rules to reduce)
- False negatives: _____ (add rules to catch)
- Accuracy rate: _____%

ONGOING MAINTENANCE
□ Assign playbook owner: _________________
□ Schedule quarterly reviews: _____________
□ Document all exceptions when granted
□ Update after significant deals or regulatory changes

Time investment: 20-40 hours for first playbook
ROI: 5-10 hours saved weekly once operational


Quick Hits

Market News:

  • 80% of Am Law 100 firms now have AI governance boards
  • Gartner predicts 80% of organizations will formalize AI policies by 2026
  • Only 10% of firms have formal AI governance policies despite 79% AI adoption

Adoption Metrics:

  • AI adoption for contract review grew 75% year-over-year (LegalOn)
  • Legal teams spend average 3.2 hours per contract (LegalOn)
  • 52% of organizations handle 101-1,000 contracts annually

Coming Next Issue:

  • Texas Passes First Major AI Regulation: What the $200K Fines Mean for Your Practice

Ask the Community

We're building resources for the 95% without comprehensive playbooks. Help us help you:

  1. What contract type would benefit most from a playbook template?
  2. What's stopping you from building playbooks? Time? Knowledge? Not knowing where to start?
  3. Would you use a downloadable playbook starter kit for common contract types?

Reply to share. Anonymized contributions welcome.


TwinLadder Weekly | Issue #4 | March 2025

Helping lawyers build AI capability through honest education.

