TwinLadder

Issue #4

The AI Governance Gap: Why 95% of Legal Teams Lack Documented AI Policies

Thomson Reuters' 2025 survey of 800 law firms found that fewer than 1 in 20 have written AI usage guidelines. We examine what a minimum viable AI policy looks like and how to draft one in under two hours.

AI Governance
Playbooks
Policy Development
Risk Management
March 28, 2025 | 15 min read


TwinLadder Weekly

Issue #4 | March 2025


Editor's Note

Two weeks ago, a managing partner at a firm in Vienna told me his practice had spent EUR 165,000 on a contract review AI platform. Impressive deployment. Full training programme. Executive sponsor. When I asked to see their playbooks, he looked blank. "The tool came with templates," he said.

They had automated their contract review without ever documenting what "good" looks like. The AI was reviewing contracts against generic best-practice templates that had no relationship to the firm's actual risk appetite, client expectations, or commercial judgment. Three different partners were overriding the AI's recommendations in three different directions. The tool was working perfectly. The governance was nonexistent.

This is not an unusual story. It is the norm. I have heard versions of it from Hamburg, from Warsaw, from Madrid. The technology works. The preparation does not. And the result is firms paying premium prices for AI-powered inconsistency.


The 95% Gap: Why Most Firms Are Flying Blind With AI

LegalOn's 2025 survey quantifies what I have been seeing across the European mid-market: 54% of legal teams have no contract review playbook at all. Another 41% have basic clause libraries only. Just 5% have comprehensive coverage.

Playbook Maturity           | % of Legal Teams | Practical Reality
----------------------------|------------------|------------------------------------------------------------------
No playbook at all          | 54%              | AI reviews against generic templates — inconsistent, ungoverned
Basic clause libraries only | 41%              | Some standards exist but gaps produce unpredictable outcomes
Comprehensive coverage      | 5%               | Documented positions, escalation paths, fallback language

Meanwhile, corporate legal AI adoption more than doubled in one year — from 23% in 2024 to 54% in 2025. More teams using AI. Fewer teams governing how. That is a recipe for the kind of inconsistency that shows up in sanctions reports and, eventually, in malpractice claims.

Before AI, inconsistent review was a quality problem. Associate A flags a liability cap; Associate B misses it. Annoying but fixable through supervision. With AI, inconsistent standards get amplified at scale. Without a playbook, your AI reviews a contract, flags 47 issues, and nobody knows which ones matter. With a playbook, it reviews against your documented standards and escalates the five issues that actually violate your position. The tool does exactly what you tell it — but if you have not defined "acceptable," you are automating chaos.

A playbook is not a legal memo. It is a practical rulebook that captures how your team thinks. For each key clause — limitation of liability, indemnification, governing law — you document: what is acceptable, what is acceptable with escalation, what is a red line, and what fallback language you propose. Plume.law's framework distils this into three steps: Identify the clause. Check against your acceptable range. Act — approve, reject, or propose alternative. It works because it breaks "it depends" judgment calls into structured decisions an AI or junior associate can consistently follow.

For European firms, the playbook challenge has a dimension that American firms do not face. Your contracts routinely involve multiple governing laws, EU-level regulatory requirements that overlay national law, and language variations that affect how clauses are interpreted. An indemnification clause governed by German law operates under fundamentally different principles than one governed by English law. Your playbooks need to account for jurisdictional variation — not as an edge case, but as the baseline operating condition.

Start small. High-volume contracts first — NDAs, standard vendor agreements — where your positions are likely already consistent even if undocumented. Interview your senior lawyers. Ask them three questions: "What do you always check in this contract type? What makes you reject one? What is your fallback position on key clauses?" Review your last twenty redlined contracts for patterns. Note where practice varies versus where it is consistent. Codify the results into position statements for each key clause: acceptable, acceptable with escalation, red line, fallback language. Then pilot the playbook against ten contracts and adjust based on false positives and negatives.

Bloomberg Law advises treating playbook maintenance like code maintenance: schedule quarterly reviews, update after significant deals, learn from exceptions. A playbook is not a one-time project. It is a living document that evolves with new contract types, changed risk tolerance, lessons from AI misses, and updated legal requirements. The EU AI Act's full enforcement in August 2026, for instance, will require playbook updates for any contract touching AI systems or data processing.

The 50-lawyer regional firm in Düsseldorf that spent 60 hours documenting their standards before purchasing any AI tool had the right idea. When they finally demoed products, they knew exactly what they needed. They chose a tool that matched their documented workflow, not the other way around. The 25-lawyer corporate boutique in Munich that invested 40 hours in an NDA playbook before deployment estimates it saves five hours weekly. Payback in eight weeks. But the real value is not the time savings — it is the consistency. Their clients know what to expect.


The Competence Question

A firm I advise in Amsterdam built an extremely detailed playbook covering every conceivable scenario for their standard commercial agreements. Associates started ignoring it within three months. Real contracts did not fit the neat categories. The AI flagged everything as an exception. The playbook became more work than manual review.

The failure was not in the ambition. It was in confusing documentation with understanding. A playbook that nobody uses because it is too rigid is worse than no playbook at all — it creates false confidence that standards exist when they have been functionally abandoned.

The deeper problem is this: when you codify partner knowledge into rules, you need the people applying those rules to understand the reasoning behind them. Otherwise, the first contract that does not fit the template produces paralysis. Your associate stares at an unusual revenue-sharing clause, finds no matching playbook entry, and escalates to a partner — exactly the bottleneck you built the system to eliminate.

There is a parallel to how we train associates. A good training partner does not just tell a junior lawyer what the answer is. She explains why — the commercial logic, the risk calculus, the negotiation dynamics that make a particular position sensible or untenable. A playbook that captures the "what" without the "why" produces rule-followers who cannot adapt. The best playbooks include brief annotations explaining the reasoning behind each position. They teach, not just instruct.

Playbooks work when they encode judgment. They fail when they try to replace it. And that distinction — between encoding and replacing — is the central challenge of every AI deployment in legal practice. The tool that makes documented judgment scalable is powerful. The tool that makes human judgment seem unnecessary is a liability waiting to materialise.


What To Do

  1. Conduct a four-week playbook sprint. Week 1: inventory your ten most common contract types and rank by volume and risk. Week 2: interview senior lawyers and review recent redlines for patterns. Week 3: codify standards with acceptable, escalation, and red-line positions for each key clause. Week 4: pilot against ten contracts and adjust. This is not optional preparation. It is the foundation without which no AI contract tool delivers consistent value.

  2. Assign a playbook owner. Playbooks without maintenance become liabilities. One firm built comprehensive standards in 2024, never updated them, and their AI approved contracts that should have been flagged for new GDPR-related data processing requirements. Assign ownership and schedule quarterly reviews. In my experience, the best playbook owners are senior associates, not partners — close enough to daily practice to notice gaps, senior enough to make judgment calls.

  3. Document every exception. When a partner approves a non-standard clause as a one-time deviation, write it down. Undocumented exceptions floating in people's heads are governance failures waiting to happen. Under the EU AI Act's transparency requirements, this documentation may become not just good practice but a regulatory expectation.

  4. Start simple and add complexity only where it adds value. Perfect is the enemy of useful. A three-page NDA playbook that gets used beats a fifty-page treatise that gets ignored. The Amsterdam firm I mentioned learned this the hard way. Build the minimum viable playbook, deploy it, learn from the failures, and iterate.

  5. Build the playbook before buying the tool. 80% of Am Law 100 firms now have AI governance boards, but only 10% of firms overall have formal AI governance policies despite 79% AI adoption. The European numbers are likely worse. Close the governance gap first. The technology will still be there when you are ready.
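The exception log in step 3 need not be elaborate. A minimal append-only record is enough to turn one-off deviations into reviewable data; in the sketch below, the column names and the sample entry are hypothetical and should be adapted to your own matter-numbering conventions.

```python
import csv
import io

# Hypothetical columns; adjust to your firm's conventions.
FIELDS = ["date", "contract_id", "clause", "standard_position",
          "approved_deviation", "approver", "rationale"]

def log_exception(fh, **entry):
    """Append one approved deviation; refuse incomplete records."""
    missing = [f for f in FIELDS if f not in entry]
    if missing:
        raise ValueError(f"exception record incomplete, missing: {missing}")
    csv.DictWriter(fh, fieldnames=FIELDS).writerow(entry)

buf = io.StringIO()
csv.DictWriter(buf, fieldnames=FIELDS).writeheader()
log_exception(
    buf,
    date="2025-03-14",
    contract_id="VND-0042",
    clause="limitation_of_liability",
    standard_position="mutual cap at 12 months' fees",
    approved_deviation="uncapped for direct IP infringement",
    approver="partner initials",
    rationale="strategic client; one-time concession",
)
print(buf.getvalue())
```

Refusing incomplete records is the governance point: an exception with no recorded approver or rationale is exactly the "floating in people's heads" failure the step warns against.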


Quick Reads

  • Gartner predicts 80% of organisations will formalise AI policies by 2026. The EU AI Act's August 2026 enforcement deadline will accelerate this in Europe. The governance gap is closing, but the firms that close it first gain a competitive advantage in client confidence.

  • Legal teams spend an average of 3.2 hours per contract (LegalOn). Across 52% of organisations handling 101-1,000 contracts annually, that represents 300-3,200 hours of review time. The ROI case for playbook-driven AI practically writes itself — but only if the playbook exists.

  • 69% of generic AI models hallucinate legal information. Purpose-built legal AI performs better, but only if you have defined what "correct" looks like. Without a playbook, you do not know when the AI gets it wrong. With one, errors become visible and correctable.

  • AI adoption for contract review grew 75% year-over-year (LegalOn) — further evidence that the market is moving faster than the governance frameworks to support it. The firms building those frameworks now will be the ones advising their peers later.
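For readers who want to reproduce the review-time figure in the second bullet: at 3.2 hours per contract, the 101-to-1,000 contract band works out as below (the "300" low end is the source's rounding of roughly 323 hours).

```python
HOURS_PER_CONTRACT = 3.2  # LegalOn's reported average per contract
low, high = 101, 1000     # the 101-1,000 contracts-per-year band

print(f"low end:  {HOURS_PER_CONTRACT * low:.0f} hours")   # 323
print(f"high end: {HOURS_PER_CONTRACT * high:.0f} hours")  # 3200
```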


One Question

If 54% of legal teams have no documented review standards, were those teams reviewing contracts consistently before AI — or are we only noticing the inconsistency now because the technology exposes it?


TwinLadder Weekly | Issue #4 | March 2025

Helping European professionals build AI competence through honest education.

Included Workflow

4-Week Playbook Starter Kit

Week-by-week guide to building your first contract review playbook. Week 1: Inventory, Week 2: Document practice, Week 3: Codify standards, Week 4: Pilot and iterate.
