TwinLadder Weekly

Issue #3 | March 2025


Editor's Note

Every contract review AI vendor claims "85% faster." I have been hearing this number for two years now, from different companies, using different methodologies, applied to different contract types. It has become the "four out of five dentists recommend" of legal technology.

So we did something vendors do not love: we synthesised actual user reviews from G2, Capterra, legal tech forums, and practitioner interviews. Not vendor demos. Not cherry-picked case studies. What practitioners say when the sales team is not listening.

The short version: the tools work. The time savings are real. But there is an asterisk the size of a playbook that nobody mentions in the pitch meeting. And the difference between "this tool transformed our practice" and "we paid a fortune for software nobody uses" almost always comes down to what happened before the purchase, not after. I have seen this pattern play out in firms from Stockholm to Lisbon — the technology is only as good as the preparation that precedes it.


Contract Review AI: What Practitioners Actually Report

The market has matured significantly. You are no longer choosing between AI and no AI — you are choosing between platforms, each with distinct strengths and limitations. The landscape in early 2025 looks like this:

Tool      | Pricing              | Strength                  | User Rating
LegalOn   | ~$300-500/user/month | Playbook-driven review    | 4.5/5
Spellbook | ~$179/user/month     | Drafting + review in Word | 4.3/5
Luminance | Enterprise pricing   | M&A due diligence         | 4.6/5
Kira      | Enterprise pricing   | High-volume extraction    | 4.2/5
Ironclad  | Enterprise pricing   | Full CLM platform         | 4.7/5

User ratings cluster between 4.2 and 4.7 out of 5. But ratings do not tell the full story.

Users consistently report significant time savings on first-pass review of standard contracts. NDA reviews that took two hours compress to thirty minutes. Turnaround drops from multiple days to same-day. One user reported: "We're turning around contracts the next day, which used to take multiple days or even weeks." But buried in the positive reviews is a caveat that matters enormously: "The biggest hurdle was the initial time investment to build our playbooks. It's not plug-and-play."

That is the asterisk. The "85% faster" claim assumes you have already built playbooks, trained the system on your preferences, and standardised review criteria. The results when you have done that work are genuine. A 25-lawyer corporate boutique in Munich that spent 40 hours building an NDA playbook now turns NDAs same-day. A solo transactional attorney in Warsaw using Spellbook at $179 per month takes on 30% more matters — at that price, the ROI is positive if it saves a single billable hour monthly. An Am Law 100 M&A group using Luminance compressed a data room review from two weeks with eight associates to three days with two. The more documents, the higher the ROI.

The tools excel at specific things: risk flagging with what users call "surgical accuracy" on unlimited liability and auto-renewal traps, consistency enforcement so every contract gets the same scrutiny regardless of which associate reviews it, speed on predictable contract structures, and redlining assistance that generates suggested alternative language directly in Word. Where they struggle tells you more about the technology's actual maturity.

Cross-jurisdictional complexity breaks the model — and this is where European practitioners should pay close attention. An MSA governed by English law with US subsidiary guarantees and an EU data processing addendum? One practitioner in The Hague reported: "It flagged the obvious stuff but missed the choice-of-law interaction issues. We caught them in human review, but barely." A Berlin lawyer working on a German-law framework agreement with Polish and Czech subsidiaries found that "the AI treated everything as if it were governed by a single jurisdiction." For those of us working across the EU's 27 member states, this is not a marginal limitation. It is a fundamental one.

Novel deal structures with no training data get generic analysis — "The AI kept suggesting standard licensing language for something that wasn't a licence." Heavily negotiated documents confuse pattern matching — by draft five of a JV agreement, customised definitions and cross-references break the system. And context window limits mean a 200-page acquisition agreement with 50 exhibits cannot connect issues in Exhibit A to provisions in Section 12.3.

As MIT Technology Review noted: "AI reduces review time by 75-85% but can't negotiate strategy, assess business risk, or handle novel situations. It's a productivity tool, not replacement." That is the most honest summary available anywhere. For European practice, I would add: it also cannot navigate the jurisdictional complexity that is the daily reality of cross-border European work. Until it can, human judgment is not just valuable — it is irreplaceable.


The Competence Question

I watched a senior associate in Copenhagen demonstrate her team's contract review workflow last month. Upload to LegalOn, review the AI flags, approve or reject, move on. Efficient. Consistent. Fast. When I asked her what she would look for in an unusual indemnification clause if the AI did not flag it, she paused. "I'd check the playbook," she said.

That is the right answer for process. It is the wrong answer for professional development. The playbook is a codification of partner knowledge — but if associates never develop that knowledge independently, the playbook cannot evolve. You end up with a team that enforces standards without understanding them. They know the rules but not the reasoning behind them.

High-volume contract work is exactly where AI delivers the most value. It is also exactly where junior lawyers used to develop foundational skills in close reading, risk assessment, and commercial judgment. The associate who spent two tedious hours reviewing an NDA was not just completing a task. She was learning to read with precision, to notice what is missing, to develop the instinct that something does not feel right about a clause even before she can articulate why.

If we automate all the repetitive work without replacing the learning it provided, we are efficient today and incompetent tomorrow. A partner at a Nordic law firm put it to me bluntly last month: "We've saved two thousand hours this year on contract review. We've also produced associates who can't draft a limitation of liability clause from memory. I'm not sure those numbers net out the way we think."

That is the competence paradox in miniature. And it applies to every firm deploying these tools, regardless of jurisdiction.


What To Do

  1. Before buying any contract AI, document your review standards first. 54% of legal teams have no playbooks at all. The tool is only as good as the standards you feed it. Invest 20-40 hours in your first playbook before spending a single euro on software. The investment pays for itself within weeks.

  2. Start with your highest-volume, most standardised contracts. NDAs, standard vendor agreements, employment contracts. Low risk, high repetition, measurable ROI. Do not start with your most complex deal type. A mid-market firm in Helsinki told me they made the mistake of piloting their AI on a cross-border M&A and nearly abandoned the project. When they restarted with NDAs, the results were transformative.

  3. Calculate your actual ROI honestly. Current monthly hours multiplied by hourly rate, minus tool cost, multiplied by realistic — not vendor-claimed — time savings. For high-volume practices, ROI is almost certainly positive. For low-volume practices, it is marginal. For solo practitioners, Spellbook-tier pricing at $179 per month can make sense. Enterprise pricing almost never does for firms under fifty lawyers.

  4. Assign some contracts as training exercises. Have junior lawyers review manually first, then compare against the AI's output. The discrepancies — what the AI caught that the human missed, and what the human noticed that the AI did not — are precisely where learning happens. This is not inefficiency. It is investment in the judgment your firm will need in five years.

  5. Watch regulatory developments closely, on both sides of the Atlantic. California SB 574 would require attorneys to "personally review any AI generated work." The EU AI Act already requires documented AI literacy for deployers of AI systems. Whether you are in California or Cologne, the regulatory direction is clear: human oversight is not optional and will need to be demonstrated.
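The arithmetic in point 3 is simple enough to put in a few lines. Below is a minimal sketch of that calculation; the hours, rates, and savings percentages are illustrative assumptions, not figures drawn from the reviews.

```python
# Back-of-the-envelope ROI sketch for the formula in point 3.
# All inputs below are illustrative assumptions, not vendor or survey data.

def monthly_roi(review_hours: float, hourly_rate: float,
                tool_cost: float, time_saved_pct: float) -> float:
    """Hours saved valued at the billing rate, minus the tool's monthly cost."""
    hours_saved = review_hours * time_saved_pct
    return hours_saved * hourly_rate - tool_cost

# Solo practitioner: 10 review hours/month at $250/hr, Spellbook-tier
# pricing, assuming a conservative 40% saving rather than the claimed 85%.
solo = monthly_roi(review_hours=10, hourly_rate=250,
                   tool_cost=179, time_saved_pct=0.4)

# Low-volume practice: 3 review hours/month, same rate and tool cost.
low_volume = monthly_roi(review_hours=3, hourly_rate=250,
                         tool_cost=179, time_saved_pct=0.4)

print(f"Solo, high-volume: ${solo:+.0f}/month")        # clearly positive
print(f"Low-volume:        ${low_volume:+.0f}/month")  # marginal
```

Even at less than half the advertised time saving, the high-volume case clears the tool cost comfortably, while the low-volume case barely does, which is the honest-ROI point the checklist is making.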


Quick Reads

  • LegalOn survey: 95% of legal teams have playbook gaps, 54% have none at all. Next issue digs into the playbook problem — it is a bigger issue than most firms realise, and the difference between successful and failed AI deployments almost always traces back to this gap.

  • Contract AI adoption among Am Law 100 firms sits at roughly 60%. Mid-market adoption is around 25% and growing. European adoption data is harder to find, but anecdotally tracks behind the US by 12-18 months, with Nordic countries leading and Southern Europe lagging.

  • Spellbook launches "Library" feature for precedent-based drafting — worth watching if you want AI that learns from your own document history rather than generic training data. Particularly relevant for firms with distinctive drafting styles.

  • Artificial Lawyer on contract AI's reliability problem — a useful exploration of what happens when the tool gets it wrong on a live deal. Error tolerance in contract review differs from error tolerance in legal research — a missed clause in a signed contract cannot be corrected by filing an amended brief.


One Question

If your AI flags 47 issues in a contract but you have not defined which ones matter, have you saved time — or just replaced one kind of overwhelm with another?


TwinLadder Weekly | Issue #3 | March 2025

Helping European professionals build AI competence through honest education.