TwinLadder Weekly
Issue #24 | January 2026
Editor's Note
I am wary of predictions issues. Most fall into two categories: the safe ones everyone already knows, and the bold ones nobody stakes reputation on.
This year I cannot start with predictions at all. I have to start with a deadline. EU AI Act Article 4 — mandatory AI literacy for all staff deploying or operating AI systems — took effect on February 2, 2025. Not August 2026. Not "coming soon." It is already law. And most firms I speak with across Europe have not meaningfully responded.
That silence is the story. Not what might happen in 2026, but what has already happened and been ignored.
After addressing the deadline, I offer three predictions. Fewer than the 85 that National Law Review collected. Stronger opinions, deeper evidence. I am willing to be wrong — that is what makes them worth reading.
The Deadline That Already Passed
EU AI Act Article 4: AI Literacy Is Not Optional
This is not a prediction. It is a compliance obligation that entered force on February 2, 2025.
Article 4 requires that all providers and deployers of AI systems ensure their staff have "a sufficient level of AI literacy." For law firms using AI-powered research, drafting, or review tools — which now includes most firms of any size — this means documented training, competency assessments, and ongoing education. Not aspirations. Documentation.
The implementation varies by member state, and that divergence matters. Germany's approach through the Federal Ministry for Digital and Transport emphasises technical risk assessment. France's CNIL has focused on data protection intersections. The Netherlands' Autoriteit Persoonsgegevens links AI literacy to existing GDPR accountability obligations. Latvia, where we are based, is working through the Ministry of Environmental Protection and Regional Development with guidance still emerging.
The practical question for every European firm: can you demonstrate, today, that your staff have received AI literacy training that meets Article 4 requirements? If the answer is no, you are already non-compliant — not with something coming, but with something that arrived eleven months ago.
For firms outside the EU, the extraterritorial scope applies wherever you serve European clients or handle data subject to EU jurisdiction. American and UK firms with European client bases cannot ignore this.
What to do now: Conduct an Article 4 gap assessment this month. Document existing AI training. Identify which staff members deploy or operate AI systems. Build a compliance record before enforcement actions begin — because when they begin, "we were planning to" is not a defence.
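For firms that want somewhere concrete to start, here is a minimal sketch of what a machine-readable compliance record might look like. The structure and field names are illustrative assumptions on our part, not a template from the Act; member-state guidance will determine what documentation regulators actually expect.

```python
# A minimal sketch of an Article 4 compliance record. Field names and
# structure are illustrative assumptions, not a regulatory template;
# check member-state guidance for the documentation actually expected.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class StaffAIRecord:
    name: str
    role: str
    ai_systems_operated: list[str]          # e.g. research or drafting tools
    trainings: list[tuple[str, date]] = field(default_factory=list)
    last_assessment: date | None = None

    def has_gap(self) -> bool:
        """Flag staff who operate AI systems without documented training."""
        return bool(self.ai_systems_operated) and not self.trainings

records = [
    StaffAIRecord("A. Associate", "associate", ["research assistant"],
                  [("AI literacy basics", date(2025, 3, 10))], date(2025, 9, 1)),
    StaffAIRecord("B. Paralegal", "paralegal", ["contract review tool"]),
]
for r in records:
    if r.has_gap():
        print(f"Gap: {r.name} operates AI systems with no documented training")
```

Even a spreadsheet with these columns is a compliance record; the point is that "sufficient AI literacy" must be demonstrable per person, per system, with dates.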
Three Predictions for 2026
[HIGH CONFIDENCE — 80%]
Prediction One: Competence Debt Will Become the Profession's Defining Challenge
This is the prediction I am most confident about, and the one least discussed. While the profession obsesses over which AI tools to buy, a quieter crisis is building: the systematic erosion of the skills required to supervise those tools.
The evidence is now peer-reviewed and cross-disciplinary. A field experiment with roughly 1,000 students published in PNAS (Bastani et al., 2025) found that students using unrestricted AI assistance solved 48% more problems correctly during practice — but scored 17% lower on independent assessments without AI access. Performance improved. Understanding declined.
| AI-Assisted Performance | Independent Competence |
|---|---|
| +48% problem-solving during practice (Bastani, PNAS 2025) | -17% on assessments without AI |
| 66-83% accuracy across legal AI tools (Stanford REG Lab) | 17-34% hallucination rate in the same tools |
| 100x productivity gains reported by Am Law 100 firms | 0 firms planning to reduce attorney headcount |
Read that last row again. If productivity truly increased a hundredfold, why is no firm reducing headcount? Either the productivity claims are exaggerated, or firms recognise that human judgment remains irreplaceable. I suspect both are true.
The mechanism is well established. Lisanne Bainbridge described it in 1983 in "Ironies of Automation" — a paper with over 4,700 citations that the legal profession has largely ignored. Automate a task and the human capacity to perform it manually predictably degrades. When the automation fails, it fails into the hands of people who are now less competent than they were before the automation was introduced.
In medicine, The Lancet (2025) documented the effect precisely: when an AI polyp-detection system was removed from gastroenterologists who had worked with it for 18 months, their adenoma detection rates fell 21%. Not worse doctors. Doctors who had stopped practising the perceptual skills the AI had been performing for them.
In law, the pipeline problem is acute. The tasks that AI absorbs first — document review, legal research, due diligence, contract analysis — are precisely the tasks that build the judgment required to supervise legal work. Klarna's experience is instructive: after replacing significant customer service capacity with AI, they acknowledged that the efficiency gains came with a competence cost that required deliberate intervention to address.
Gartner's VP of Research, Mary Mesaglio, projects that 40% of enterprise applications will feature task-specific AI agents by end of 2026. She frames this as progress. I think she is half right. It is progress in capability. It is also an acceleration of competence debt unless firms deliberately preserve training structures.
The profession will split into firms that treat competence preservation as strategic infrastructure — structured training, prediction-first protocols, periodic AI-free assessments — and firms that optimise for throughput until a malpractice claim reveals what they lost.
[HIGH CONFIDENCE — 75%]
Prediction Two: Agentic AI Will Transform Legal Workflows — and Most Firms Will Not Be Ready
The shift from single-prompt AI to autonomous multi-step agents is the technical development that will most reshape legal practice in 2026. This is not incremental improvement. It is a category change.
Litera identifies agentic AI as the defining trend: autonomous systems that execute multi-step tasks — researching a legal question, drafting a memo, checking citations, and formatting the output — without human intervention between steps. LexisNexis's Protege targets 15-20% task automation by 2028. Technologies like OpenClaw are enabling integration across data sources and applications that previously required manual coordination.
The European regulatory response is more advanced than the American one. The EU AI Act's risk-based framework already contemplates autonomous AI systems and imposes specific transparency and oversight requirements. The UK's approach through the AI Safety Institute provides complementary guidance. In the US, the landscape remains a patchwork: the Colorado AI Act takes effect in June 2026, but federal regulation remains fragmented.
| European Regulatory Position | US Position |
|---|---|
| EU AI Act: mandatory, risk-based, August 2026 full enforcement | Colorado AI Act: June 2026, single state |
| Article 4 AI literacy: already in force (Feb 2025) | No federal AI literacy requirement |
| GDPR conditioning: 8 years of compliance infrastructure | Ad hoc state-by-state approach |
| SRA authorised Garfield.Law within structured process | Market-driven adoption, regulate later |
Here is the competence paradox sharpened. Agentic AI does not merely answer questions — it performs entire workflows. When a junior associate's research, drafting, and citation-checking are performed by an agent, what remains of the training pipeline? The associate reviews the output. But reviewing output you have never produced yourself is a fundamentally different — and weaker — form of learning than producing it under supervision.
The firms that deploy agentic AI thoughtfully will use it to augment, not replace, the judgment-building process. The firms that deploy it as a cost reduction tool will discover, in two to three years, that they have associates who can operate AI systems but cannot practise law without them.
My advice: before deploying any agentic AI workflow, map which human competencies it displaces and build explicit training structures to preserve those competencies. The efficiency gain is real. The competence cost is also real. Ignoring either is negligent.
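To make the checkpoint idea concrete, here is a minimal sketch of a checkpoint-gated workflow. The run_agent_step and human_signoff functions are hypothetical placeholders, not any vendor's API; the structural point is that a recorded human decision sits between every autonomous step.

```python
# A minimal sketch of a checkpoint-gated agentic workflow. run_agent_step()
# and human_signoff() are hypothetical placeholders, not a vendor API; the
# point is the structure: a logged human decision between autonomous steps.
from datetime import datetime, timezone

def run_agent_step(step: str, context: dict) -> str:
    """Stand-in for an actual agent call (research, draft, cite-check)."""
    return f"<output of {step}>"

def human_signoff(step: str, output: str) -> bool:
    """In practice this routes to a supervising lawyer, not a console prompt."""
    return input(f"Approve output of '{step}'? [y/n] ").strip().lower() == "y"

audit_log: list[dict] = []
context: dict[str, str] = {}
for step in ("research question", "draft memo", "check citations"):
    output = run_agent_step(step, context)
    approved = human_signoff(step, output)
    audit_log.append({
        "step": step,
        "approved": approved,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    })
    if not approved:
        break                    # escalate to manual work; do not continue
    context[step] = output
```

The audit log doubles as the governance documentation discussed under Prediction Three: it records who reviewed what, and when.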
[MEDIUM CONFIDENCE — 50%]
Prediction Three: The First Major AI Malpractice Verdict Will Arrive
I assign this roughly 50% probability for 2026 — not because the trajectory is uncertain, but because litigation timing depends on jurisdictional luck and judicial inclination.
The escalation is documented. Hallucination cases grew from 120 to 660+ in 2025, accelerating from two per week to nearly five per day. In Johnson v. Dunn, the court found that monetary sanctions were "proving ineffective". The Butler Snow case showed that even large, well-resourced firms file hallucinated citations. In Australia, a solicitor was prohibited from unsupervised practice for two years. In Canada, Ko v. Li imposed contempt of court sanctions.
The pattern is clear: warnings → sanctions → professional consequences → civil liability. We are deep in the professional consequences phase.
| Stanford REG Lab Findings | Profession's Response |
|---|---|
| Lexis+ AI: 17% hallucination rate | 79% of legal professionals use these tools at scale |
| Westlaw AI-Assisted Research: 34% hallucination rate | No Am Law 100 firm has published accuracy benchmarks |
| 5 queries/day at 17% = ~1 hallucinated response daily | Most firms have no error-tracking procedures |
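The arithmetic behind that last row deserves to be explicit. A back-of-envelope sketch, assuming five research queries a day, the 17% lower-bound rate, and independent errors:

```python
# Back-of-envelope exposure maths behind the table's last row, assuming
# five queries a day, the 17% lower-bound rate, and independent errors.
queries_per_day = 5
rate = 0.17

expected_daily = queries_per_day * rate                 # 0.85, i.e. ~1/day
p_hit_day = 1 - (1 - rate) ** queries_per_day           # ~61%
p_clean_week = (1 - rate) ** (queries_per_day * 5)      # ~0.9%

print(f"Expected hallucinations per day: {expected_daily:.2f}")
print(f"P(at least one in a day): {p_hit_day:.0%}")
print(f"P(an entirely clean five-day week): {p_clean_week:.1%}")
```

At the 34% Westlaw figure, the chance of a clean week falls to roughly 0.003%. Verification is not an occasional safeguard; it is the daily baseline.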
The European professional indemnity framework will shape this differently. EU member states' professional liability regimes, combined with the AI Act's transparency obligations, create a dual accountability track that does not exist in the US. A European firm with documented AI governance, Article 4 compliance, and systematic verification procedures will be in a fundamentally stronger position than one without — not just ethically, but defensively.
When the verdict arrives, expect three consequences. First, courts will confirm that AI-assisted work product carries the same professional standard as any other work product. Second, insurance carriers will impose AI-specific governance requirements. Third, the distinction between firms with documented governance and firms without will start to determine outcomes.
Build your governance now, while it is proactive rather than reactive.
The Competence Question
It is January 2026. A client — a mid-sized European manufacturer deploying AI in its quality control processes — asks you whether their systems comply with the EU AI Act. Not the broad strokes. The specifics: risk classification, transparency documentation, human oversight protocols, the Article 4 literacy obligations for their staff.
You reach for your AI research tool. It returns a confident, well-structured answer. But something nags. You have not read the implementing regulations yourself. You have not tracked which member state guidance applies to your client's operations. You have relied on the tool for three similar questions this quarter, and each time the answer seemed right. You approved it. You moved on.
Now ask yourself: did you verify those previous answers, or did you trust them? And if the tool hallucinated a regulatory reference — the kind of plausible-sounding citation that Stanford's research shows appears in 17-34% of queries — would you have caught it?
The competence question for 2026 is not whether you use AI. It is whether you are still capable of evaluating what it tells you.
What To Do
Conduct an Article 4 gap assessment by end of February. Document which staff deploy or operate AI systems. Identify training gaps. Build a compliance record now — enforcement will not announce itself.
Map your competence dependencies. For each AI tool your firm uses, identify which human skills it displaces. Where skills are atrophying, institute deliberate practice: periodic AI-free research exercises, structured review protocols, junior associate training that requires producing work before reviewing AI output.
Benchmark your AI tools' accuracy. Run 50 queries where you already know the answer. Track hallucination rates by tool and by practice area. Share results with your team. If you are not measuring error rates, you are guessing about reliability.
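A minimal sketch of that benchmark tally follows. The tool names, practice areas, and results are illustrative placeholders; the only real input is a lawyer's verdict on whether each answer held up against the known-correct one.

```python
# A minimal sketch of the 50-query benchmark tally. Tool names, practice
# areas, and results below are illustrative placeholders; the real input
# is a lawyer's verdict on each answer against the known-correct one.
from collections import defaultdict

# Each record: (tool, practice_area, hallucinated?)
results = [
    ("ToolA", "employment", False),
    ("ToolA", "employment", True),
    ("ToolB", "tax", False),
    # ... remaining known-answer queries
]

tallies: dict[tuple[str, str], list[int]] = defaultdict(lambda: [0, 0])
for tool, area, hallucinated in results:
    tallies[(tool, area)][0] += int(hallucinated)
    tallies[(tool, area)][1] += 1

for (tool, area), (errors, total) in sorted(tallies.items()):
    print(f"{tool} / {area}: {errors}/{total} hallucinated ({errors / total:.0%})")
```

Per-practice-area breakdowns matter: the Stanford work suggests error rates vary sharply by question type, so a single firm-wide average can hide a dangerous pocket.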
Prepare for agentic AI before deploying it. Map any multi-step AI workflow against the competencies it displaces. Establish human checkpoints. Do not automate a process you cannot perform manually.
Review your professional indemnity coverage this quarter. Confirm AI-assisted work product is covered. Ask your insurer what governance documentation they expect. Do this before the landmark malpractice case, not after.
Quick Reads
- National Law Review collected 85 predictions for AI and law in 2026 — notably, no Am Law 100 firm anticipates reducing attorney headcount despite reported 100x productivity gains. The gap between those numbers deserves scrutiny.
- Artificial Lawyer's 2026 predictions report that 60%+ of corporate legal teams expect to reduce outside counsel reliance, driven by in-house AI adoption. European in-house teams are moving faster than American counterparts.
- Litera identifies agentic AI as the defining trend for 2026 — multi-step autonomous execution rather than single-prompt responses. Their prediction aligns with Gartner's 40% enterprise agent adoption forecast.
- The Bastani et al. PNAS study (2025) provides the strongest evidence yet for the competence paradox: 48% better with AI, 17% worse without it. Essential reading for any firm designing AI training programmes.
- Clio's $1 billion acquisition of vLex signals that bundled AI in practice management is coming to the mid-market. Test quality carefully before relying on it.
One Question
If your staff took an AI literacy assessment today — as Article 4 already requires — what percentage would pass? And what does that number tell you about the gap between your AI ambitions and your actual competence?
TwinLadder Weekly | Issue #24 | January 2026
Helping lawyers build AI capability through honest education.
