TwinLadder Weekly
Issue #7 | May 2025
Editor's Note
On May 6, 2025, the Solicitors Regulation Authority approved an entity that has no associates, no paralegals, no reception desk, and no hourly rate. Garfield.Law is now authorised to practise as a law firm in England and Wales, making it the first purely AI-driven firm to receive regulatory approval anywhere in the common law world.
I have been practising long enough to know that "landmark moments" are usually oversold. This one is not. Not because Garfield will replace traditional firms — it will not, and it does not try to. But because the SRA has now established that AI-native legal service delivery is a permissible model. The regulatory precedent exists. Others will follow.
What I find most revealing is what the SRA chose to focus on during eight months of review — and what it did not treat as a barrier. For those of us advising firms across the continent, from Riga to Rotterdam, the question is no longer whether AI-native legal services are possible. It is how quickly European regulators will decide whether to lead, follow, or obstruct.
What the SRA Actually Approved (And What It Did Not)
The Scope and the Scrutiny
The SRA's authorisation is specific. Garfield.Law handles small claims debt recovery up to ten thousand pounds in England and Wales. It generates procedural documents — letters, claim forms, default judgment applications. It does not provide general legal advice, cite case law, handle complex litigation, or operate outside its home jurisdiction.
Garfield co-founder Philip Young described the SRA process as "very exhaustive." The regulator examined the technical architecture (how outputs are generated, what hallucination safeguards exist), operational processes (confidentiality, conflicts, audit trails), accountability frameworks (liability, insurance, named solicitors), and risk management (client approval workflows, escalation procedures). Young reviews all AI outputs during the launch phase, with plans to move to sampling as the system proves itself.
The SRA's willingness to engage is the story within the story. They did not default to rejection. SRA chief executive Paul Philip stated that "with so many people and small businesses struggling to access legal services, we cannot afford to pull up the drawbridge on innovations that could have big public benefits." A regulator that views innovation through an access-to-justice lens, rather than a protectionist one, changes the calculus for every AI legal service provider considering the European market.
Three Models Emerge
The International Bar Association noted that this represents "a regulator explicitly sanctioning a business model where AI, rather than human labour pyramids, constitutes the primary mechanism of service delivery." That is a significant conceptual shift. The traditional law firm model depends on leverage — partners supervising associates whose labour generates margin. Garfield eliminates this architecture entirely. The economics change from fifteen hundred pounds per matter to fifty pounds through trial. That is not a 10% efficiency gain; it is a cost reduction of more than 95% for equivalent procedural work.
Three distinct AI-native law firm models are now emerging:
| Model | Example | Approach | Target |
|---|---|---|---|
| Regulatory-first | Garfield.Law | Narrow scope, direct SRA authorisation | Consumer & SME debt recovery |
| Acquisition-first | Lawhive + Woodstock Legal | AI platform acquires regulated entity | Broader consumer, GV-backed at £44M |
| Enterprise-first | Norm Law + Blackstone ($50M) | AI-native from inception | Institutional clients, $30T+ in assets |
Each model has merit. None has proved sustainable at scale. But all three exist now, where none did eighteen months ago. The speed of development is itself significant: moving from concept to regulatory approval to institutional investment in under two years suggests this is not a passing experiment.
The European Regulatory Comparison
For those watching from the European continent, the UK's approach offers a critical comparison. The SRA's willingness to authorise Garfield within eight months contrasts with the EU AI Act's phased compliance timeline extending to August 2026 for most obligations. The UK's sector-specific, principles-based approach through existing regulators enables faster innovation than the EU's cross-sectoral risk classification framework.
| UK Approach | EU Approach |
|---|---|
| SRA authorised Garfield in 8 months | AI Act phased timeline through August 2026 |
| Sector-specific, principles-based | Cross-sectoral risk classification |
| Innovation through existing regulators | New oversight structures being built |
| Access-to-justice framing | Consumer protection + fundamental rights framing |
Neither is inherently superior — they reflect different regulatory philosophies about how to balance innovation against consumer protection. But for practitioners advising clients on where to develop and deploy AI legal services, the jurisdictional differences matter enormously.
Liga Paulina, who advises on Latvian and EU regulatory matters, notes that Article 4 of the EU AI Act — the AI literacy obligation already in force since February 2, 2025 — adds another dimension. Any firm deploying AI systems, whether Garfield-style or traditional, must ensure staff have "a sufficient level of AI literacy." The UK has no equivalent obligation. European firms face a dual challenge: innovating with AI while simultaneously proving their people understand it.
The Competence Question
Clio CEO Jack Newton predicted that "the billable hour model cannot survive" AI-driven productivity gains. I think he is half right. The billable hour cannot survive for procedural, high-volume, predictable work. Garfield just proved that. But complex, bespoke, judgment-intensive work — the kind that requires a lawyer to sit with ambiguity, weigh competing considerations, and make a call that no algorithm would dare attempt — that work not only survives but becomes more valuable as routine work gets automated away.
The competence question for mid-market practitioners is not whether AI-native firms will take your work. It is whether you have identified which of your work is procedural (and therefore vulnerable) and which is genuinely advisory (and therefore defensible). A firm that charges fifteen hundred pounds for small claims debt recovery is exposed. A firm that charges fifteen hundred pounds for advice on whether to pursue a claim at all, given the commercial relationship, the counterparty's likely response, and the client's broader strategic interests — that firm is offering something no AI can replicate.
Lawhive's AI paralegal "Lawrence" scored 81% on the SQE, well above the 55% pass threshold. If your associates' value proposition is "I can pass the professional exams," that is no longer a differentiator. Their value must lie in the judgment, creativity, and client understanding that examinations do not test.
The founding story behind Garfield deserves mention here. Philip Young's brother-in-law, a plumber, could not economically pursue unpaid invoices through traditional channels. Young, despite being a senior City litigator, describes himself as "a very big nerd" who learned to program on ZX Spectrums in the 1980s. He built what did not exist. The most useful AI legal tools may not come from legal tech vendors at all. They may come from practitioners who understand both the problem and the technology — who have sat across from the plumber and understood that the barrier is not legal complexity, it is economics.
What To Do
- Audit your practice for procedural vulnerability. Which matters follow predictable workflows with defined outcomes? Those are the segments AI-native competitors will target first. Have an honest conversation about pricing and positioning.
- Before recommending any AI legal service to clients, verify regulatory authorisation and check named solicitors. The copycat problem is real — not all AI legal tools will have eight months of regulatory scrutiny. Ask about hallucination safeguards, insurance, and escalation procedures.
- Consider scope communication carefully. Garfield's narrow scope is a feature, but clients may not understand the boundaries. When AI handles the letter before action and the debtor responds with a counterclaim alleging fraud, the client needs to know that the AI cannot help with what comes next. Clear scope communication prevents clients from feeling abandoned mid-dispute.
- Study the three emerging models (regulatory-first, acquisition-first, enterprise-first) and consider which competitive dynamics affect your practice area and client base. European firms should pay particular attention to how the EU AI Act's requirements may shape which models prove viable on the continent.
- Read Philip Young's interview at Geek Law Blog. His founding story is instructive about where the most useful AI legal tools will come from: practitioners who understand both the workflow and the technology.
Quick Reads
- Legal aid organisations adopt AI at 74% — double the 37% rate across the general legal profession. Access-to-justice organisations understand the efficiency imperative better than most commercial firms.
- Legal Cheek's coverage of the SRA approval captures the profession's mixed reaction: enthusiasm for access to justice, anxiety about market disruption, scepticism about whether the model can scale.
- Thomson Reuters data: 26% of legal organisations are actively integrating generative AI, and 45% expect AI to be central to their workflows within a year. The adoption curve is steepening — and European firms are no exception.
- Civil justice statistics confirm that 66% of small claims result in default judgment — reinforcing that Garfield is automating a process where the outcome is often predetermined.
One Question
If AI-native firms take the procedural work that traditional firms cannot profitably serve, is that disruption — or is it the profession finally admitting that some legal work was never worth what we charged?
Helping European professionals build AI competence through honest education.
