TwinLadder Weekly
Issue #23 | January 2026
Editor's Note
Here is a number that should bother you: 80% of Am Law 100 firms have established AI governance boards. These are not discussion groups. They are formal oversight structures with authority over tool procurement, policy, training, and incident response.
Meanwhile, estimated governance adoption at mid-market firms is around 25%. At small firms, roughly 10%. Solo practitioners, under 5%.
I understand why. Large firms have dedicated legal operations staff, technology budgets measured in millions, and client pressure that makes governance urgent. A 30-lawyer firm has none of those. What it does have is the same accountability standard. Courts do not ask about firm size before imposing sanctions. The Butler Snow case -- a large, sophisticated firm sanctioned for AI hallucinations despite having resources for proper safeguards -- proved that even enterprise governance is not protection without enforcement. Having no governance at all is becoming a liability.
The European dimension makes this more urgent, not less. Article 4 of the EU AI Act does not ask whether your firm has a governance board. It asks whether you can demonstrate that staff who deploy or operate AI systems have sufficient AI literacy. For European firms, governance is not a best practice conversation. It is a compliance conversation. And it has been since February 2025.
This issue is about closing that gap. Not with enterprise complexity, but with what actually works at your scale.
Why Governance Matters Now -- And What It Actually Requires
[HIGH CONFIDENCE]
Let me strip away the corporate language. AI governance is not about boards and committees and frameworks. It is about five questions your firm needs to answer, in writing, before something goes wrong.
One: What tools are your people using? In most mid-market firms, the honest answer is "we do not know." Individual lawyers have adopted tools on their own -- ChatGPT subscriptions, Claude accounts, one partner who got Harvey access through a client relationship. Without an inventory, you cannot assess risk, ensure confidentiality compliance, or respond coherently when a court or client asks about your AI practices.
Two: How are AI outputs verified? Reported hallucination cases grew from 120 to more than 660 in 2025. In Johnson v. Dunn, the court observed that monetary sanctions were "proving ineffective" and signalled that something more is needed. Courts now evaluate whether the firm had AI policies, whether verification protocols were followed, whether training was provided, and who supervised the work. Documented verification procedures are protective. Absent procedures are aggravating.
Three: Who is responsible? When an associate submits AI-assisted work product, who reviewed it? The supervising attorney bears the professional obligation. But if there is no policy defining supervision requirements for AI-assisted work, that obligation is ambiguous until something fails -- and then it becomes contentious.
Four: Is confidential information protected? Over 30 states have issued guidance on AI ethics, and confidentiality obligations apply to AI inputs in every jurisdiction. If your lawyers are pasting client information into general-purpose AI tools without understanding how those tools handle data, your firm has a confidentiality problem. Not a hypothetical one. A current one.
Five: Are your people competent? The ABA's Task Force report says adoption has surpassed understanding. AI competence is now professional competence. That means training -- not as professional development, but as competence maintenance.
| Governance by Firm Size | What You Need | Time Investment |
|---|---|---|
| Solo practitioner | Written tool inventory, verification protocol, data policy, quarterly self-review | 1 hour to write, 1 hour/quarter to review |
| Small firm (2-10) | Designated AI coordinator, written policy, annual all-attorney review | 1-2 hours/month |
| Mid-market (10-50) | Committee of 3-5, monthly meetings, documented policy, training coordination, incident review | 2-4 hours/month from committee |
| Large firm (50+) | Formal governance board, dedicated budget, tool procurement authority, compliance monitoring | Enterprise-scale investment |
These five questions can be answered in a one-page document for a solo practitioner or a ten-page policy for a 50-lawyer firm. The form does not matter. What matters is that answers exist in writing before they are needed.
Here is what governance at different scales actually looks like. A solo practitioner needs a written statement of which tools they use, how they verify outputs, what data they will not input, and when they last reviewed their approach. Time investment: an hour to write, an hour per quarter to review.
A small firm (2-10 lawyers) needs a designated AI coordinator, a written policy, and an annual review with all attorneys. The coordinator tracks bar guidance, coordinates training, and reviews any AI-related concerns. Time investment: one to two hours per month.
A mid-market firm (10-50 lawyers) needs a committee of three to five members, monthly meetings, a documented policy, training coordination, and incident review capability. Time investment: two to four hours per month from committee members.
Any of these is achievable. None requires enterprise budgets. The barrier is not resources. It is priority.
[MODERATE CONFIDENCE]
For European firms, governance serves a dual purpose that American firms do not share. Beyond the professional obligation (which is universal), Article 4 of the EU AI Act creates a regulatory obligation to ensure AI literacy among all staff deploying or operating AI systems. Your governance framework needs to address both.
The practical implication: a European firm's AI governance policy should include not just the five questions above, but also a training register documenting who has received AI literacy training, what the training covered, when it was delivered, and how competency was assessed. This is not best practice. It is what Article 4 requires. And the firms that build this into their governance now -- rather than treating it as a separate compliance exercise later -- will find it far less burdensome.
National implementation varies. Germany's approach through the Federal Ministry for Digital and Transport emphasises technical risk assessment. France's CNIL has focused on data protection intersections. The Netherlands links AI literacy to existing GDPR accountability obligations. Latvia, where I am based, is working through the Ministry of Environmental Protection and Regional Development with guidance still emerging. But the Article 4 obligation does not wait for national guidance to crystallise. It is in force now, and "we were waiting for clarity" is not a defence.
The Competence Question
A senior partner at your firm drafts a client memorandum using AI assistance. The tool generates a plausible analysis with four case citations. The partner, trusting the tool and pressed for time, submits the memorandum without verifying the citations. One is fabricated.
The client discovers the error when opposing counsel points it out. The client sues for malpractice. During discovery, the opposing side asks: Does the firm have an AI use policy? Was the partner trained on AI verification? Are there documented procedures for checking AI outputs?
The answers are no, no, and no. Your malpractice insurer asks the same questions. The answers have not improved.
This is not about the hallucinated citation. Citation errors existed before AI. It is about the absence of the systems that would have caught it -- and the fact that courts and insurers are beginning to distinguish between "an error occurred" and "an error occurred because no reasonable safeguards existed." Governance is not about preventing every mistake. It is about demonstrating that you took reasonable steps to prevent them. Without it, you are defending not just the error but the absence of any process that might have caught it.
What To Do
- Inventory your firm's AI use this week. Ask every lawyer: what AI tools do you use, for what purposes, and how do you verify outputs? You need this information before you can write a policy.
- Write a one-page AI policy within 30 days. Cover five things: approved tools, verification requirements, confidentiality protections, supervision standards, and training expectations. Perfection is the enemy of progress -- a simple policy updated quarterly is better than a comprehensive policy that never gets written.
- Designate responsibility. Someone at your firm needs to own AI governance. At a small firm, that is one attorney with a standing quarterly review. At a larger firm, it is a committee with regular meetings. Assign the role now.
- Confirm your insurance coverage. Contact your malpractice or professional indemnity insurer in writing. Ask specifically whether AI-assisted work product is covered and whether they have requirements for AI governance. Keep their response in your governance file.
- Build your Article 4 training register. If your firm operates in or serves clients within the EU, create a simple document tracking AI literacy training for each staff member who deploys or operates AI systems. Record the date, content covered, and method of assessment. This is a compliance requirement under Article 4, and it is far easier to build as you go than to reconstruct retrospectively when an enforcement inquiry arrives.
Quick Reads
- 80% of Am Law 100 firms have AI governance boards -- formal structures with real authority over AI tool procurement, policy, and training. The mid-market gap is a liability, not a luxury.
- Harvard Law School Forum on Corporate Governance examines AI governance obligations beyond law firms -- essential reading for practitioners advising corporate boards on their own Article 4 obligations.
- EDRM analysis of sanctions for AI hallucinations shows courts increasingly evaluating whether governance and verification systems existed -- the absence of systems is now an aggravating factor.
- The EU AI Act Article 4 has been in force since February 2, 2025. Full enforcement of remaining provisions begins August 2026. Governance planning for European firms is not forward-looking -- it is overdue.
One Question
If a court asks tomorrow whether your firm has an AI governance policy, what is your answer -- and if you practise in Europe, can you also show the Article 4 compliance record that the regulation already requires?
TwinLadder Weekly | Issue #23 | January 2026
Helping lawyers build AI capability through honest education.
Included Workflow
AI Governance Board Starter Kit
Implementation guide for establishing formal AI governance. Covers five phases -- Foundation, Structure, Policy Development, Implementation, and Operations -- with success metrics for each.
Start this workflow
