TwinLadder Weekly

Issue #19 | November 2025


UK Bar Council Updates AI Guidance: What Changed

Updated guidance emphasizes mandatory verification. The stakes are clear. Here's what the November 2025 update means for barristers—and what lessons apply globally.


The Update

On November 25, 2025, the Bar Council published updated guidance on generative AI for barristers. The document—"Considerations when using ChatGPT and generative artificial intelligence"—represents an evolution of the original January 2024 paper rather than a wholesale rewrite.

But evolution matters. The timing isn't coincidental.

In the months preceding the update, courts issued sanctions in AI hallucination cases globally. The MyPillow case made headlines. Ko v. Li established Canadian precedent. Lord Justice Birss warned about AI misuse from the bench.

The Bar Council's message is clear: understand these tools or risk professional consequences.

What's New

The November 2025 guidance expands on the January 2024 original in several key areas:

1. Broader Tool Coverage

January 2024: Referenced ChatGPT and Google's Bard primarily.

November 2025: Explicitly covers Google's Gemini, Perplexity, Harvey, and Microsoft Copilot, along with legal-specific LLM tools.

Why it matters: Barristers can't claim ignorance about purpose-built legal AI. The guidance makes clear that all LLM-based tools carry similar risks regardless of their marketing.

2. Recent Case Law Integration

New addition: The guidance now references recent case law on AI misuse by lawyers, providing concrete examples of what can go wrong.

The context: High Court rulings, including Lord Justice Birss's comments on Garfield.Law and warnings about AI verification failures, are now part of the professional expectation framework.

3. Stanford Study Citation

New addition: The guidance references academic research on AI legal research tool reliability—specifically the Stanford study showing 17%+ hallucination rates even in purpose-built legal AI tools.

Why it matters: The Bar Council isn't relying on anecdote. The guidance acknowledges that even the best legal AI tools hallucinate at rates that require verification.

4. Enhanced Data Handling Emphasis

Evolution: While the original guidance warned about confidentiality, the 2025 update adds emphasis on:

  • Understanding how each specific tool handles inputs
  • Reviewing terms and conditions for compatibility with Core Duty 6, rC15.5, and data protection law
  • Considering cyber risk from AI tool usage

Why it matters: "I didn't read the terms of service" isn't a defense against professional conduct violations.

Core Duties Implications

The guidance connects AI use to existing professional obligations:

  • CD2 (Act in the best interests of each client): AI errors harm clients; verify before use
  • CD3 (Act with honesty and integrity): Don't submit work you haven't verified
  • CD4 (Maintain your independence): Don't outsource judgment to algorithms
  • CD6 (Keep client affairs confidential): Don't input privileged information without safeguards
  • CD7 (Provide a competent standard of work): Understand AI limitations before using

The guidance makes explicit what was implicit: professional duties apply regardless of which tool a barrister uses. AI doesn't create exceptions to fundamental obligations.

The Key Warning

The guidance's most important paragraph:

"Crucially, barristers must understand that LLMs, while sophisticated, are not infallible. They are predictive tools, prone to generating plausible but entirely false information—a phenomenon known as 'hallucinations.' LLMs are not a substitute for human legal expertise, critical judgment or diligent verification. The ultimate responsibility for all legal work remains with the barrister."

This isn't merely guidance; it's a warning. The Bar Council is documenting that barristers have been told. Future disciplinary proceedings can reference this notice.

Barbara Mills KC's Statement

The Chair of the Bar Council issued a statement accompanying the guidance noting that recent cases have emphasized "the dangers of the misuse by lawyers of artificial intelligence, particularly large language models, and its serious implications for public confidence in the administration of justice."

She continued: "We recognise that the growth of AI tools in the legal sector is inevitable and occurring at a fast pace. As the guidance explains, the best-placed barristers will be those who make the efforts to understand these systems so that they can be used with control and integrity."

The message: AI adoption is inevitable. Professional responsibility is non-negotiable.

January 2024 vs. November 2025: What Changed

  • Tools covered — January 2024: ChatGPT, Bard. November 2025: all LLMs, including Harvey, Copilot, Gemini.
  • Case law references — January 2024: none (too early). November 2025: specific UK and international cases.
  • Academic citations — January 2024: limited. November 2025: Stanford hallucination study included.
  • Data handling — January 2024: general warnings. November 2025: specific guidance on settings, T&Cs, cyber risk.
  • Tone — January 2024: informational. November 2025: warning with documented expectations.
  • Contextual framing — January 2024: emerging technology. November 2025: established risk requiring management.

Tool Review: AI Guidance Across Jurisdictions

Comparing regulatory approaches to legal AI

UK Bar Council (November 2025)

Approach: Principles-based guidance; not formal BSB rules

Key Features:

  • Connects AI use to existing Core Duties
  • References specific failure cases
  • Emphasizes verification obligation
  • Addresses data handling and confidentiality

Enforcement: Through existing professional conduct framework

Assessment: Comprehensive guidance that documents expectations without creating new rules

Rating: 4/5 for clarity and practicality


SRA (Solicitors Regulation Authority)

Approach: Complementary guidance for solicitors (separate from barristers)

Key Features:

  • Similar principles-based approach
  • Authorized Garfield.Law under existing framework
  • Focus on risk management and client communication

Notable Action: First AI-only law firm authorization (May 2025)

Assessment: Demonstrates that regulation can enable innovation while maintaining standards

Rating: 4/5 for balancing innovation and protection


US State Bars (Varied)

Approach: Inconsistent across jurisdictions; some guidance, some rules, some silence

Key Features:

  • New York and California issued guidance
  • Some courts require AI use disclosure
  • No unified national approach

Challenge: Practitioners must navigate patchwork of requirements

Assessment: Fragmented approach creates compliance complexity

Rating: 2.5/5 for clarity; varies by jurisdiction


EU AI Act

Approach: Risk-based cross-sectoral regulation

Key Features:

  • High-risk AI systems subject to specific requirements
  • Legal advice applications potentially classified as high-risk
  • Mandatory compliance begins August 2026

Assessment: Different approach from UK sector-specific guidance; implications for legal AI still clarifying

Rating: Too early to assess for legal sector specifically


What's Working: Firms Ahead of Guidance

Success Story: The Proactive Policy

Chambers type: London commercial set

Challenge: Associates using AI without systematic oversight

Approach: Developed AI use policy 6 months before November guidance

Policy elements:

  1. Mandatory verification checklist for AI-assisted work
  2. Prohibited categories (client-identifiable data in general tools)
  3. Approved tool list with specific permitted uses
  4. Training requirement before AI tool access

Outcome: When guidance issued, chambers already compliant. No policy changes needed.

Key insight: "We saw where this was heading. Getting ahead of the guidance was easier than reacting to it."


Success Story: The Training Investment

Chambers type: Regional criminal set

Challenge: Mixed AI literacy across experience levels

Approach: Structured training program developed with IT Panel input

Training structure:

  • 2-hour introduction to LLM fundamentals
  • Practical session on hallucination identification
  • Role-play: explaining AI use to clients and courts
  • Written assessment on verification procedures

Result: 100% completion across 35 barristers. Consistent understanding of limitations.

Key insight: "The juniors knew how to use ChatGPT. The seniors knew professional conduct. We needed everyone to know both."


Hard Cases: Where Guidance Doesn't Help

Hard Case #1: The Speed-Quality Tradeoff

Scenario: Barrister receives instructions 48 hours before hearing. Uses AI to accelerate research.

Challenge: Full verification of all AI-assisted research takes 8 hours. Available time: 12 hours total.

The tension: Guidance requires verification. Reality requires completion.

How it was handled: Barrister verified the 5 most critical propositions in detail; flagged the remainder as "initial research requiring confirmation"; disclosed to instructing solicitor.

Lesson: When comprehensive verification isn't possible, prioritize the propositions that matter most. Transparency about what remains unverified is essential.


Hard Case #2: The Client Who Wants AI

Scenario: Corporate client asks barrister to use AI to reduce costs.

Challenge: Client has heard AI is faster and cheaper. Expects barrister to use it.

The tension: Using AI may introduce risks. Not using AI may lose the client.

How it was handled: Barrister explained AI capabilities and limitations; proposed AI-assisted approach with clear verification protocols; documented client's informed consent to methodology.

Lesson: Client demand for AI doesn't eliminate professional obligations. Manage expectations upfront.


Hard Case #3: The Opposing Counsel Question

Scenario: Barrister suspects opposing counsel's submissions contain AI hallucinations.

Challenge: How to raise without appearing unprofessional? What if the suspicion is wrong?

How it was handled: Verified the suspect citations independently. Two were fabricated. Filed appropriate motion noting the citation errors (without speculating about cause).

Lesson: Focus on the error, not the suspected tool. Courts don't need you to prove AI caused the mistake.
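Independent verification starts with whether a citation is even well-formed. A minimal sketch of a format check for England and Wales neutral citations (the court codes listed are illustrative, not exhaustive; a syntactically valid citation can still be fabricated, so existence must always be confirmed in BAILII or a subscription database):

```python
import re

# Neutral citation shape, e.g. "[2025] EWHC 1234 (KB)" or "[2024] UKSC 12".
# Court abbreviations below are a sample, not the full official list.
NEUTRAL_CITATION = re.compile(
    r"\[(?P<year>\d{4})\]\s+"
    r"(?P<court>UKSC|UKPC|EWCA|EWHC|UKUT|UKFTT)\s+"
    r"(?P<number>\d+)"
    r"(?:\s+\((?P<division>[A-Za-z]+)\))?"
)

def looks_like_neutral_citation(text: str) -> bool:
    """True if the string matches the neutral citation format.

    A True result does NOT mean the case exists -- it only filters out
    strings that cannot be neutral citations at all.
    """
    return NEUTRAL_CITATION.fullmatch(text.strip()) is not None

print(looks_like_neutral_citation("[2025] EWHC 1234 (KB)"))  # True
print(looks_like_neutral_citation("Smith v Jones, 2025"))    # False
```

A format check like this is only a first-pass filter for obviously malformed output; the verification obligation in the guidance still requires looking each citation up.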


Reliability Corner

UK AI Legal Guidance Timeline

  • January 2024 — Bar Council publishes original AI guidance
  • May 2025 — SRA authorizes Garfield.Law (first AI-only firm)
  • November 2025 — Bar Council updates AI guidance
  • Expected 2026 — Further guidance as case law develops

The Bar Council IT Panel Perspective

The guidance was developed by the Bar Council's IT Panel in consultation with Ethics and Regulation panels. It's presented as reflecting "current professional expectations" rather than formal BSB guidance—a distinction that matters for enforcement but perhaps less for practical compliance.

This Month's Perspective

The Bar Council's guidance update isn't revolutionary. But it documents expectations in a way that creates professional accountability. Barristers who ignore it can't later claim they didn't know.

The guidance recognizes AI adoption is inevitable while insisting that professional responsibility is non-negotiable. That's the right balance.


Workflow of the Month: UK Bar Council AI Compliance Checklist

Use this to ensure compliance with November 2025 guidance before any AI-assisted work product.

UK BAR COUNCIL AI COMPLIANCE CHECKLIST
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

MATTER: _________________________________
AI TOOL(S) USED: ________________________
DATE: __________________________________
BARRISTER: _____________________________

PRE-USE CHECKLIST
━━━━━━━━━━━━━━━━━
□ Have I reviewed this tool's terms of service?
  Last reviewed: ______________
□ Have I understood how this tool handles inputs?
  □ Data used for training: YES / NO / UNCLEAR
  □ Data retained: YES / NO / UNCLEAR
  □ Third-party sharing: YES / NO / UNCLEAR
□ Are protective settings configured appropriately?
  □ Chat history disabled (if applicable)
  □ Data sharing opt-out (if available)
  □ Enterprise/privacy mode (if available)

CONFIDENTIALITY CHECK (CORE DUTY 6)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
□ Does input contain privileged information?
  □ NO - proceed
  □ YES - do not input OR use approved privacy tool
□ Does input contain client-identifiable information?
  □ NO - proceed
  □ YES - anonymize before input
□ Is this tool approved for this matter type?
  □ Approved tool list consulted
  □ Matter-specific restrictions considered

VERIFICATION CHECKLIST
━━━━━━━━━━━━━━━━━━━━━
For ALL AI-generated content:

CASE CITATIONS
□ Each case verified as existing?
  Method: Westlaw / BAILII / Other: _________
□ Citation format verified accurate?
□ Year and court designation correct?
□ Neutral citation checked where applicable?

QUOTATIONS
□ Each quotation verified verbatim?
□ Paragraph references checked?
□ Context of quotation confirmed?

LEGAL PROPOSITIONS
□ Each proposition supported by cited authority?
□ Cases actually stand for stated principle?
□ No material mischaracterization of holdings?

STATUTORY REFERENCES
□ Legislation cited is current?
□ Section/subsection numbers accurate?
□ Any amendments checked?

QUALITY CONTROL
━━━━━━━━━━━━━━
□ Would I stake my professional reputation on this?
□ Have I applied the same scrutiny I would to
  any junior's research?
□ Is this work I would be comfortable explaining
  to the court if asked?

DOCUMENTATION
━━━━━━━━━━━━━
□ AI use documented in file note
□ Verification steps recorded
□ Any unverified content flagged

DISCLOSURE CONSIDERATION
━━━━━━━━━━━━━━━━━━━━━━
□ Do local court rules require AI disclosure?
  Checked: YES / NO
  Disclosure required: YES / NO
  If yes, disclosure included: □

CERTIFICATION
━━━━━━━━━━━━━
I have complied with Bar Council guidance on
generative AI. All AI-assisted content has been
verified. I understand that ultimate responsibility
for this work remains with me.

Signature: _______________ Date: __________

NOTES:
_________________________________________
_________________________________________
_________________________________________

Time investment: 15-45 minutes, depending on work product complexity.

Why it matters: Documented compliance protects you professionally.
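Chambers that track compliance electronically can model the paper checklist above as a structured file-note record. A hypothetical sketch (the field names are illustrative choices, not terms from the guidance):

```python
from dataclasses import dataclass, field

@dataclass
class AIComplianceRecord:
    """File-note record mirroring the compliance checklist (illustrative)."""
    matter: str
    barrister: str
    tools_used: list = field(default_factory=list)
    terms_reviewed: bool = False            # pre-use: T&Cs and data handling
    confidentiality_checked: bool = False   # Core Duty 6 check
    citations_verified: bool = False        # cases exist and are accurate
    quotations_verified: bool = False       # quotes verbatim, refs checked
    disclosure_checked: bool = False        # local court rules consulted

    def ready_to_certify(self) -> bool:
        """All mandatory checks complete before signing the certification."""
        return all([
            self.terms_reviewed,
            self.confidentiality_checked,
            self.citations_verified,
            self.quotations_verified,
            self.disclosure_checked,
        ])

record = AIComplianceRecord(matter="(matter name)", barrister="(name)")
record.terms_reviewed = True
print(record.ready_to_certify())  # False: other checks still outstanding
```

Defaulting every check to False means a record can never certify by omission — each step has to be affirmatively recorded, which is the same conservative logic the paper checklist enforces.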


Quick Hits

Broader Context:

  • EU AI Act mandatory compliance begins August 2026
  • UK maintains principles-based approach through existing regulators
  • Legal AI specifically addressed in both frameworks

Coming Next Issue:

  • 2025 Legal AI Year in Review: What Worked, What Didn't

Ask the Community

The updated guidance raises questions we're researching:

  1. For barristers: How has the guidance changed your AI use practices?
  2. For chambers administrators: Have you implemented chambers-wide AI policies?
  3. For instructing solicitors: How are you communicating AI expectations to counsel?
  4. Would you share compliance checklist templates we could compare?

Reply to share. Anonymized contributions welcome.


TwinLadder Weekly | Issue #19 | November 2025

Helping lawyers build AI capability through honest education.

