TwinLadder Weekly
Issue #11 | July 2025
Editor's Note
On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility did something I had been waiting for: it issued Formal Opinion 512, the first comprehensive guidance on lawyers' ethical obligations when using generative AI.
I should say upfront: this is American guidance, and I write from a European perspective. The ABA's Model Rules are not binding on anyone until adopted by state bars. They do not apply in London, Frankfurt, or Riga. But they matter to all of us for two reasons. First, American regulatory frameworks tend to ripple outward — the EU's own AI Act conversations reference US ethical standards. Second, and more interesting to me, Opinion 512 forces a question that transcends jurisdiction: does compliance with ethical rules actually develop professional judgment, or does it just create a new category of checkbox behaviour?
That question has been on my mind since a partner at a large firm in Berlin told me, with evident satisfaction, that his firm was "fully 512-compliant." When I asked whether his associates could independently evaluate AI output quality, he looked at me as though I had changed the subject. I had not.
Liga Paulina, who tracks regulatory convergence between the US and EU approaches, offered a useful frame: "Opinion 512 and the EU AI Act are solving the same problem from opposite directions. The Americans start with ethics rules and ask how they apply to AI. The Europeans start with AI and ask what rules it needs. The destination — competent, transparent, accountable use of AI by professionals — is the same. The journey tells you something about each legal culture."
ABA Formal Opinion 512: The Six Pillars and What They Actually Require
A Transatlantic Framework
Opinion 512 addresses six Model Rules as they apply to generative AI use. The opinion is not binding law, but it offers authoritative interpretation that state bars adopting similar rules will likely follow. Let me walk through them with a practitioner's eye — and, for each, note where the European framework converges or diverges.
Competence (Rule 1.1) is where Opinion 512 is strongest. The Committee asserts that technological competence extends to AI tools: lawyers must understand what AI can and cannot do, where hallucination risks exist, and how to verify outputs. This is not advisory — it is framed as an ongoing professional obligation. You do not need a computer science degree, but you must understand the difference between retrieval-augmented generation and bare model output, know when citation risk is highest, and recognise the limitations of your specific tools.
European parallel: Article 4 of the EU AI Act goes further. It does not merely recommend competence — it mandates documented AI literacy for all staff deploying or operating AI systems. The obligation is not tied to ethics rules. It is statutory, with enforcement mechanisms.
Confidentiality (Rule 1.6) requires protecting client information entered into AI systems with the same care as any client data. This means understanding whether your vendor trains on inputs, where data is stored, and the difference between enterprise and consumer data handling. Most enterprise legal AI tools (Harvey, CoCounsel, Lexis+ AI) offer protections that consumer tools like free ChatGPT do not.
European parallel: GDPR imposes far more specific requirements — data processing agreements, cross-border transfer restrictions, data minimisation principles. A European firm using any AI tool must satisfy both professional confidentiality rules and GDPR. The intersection is complex and under-analysed.
Communication (Rule 1.4) requires informing clients about material AI use. The judgment call is in the word "material" — not every spell-check needs disclosure, but substantive contributions to legal analysis likely do. I would argue that the safest approach is standard engagement letter language covering AI use generally, with specific disclosure when AI significantly shapes work product.
Candor to Tribunal (Rules 3.1/3.3) is the rule that Mata v. Avianca made famous. Every citation must be verified. Every case must exist. Every quote must be accurate. AI generation does not excuse submission errors. Full stop.
Supervision (Rules 5.1/5.3) requires that AI tools receive oversight comparable to that given to paralegals and other non-lawyer staff. You cannot blame AI for errors any more than you can blame a paralegal. Firms must establish policies on approved tools, required review procedures, and training.
European parallel: The EU AI Act's human oversight requirements under Article 14 are structurally similar but broader. They apply not just to legal professionals but to any deployer of AI systems in professional contexts.
Fees (Rule 1.5) is where practice meets economics. The Committee's position: lawyers may not charge clients for time spent learning AI tools for general use. If AI reduces research time from ten hours to two, billing ten hours is unreasonable. Value-based arrangements that fairly compensate expertise while reflecting efficiency gains are the suggested path forward.
The Convergence Table
| Obligation | ABA Opinion 512 | EU AI Act + GDPR |
|---|---|---|
| Competence / Literacy | Rule 1.1: ongoing professional obligation | Article 4: statutory mandate, documented training required |
| Confidentiality | Rule 1.6: same care as any client data | GDPR: DPAs, cross-border transfer rules, data minimisation |
| Transparency | Rule 1.4: material AI use disclosure | Articles 13-14: transparency and human oversight |
| Verification | Rules 3.1/3.3: citation verification | Implied by deployer obligations under Article 26 |
| Supervision | Rules 5.1/5.3: oversight like paralegals | Article 14: human oversight for all AI deployers |
| Billing | Rule 1.5: no inflated hours | Not directly regulated; competition law principles apply |
The state-level landscape is developing rapidly. California, Florida, and New York have issued detailed guidance, and several other states have active task forces. The direction is consistent: AI use is permitted with appropriate safeguards. Check your jurisdiction, because specific requirements vary.
For those of us practising in Europe, the framework is instructive even where it does not directly apply. The EU AI Act, the SRA's position in England and Wales, and emerging guidance from continental bar associations all converge on similar principles: competence, transparency, human oversight. The vocabulary differs; the substance does not.
The Competence Question
Here is what concerns me about the compliance approach to AI ethics. Consider a firm that implements every element of Opinion 512 diligently. They have inventoried their AI tools. They have documented capabilities and limitations. Associates complete mandatory training. Engagement letters include disclosure language. Citation verification protocols are in place. Billing practices reflect efficiency gains.
That firm is compliant. But is it competent?
Compliance is procedural. It asks: have you followed the rules? Competence is substantive. It asks: can you independently assess whether AI output is good enough for this specific matter, this specific client, this specific jurisdiction?
Imagine a third-year associate in Stockholm reviewing an AI-generated research memo on Swedish consumer protection law. The citations all verify — the cases exist, the quotes are accurate. The associate checks the boxes on the verification protocol and sends it to the partner. But the memo misses a relevant line of authority from a recent CJEU decision that would have changed the analysis under Swedish implementation of the directive. The associate did not know to look for it because she relied on the AI's completeness rather than applying her own research judgment.
No protocol catches this. Only professional judgment does. And professional judgment develops through practice, not compliance. The firms that will genuinely meet the competence standard are those that use AI as a starting point for analysis, not an endpoint — where associates are expected to improve on AI output, not merely verify it.
The distinction between "verified" and "competent" is where Opinion 512's real challenge lies, and it is a challenge no checklist can resolve.
What To Do
- Update your engagement letters now. If your engagement letters do not address AI use, add standard language this week. Something like: "Our firm may use AI-assisted tools for research, drafting, and review. All AI-assisted work is reviewed by qualified lawyers. We maintain appropriate confidentiality protections." Proactive disclosure prevents disputes.
- Distinguish enterprise from consumer AI. Audit which lawyers are using which tools. Consumer ChatGPT for client matters is an ethics violation waiting to happen — and, for European firms, a GDPR violation as well. Ensure your firm has a clear approved-tools list and that consumer-grade tools are excluded for client-related work.
- Build verification into workflow, not onto it. Verification that feels like extra work gets skipped under time pressure. Integrate citation checking into your research process — use tools with built-in source links (Lexis+ AI, CoCounsel) rather than adding a separate verification step.
- Require associates to identify what AI missed. For every AI-assisted research memo, ask associates to identify at least one additional authority or argument the AI did not surface. This builds the habit of treating AI as a starting point, not a final answer. For European practices, require cross-referencing against CJEU jurisprudence and national implementing legislation — areas where AI tools remain weakest.
- Review your billing practices against the fee standard. If you are still billing ten hours for work AI completed in two, you are at risk. Document your approach and discuss value-based alternatives with clients before they ask. European firms should additionally consider whether their Article 4 compliance documentation can demonstrate the efficiency gains AI provides — turning a billing challenge into a competence demonstration.
Quick Reads
- ABA Formal Opinion 512 announcement — the primary source, worth reading in full rather than relying on summaries.
- State-by-state AI ethics guidance tracker — essential reference for understanding which US jurisdictions have moved beyond the ABA framework with their own requirements. European practitioners: use this to benchmark against your own bar's guidance.
- UNC Law Library analysis of Opinion 512 — the most thorough academic treatment, useful for firms drafting internal policies.
- Oregon State Bar Opinion 2025-205 — goes further than the ABA on billing, with the clearest language yet on AI time-billing obligations.
One Question
If your firm is fully compliant with every AI ethics requirement but your associates cannot independently evaluate AI output quality, are you meeting the competence standard — or just documenting your failure to meet it?
Helping European professionals build AI competence through honest education.
