ABA Formal Opinion 512: A Practical Compliance Guide
The ABA's first formal ethics guidance on AI is fifteen pages long. Here is what it requires, what it implies, and what you need to do about it.
On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility released Formal Opinion 512 — the first formal ethics guidance on generative AI in legal practice. I have used this opinion as a training foundation for months now, and I want to share how I translate its requirements into practical compliance.
The opinion itself is not particularly long or complex, so the challenge is not understanding it. The challenge is implementing it.
What Opinion 512 Is — and Is Not
The opinion is not new law. It does not create new rules. It applies existing Model Rules of Professional Conduct to the specific context of generative AI tools — ChatGPT, Claude, Lexis+ AI, Harvey, and similar technologies.
This matters because it means the obligations are not future requirements. They exist now, under rules that have been in force for years. Opinion 512 simply makes explicit what was already implicit: professional duties apply to AI use in the same way they apply to any other aspect of legal practice.
The opinion is also not binding on its own. Model Rules must be adopted by state bars to have regulatory force. But Opinion 512 signals the ABA's interpretation, and it will influence state bar guidance, malpractice standards, and disciplinary proceedings. Ignoring it because it is "just an opinion" would be a serious miscalculation.
The Six Obligations, Practically
1. Competence (Rule 1.1)
The requirement: Understand the benefits and risks of the AI tools you use. Keep that understanding current.
What I tell training participants: Competence does not mean you can build an AI system. It means you understand three things: what the tool can do reliably, where it fails predictably, and how to tell the difference.
Practical compliance:
- Before using any AI tool professionally, learn its specific capabilities and known limitations
- Understand why AI hallucinates (it is inherent to how generative models produce text, not an occasional glitch that updates will eliminate)
- Monitor for tool updates that change capabilities or failure modes
- Document your understanding — notes from training, vendor materials reviewed, research conducted
The key phrase: Opinion 512 says uncritical reliance on AI output without appropriate verification violates the competence duty. "Uncritical reliance" is doing significant work here. It means that using AI output without checking it is not just imprudent — it is an ethical violation.
2. Confidentiality (Rule 1.6)
The requirement: Protect client information when using AI tools.
What I tell training participants: Know where your data goes. If you cannot explain what happens to information you enter into an AI tool, you should not be entering client information into it.
Practical compliance:
- Review privacy policies and terms of service for every AI tool before entering client data
- Use enterprise versions with data protection commitments for client work
- If using consumer-grade tools without contractual data protections, obtain informed client consent that describes the specific risks
- Establish which tools are approved for which information categories
The critical distinction: Consumer AI tools (free or personal accounts) typically retain inputs for model training. Enterprise tools typically do not, but verify the specific terms. The distinction determines whether entering client information constitutes an unauthorised disclosure.
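The "which tools for which information categories" requirement can be made concrete as a simple approval matrix. The sketch below is illustrative only: the tool names, category labels, and sensitivity ordering are hypothetical examples of firm policy, not anything Opinion 512 prescribes.

```python
# Hypothetical approval matrix: each tool is cleared up to one
# sensitivity level. Names and categories are illustrative.
APPROVED_USES = {
    "enterprise_research_ai": "client_confidential",   # contractual data protections in place
    "personal_chatbot_account": "public_only",         # inputs may be retained for training
}

# Categories ordered from least to most sensitive.
LEVELS = ["public_only", "internal", "client_confidential"]

def may_enter(tool: str, category: str) -> bool:
    """Permit data entry only if the tool is cleared for that category or above."""
    cleared = APPROVED_USES.get(tool)
    if cleared is None:
        return False  # unapproved tools are cleared for nothing
    return LEVELS.index(category) <= LEVELS.index(cleared)
```

Under this sketch, client-confidential material clears only the enterprise tool; a personal chatbot account, or any tool absent from the matrix, is refused.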
3. Communication (Rule 1.4)
The requirement: Disclose AI use to clients when material.
What I tell training participants: If the client would care, tell them. When in doubt, tell them.
Practical compliance:
- Add AI disclosure language to engagement letters
- Discuss AI use during initial client meetings when it will play a significant role
- Document client consent to AI use
- Be prepared to explain what tools were used and how
Important nuance: Not every AI interaction requires disclosure. A spell-check AI or a scheduling assistant probably does not require client notification. AI used for legal research, analysis, or drafting probably does.
4. Candor to Tribunal (Rules 3.1 and 3.3)
The requirement: Everything filed with a court must be accurate. Period.
What I tell training participants: This is the obligation that has produced every AI sanctions case. If you file an AI-generated citation without verifying it, and the case does not exist, you have violated your duty of candor. The fact that AI produced the error does not mitigate the violation.
Practical compliance:
- Verify every citation against primary sources — every one, every time
- Confirm that cited cases support the propositions attributed to them
- Run currency checks on all authorities
- If errors are discovered post-filing, correct them immediately
- Document your verification process
Non-negotiable standard: There is no acceptable rate of unverified AI citations in court filings. The rate is zero. Build verification into your workflow so it happens structurally, not optionally.
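The "structural, not optional" point can be sketched in code: a filing is blocked until every citation record clears every check. This is an illustrative sketch of a verification log, not a tool the opinion requires; the field names are my own.

```python
from dataclasses import dataclass

@dataclass
class CitationCheck:
    """One citation in a draft filing and its verification status."""
    citation: str
    verified_against_primary_source: bool = False
    supports_stated_proposition: bool = False
    currency_checked: bool = False
    checker: str = ""  # initials of the person who verified

    @property
    def cleared(self) -> bool:
        return (self.verified_against_primary_source
                and self.supports_stated_proposition
                and self.currency_checked
                and bool(self.checker))

def filing_ready(checks: list[CitationCheck]) -> bool:
    """A filing is ready only when every citation has cleared review."""
    return all(c.cleared for c in checks)
```

A single unverified citation makes `filing_ready` return `False`, which is the point: the zero-tolerance rule is enforced by the workflow's structure, not by anyone's memory.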
5. Supervision (Rules 5.1 and 5.3)
The requirement: Firms must have AI policies. Supervising lawyers must ensure compliance.
What I tell training participants: If you are a partner, you are responsible for every AI-assisted work product that leaves the firm under your supervision. If you do not have AI policies and training in place, you are personally exposed when something goes wrong.
Practical compliance:
- Develop written AI use policies covering approved tools, verification requirements, and confidentiality protections
- Train all attorneys and staff on the policies
- Establish review procedures for AI-assisted work product
- Audit compliance periodically
- Document everything
The Morgan & Morgan lesson: A supervising attorney was sanctioned for a filing he did not create because his name was on it. Supervisory liability does not require direct involvement. It requires adequate oversight. Without governance, adequate oversight does not exist.
6. Reasonable Fees (Rule 1.5)
The requirement: AI-related billing must be honest and transparent.
What I tell training participants: Bill for the time you actually worked. Disclose AI costs before charging them. Do not bill clients for learning how to use AI.
Practical compliance:
- Track actual time on AI-assisted tasks — do not inflate based on pre-AI benchmarks
- If passing AI tool costs to clients, disclose and obtain consent in advance
- Do not bill general AI learning time to any client
- If a specific client requests a specific tool, learning time for that tool may be billable with disclosure
- Review AI-assisted billing for reasonableness before finalising
The Compliance Checklist
I use this with every organisation I train:
Competence: Do your lawyers understand the AI tools they use — capabilities, limitations, and failure modes? Is there a process for staying current? Are verification protocols in place?
Confidentiality: Have you reviewed data handling for every AI tool? Are enterprise versions used for client data? Is informed consent obtained when needed?
Communication: Do engagement letters address AI use? Are client preferences documented?
Candor: Are all citations verified before filing? Is there a correction procedure for post-filing errors?
Supervision: Do written AI policies exist? Is everyone trained? Is compliance monitored?
Fees: Is AI-related billing honest and transparent? Are costs disclosed before charging?
If you can answer "yes" to every question, you are in compliance. If you cannot, you know where the gaps are.
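The checklist above reduces to a trivial self-audit: record a yes/no answer per area and surface the gaps. The area labels and answers below are illustrative placeholders, not a real firm's results.

```python
def compliance_gaps(answers: dict[str, bool]) -> list[str]:
    """Return every checklist area not yet answered 'yes'."""
    return [area for area, satisfied in answers.items() if not satisfied]

# Hypothetical self-audit; answers would come from the firm's own review.
audit = {
    "Competence: capabilities, limitations, failure modes understood": True,
    "Confidentiality: data handling reviewed, enterprise versions in use": True,
    "Communication: engagement letters address AI use": False,
    "Candor: all citations verified before filing": True,
    "Supervision: written policies exist and everyone is trained": False,
    "Fees: AI billing honest, transparent, disclosed in advance": True,
}
```

Here `compliance_gaps(audit)` returns the two unsatisfied areas, which is exactly the "you know where the gaps are" outcome the checklist is meant to produce.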
The Continuing Obligation
Opinion 512 explicitly states that the duty to stay abreast of AI benefits and risks "is not a static undertaking." Compliance today does not guarantee compliance tomorrow. Tools evolve. Regulations change. Risks shift.
This means your AI governance programme must include a mechanism for ongoing review and update. Annual policy reviews, regular training refreshers, and active monitoring of regulatory developments are not optional extras — they are components of the competence obligation itself.
The opinion is fifteen pages. Implementation is ongoing. Start now, and plan to keep going.

