TwinLadder Weekly
Issue #11 | July 2025
ABA Formal Opinion 512: Your Ethical Obligations with AI
The American Bar Association's first AI ethics guidance is here. Here's what it requires—and what it means for your practice.
Last issue, we examined document automation choices between rules-based and AI-powered approaches. This issue, we analyze the ethics framework that governs all AI use in legal practice: ABA Formal Opinion 512.
The Landmark Guidance
On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, titled "Generative Artificial Intelligence Tools." This is the ABA's first comprehensive guidance on lawyers' ethical obligations when using generative AI.
The opinion isn't binding law—but it offers authoritative interpretation of the Model Rules of Professional Conduct. State bars adopting similar rules will likely follow this framework.
The Six Pillars
Formal Opinion 512 addresses six primary ethical obligations:
| Rule | Obligation | AI Application |
|---|---|---|
| 1.1 | Competence | Understand AI capabilities and limitations |
| 1.6 | Confidentiality | Protect client data in AI systems |
| 1.4 | Communication | Inform clients about AI use |
| 3.1/3.3 | Candor | Verify AI outputs before submission |
| 5.1/5.3 | Supervision | Oversee AI like you would staff |
| 1.5 | Fees | Bill reasonably for AI-assisted work |
Let's examine each in detail.
1. Competence (Rule 1.1)
The Requirement: A lawyer's duty of competence extends to the use of generative AI. Attorneys must understand both the legal and technical aspects of AI tools they employ.
What This Means:
You don't need a computer science degree. But you do need to understand:
- What the AI tool can and cannot do
- Where hallucination risks exist
- How to verify AI-generated outputs
- When AI assistance is appropriate vs. inappropriate
Practical Application:
Before using any AI legal tool, invest time understanding:
- What training data informed the model?
- Does it generate case citations? (Highest hallucination risk)
- What verification does the vendor perform?
- What are known limitations?
The Competence Evolution: The ABA makes clear that the duty of competence now includes maintaining technological competence as technologies like generative AI evolve. This isn't optional; it's part of your ongoing professional obligation.
2. Confidentiality (Rule 1.6)
The Requirement: Client information entered into AI systems must be protected with the same care as any client data.
What This Means:
Before you input client facts into ChatGPT, Harvey, or any AI tool, ask:
- Does the vendor train on your inputs?
- Where is data stored?
- Who has access?
- What happens to data after your session?
Practical Application:
Before entering client information:
- Review the vendor's data handling policies
- Understand whether inputs become training data
- Verify enterprise vs. consumer data handling
- Consider whether the information is particularly sensitive
The Enterprise Distinction: Most enterprise legal AI tools (Harvey, CoCounsel, Lexis+ AI) offer data protections that consumer tools (free ChatGPT) do not. Know which category your tool falls into.
3. Communication (Rule 1.4)
The Requirement: Lawyers have a duty to communicate with clients about AI use.
What This Means:
Clients should know when AI contributes to their matter. The depth of disclosure depends on:
- The significance of AI's role
- Client sophistication
- Client preferences
Practical Application:
Consider developing standard language for:
- Engagement letters (AI use policy)
- Matter updates (AI assistance disclosure)
- Work product delivery (noting AI involvement where material)
The Judgment Call: Not every AI-assisted spell check requires disclosure. Material contributions to legal analysis or strategy likely do. Use professional judgment.
4. Candor to Tribunal (Rules 3.1/3.3)
The Requirement: AI-generated content submitted to courts must meet the same accuracy standards as any other submission.
What This Means:
The Mata v. Avianca situation, in which fabricated cases were submitted to a federal court, violated Rule 3.3 regardless of whether a human or an AI generated the fiction.
Practical Application:
Every citation must be verified. Every case must exist. Every quote must be accurate. AI generation doesn't excuse submission errors.
The Verification Burden: If AI accelerates research, some of that time savings should go to verification. The net productivity gain remains positive—but verification isn't optional.
5. Supervision (Rules 5.1/5.3)
The Requirement: AI tools require proper supervision, similar to the oversight necessary for paralegals and other non-lawyer legal professionals.
What This Means:
You can't blame AI for errors any more than you can blame a paralegal. The supervising attorney remains responsible for all work product.
Practical Application:
Establish firm-wide AI policies addressing:
- Which tools are approved
- What tasks can use AI assistance
- Required review procedures
- Documentation of AI involvement
- Training requirements for users
The Firm Responsibility: Managerial attorneys should establish clear policies for AI use. Firms should provide training on ethical and practical aspects. All AI-generated content requires careful review before use.
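One way to make a firm-wide policy like the one described above auditable is to capture it as structured data rather than a memo. The sketch below is a hypothetical example; the tool names, field names, and values are illustrative placeholders, not recommendations.

```python
# Hypothetical sketch of a firm AI-use policy captured as structured data.
# All names and values below are illustrative examples, not guidance.

AI_POLICY = {
    "approved_tools": ["Harvey", "CoCounsel", "Lexis+ AI"],   # example list
    "prohibited_for_client_data": ["consumer ChatGPT"],
    "permitted_tasks": ["research", "first drafts", "summarization"],
    "review_required": True,           # attorney review before any use
    "reviewer_role": "supervising attorney",
    "training_hours_required": 2,      # illustrative minimum
    "log_ai_involvement": True,        # document AI use per matter
}

def tool_approved(tool_name: str) -> bool:
    """Check a tool against the approved list before use on a matter."""
    return tool_name in AI_POLICY["approved_tools"]

assert tool_approved("Harvey")
assert not tool_approved("consumer ChatGPT")
```

Encoding the policy this way lets audit procedures (the checklist later in this issue) query it directly instead of re-reading a document.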
6. Fees (Rule 1.5)
The Requirement: Fees must be reasonable, which has specific implications for AI-assisted work.
What This Means:
Lawyers may not charge clients for time spent learning a technology to be used for client matters generally, unless a client specifically requests the use of a particular AI tool.
Practical Application:
Consider:
- Learning time: Generally not billable (firm overhead)
- Tool costs: May be passed through if disclosed and consented
- Efficiency gains: Must be reflected in billing (not charging 10 hours for 1 hour of AI-assisted work)
- Per-use charges: Permitted if explained and consented in advance
The Billing Tension: If AI reduces research time from 10 hours to 2, you can't bill 10. But you can discuss value-based billing arrangements that fairly compensate expertise while reflecting efficiency gains.
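The arithmetic behind the billing tension is worth making explicit. The sketch below uses hypothetical numbers (a $400/hr rate and a $2,000 fixed fee are illustrative assumptions, not benchmarks) to compare the three approaches.

```python
# Illustrative sketch of the billing tension described above.
# All figures are hypothetical; actual fee arrangements require
# client disclosure and consent under Rule 1.5.

HOURLY_RATE = 400          # hypothetical attorney rate ($/hr)
TRADITIONAL_HOURS = 10     # pre-AI research time
AI_ASSISTED_HOURS = 2      # actual time with AI assistance

traditional_bill = HOURLY_RATE * TRADITIONAL_HOURS   # $4,000 (unreasonable)
actual_time_bill = HOURLY_RATE * AI_ASSISTED_HOURS   # $800 (undervalues expertise)

# A value-based fixed fee might land between the two, reflecting
# expertise while passing some efficiency gains to the client.
fixed_fee = 2000
effective_rate = fixed_fee / AI_ASSISTED_HOURS       # $1,000/hr effective

print(f"Traditional: ${traditional_bill}, Actual-time: ${actual_time_bill}, "
      f"Fixed fee: ${fixed_fee} (effective ${effective_rate:.0f}/hr)")
```

The point of the comparison: a fixed fee eliminates the time fiction while still sharing efficiency gains, which is why Opinion 512's fee analysis pushes in that direction.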
Tool Review: AI Ethics Compliance Approaches
How different platforms address Formal Opinion 512 requirements
Harvey
Confidentiality Approach:
- Enterprise data isolation
- No training on client data
- AWS/Azure/Google Cloud security certifications
- SOC 2 Type II compliance
Competence Support:
- Onboarding and training programs
- Documentation of capabilities and limitations
- Customer success support
Rating for Ethics Compliance Support: 4.5/5
CoCounsel (Thomson Reuters)
Confidentiality Approach:
- Westlaw data environment
- Thomson Reuters enterprise security
- Clear data handling documentation
Competence Support:
- Training integrated with Westlaw workflows
- Clear scope documentation
- Research-focused (lower hallucination risk for citations)
Rating for Ethics Compliance Support: 4.5/5
ChatGPT (OpenAI Consumer)
Confidentiality Approach:
- Default: Data may be used for training
- Opt-out available but requires action
- Not designed for client confidential information
Competence Support:
- General tool, not legal-specific
- No verification of legal accuracy
- User bears full verification burden
Rating for Ethics Compliance Support: 2/5 (Enterprise version rates higher)
Lexis+ AI
Confidentiality Approach:
- LexisNexis enterprise environment
- Established legal data handling practices
- Citation verification integrated
Competence Support:
- Legal-specific design
- Hallucination mitigation through citation links
- Training resources available
Rating for Ethics Compliance Support: 4/5
What's Working: Ethics Implementation Success Stories
Success Story #1: The Engagement Letter Update
Firm type: Mid-size regional firm
Challenge: No standard language for AI disclosure

Approach: Updated engagement letter template with AI use policy:
"Our firm may utilize artificial intelligence tools to assist with legal research, document drafting, and other tasks. All AI-assisted work is reviewed by licensed attorneys. We maintain appropriate confidentiality protections for any client information used with these tools. If you have questions about our AI practices, please contact your matter attorney."
Result: Clear disclosure, client consent at engagement, reduced ad hoc questions.
Key insight: Proactive disclosure in engagement letters addresses communication requirements systematically.
Success Story #2: The AI Review Protocol
Firm type: Large litigation practice
Challenge: Inconsistent AI output verification
Implementation:
- Every AI-generated citation requires Westlaw/Lexis verification
- AI drafts flagged in document metadata
- Review checklist for AI-assisted briefs
- Random audit of AI research by senior associates
Result: Zero citation errors in 6 months. Clear accountability trail. Partners confident in AI-assisted work product.
Key insight: Systematic verification protocols address competence and candor requirements while maintaining efficiency gains.
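A verification protocol like the one above becomes enforceable when AI-generated citations are tracked explicitly. The sketch below is a minimal illustration of such a log; the fields, case names, and workflow are hypothetical, not a vendor API, and a real firm would verify against Westlaw or Lexis directly.

```python
# Minimal sketch of a citation-verification log for AI-assisted briefs.
# Field names and the sample citations are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Citation:
    cite: str                 # reporter citation as it appears in the brief
    ai_generated: bool        # was this citation produced by an AI tool?
    verified: bool = False    # confirmed against Westlaw/Lexis?
    verifier: str = ""        # attorney or associate who checked it

def unverified_ai_citations(citations):
    """Return AI-generated citations still awaiting human verification."""
    return [c for c in citations if c.ai_generated and not c.verified]

brief = [
    Citation("Case A, 123 F.3d 456", ai_generated=True, verified=True, verifier="JS"),
    Citation("Case B, 789 F.2d 101", ai_generated=True),   # not yet checked
    Citation("Case C, 555 U.S. 222", ai_generated=False),  # manually researched
]

pending = unverified_ai_citations(brief)
assert len(pending) == 1   # Case B must be verified before filing
```

A filing gate that refuses to proceed while `pending` is non-empty turns the candor requirement into a mechanical check rather than a memory exercise.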
Hard Cases: Ethics Challenges in Practice
Hard Case #1: The Client Who Doesn't Want AI
Scenario: Sophisticated client explicitly prohibits AI use on their matters. Your firm uses AI extensively.
Problem: Honoring the prohibition creates inefficiency and competitive disadvantage. But the client's preference should be respected.
Approach: Discuss the scope of the prohibition. Does it cover:
- Research assistance?
- Document review?
- Drafting support?
- All of the above?
Lesson: Client communication includes learning their AI preferences. Don't assume consent.
Hard Case #2: The Billing Conundrum
Scenario: AI reduces research time from 8 hours to 90 minutes. Traditional billing would capture 8 hours of value. Actual time is 90 minutes.
Problem: Billing 8 hours is arguably unreasonable. Billing 90 minutes undervalues the work product.
Approaches:
- Bill actual time (90 minutes) - fair but undercompensates expertise
- Value-based fee arrangement - captures value without time fiction
- Fixed fee for research - removes time-based tension entirely
Lesson: AI accelerates the shift toward value-based billing. Time-based billing creates unsustainable tensions.
Hard Case #3: The Verification Burden
Scenario: AI research identifies 47 relevant cases. Verifying each requires 5 minutes. Total verification time: nearly 4 hours.
Problem: The verification burden erodes efficiency gains. But unverified AI output creates ethics risk.
Reality: Some verification burden is irreducible. But consider:
- Risk-stratifying verification (case law more rigorous than procedural info)
- Using verified legal research tools (CoCounsel, Lexis+ AI with citations)
- Building verification into workflow rather than treating it as overhead
Lesson: Choose tools that reduce verification burden through design, not just generation speed.
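The risk-stratification idea above is easy to quantify. In the sketch below, the minutes-per-item figures are hypothetical assumptions used only to show how stratifying changes the total; they are not guidance on how long verification should take.

```python
# Illustrative sketch of risk-stratified verification time.
# Minutes-per-category figures are hypothetical assumptions.

VERIFY_MINUTES = {
    "case_law": 5,      # full check against Westlaw/Lexis
    "statute": 3,       # confirm current text and effective date
    "procedural": 1,    # quick sanity check of rules/deadlines
}

def verification_time(items):
    """Total verification minutes for a list of (category, count) pairs."""
    return sum(VERIFY_MINUTES[cat] * n for cat, n in items)

# Flat approach from the scenario: 47 case-law checks at 5 minutes each.
flat = verification_time([("case_law", 47)])     # 235 min, nearly 4 hours

# Stratified: suppose only 20 items are case law and the rest are lower risk.
stratified = verification_time(
    [("case_law", 20), ("statute", 12), ("procedural", 15)]
)                                                # 100 + 36 + 15 = 151 min

print(flat, stratified)
```

Even with invented numbers, the structure of the saving is real: triaging by hallucination risk concentrates attorney time where an error would matter most.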
Reliability Corner
ABA Formal Opinion 512 Summary Table
| Model Rule | Requirement | Key Takeaway |
|---|---|---|
| 1.1 | Competence | Understand AI before using it |
| 1.6 | Confidentiality | Protect client data in AI systems |
| 1.4 | Communication | Disclose material AI use to clients |
| 3.1/3.3 | Candor | Verify everything before tribunal |
| 5.1/5.3 | Supervision | Treat AI like supervised staff |
| 1.5 | Fees | Bill reasonably, reflect efficiency |
State-Level AI Ethics Guidance
| State | Status | Notable Provisions |
|---|---|---|
| California | Guidelines issued | Detailed AI disclosure requirements |
| Florida | Guidelines issued | Emphasis on verification |
| New York | Guidelines issued | Commercial division AI rules |
| Texas | Task force active | Proposed guidance pending |
| Illinois | Task force active | Focus on confidentiality |
Multiple states preceded the ABA with their own guidance. Check your jurisdiction for specific requirements.
This Month's Perspective
Formal Opinion 512 concluded: "With the ever-evolving use of technology by lawyers and courts, lawyers must be vigilant in complying with the Rules of Professional Conduct to ensure that lawyers are adhering to their ethical responsibilities and that clients are protected."
The guidance isn't restrictive—it's clarifying. AI use is permitted. The ethical rules you already follow still apply.
Workflow of the Month: ABA AI Ethics Compliance Checklist
Use this checklist to assess your AI practice compliance:
ABA FORMAL OPINION 512 COMPLIANCE CHECKLIST
============================================
FIRM: _____________________________
DATE: _____________________________
REVIEWER: _________________________
COMPETENCE (RULE 1.1)
---------------------
[ ] AI tools inventory documented
List tools in use: _______________
[ ] Capabilities and limitations understood
Documentation reviewed: YES / NO
[ ] Training completed for AI users
Training date: ___________________
[ ] Hallucination risks identified
High-risk tasks: _________________
[ ] Verification procedures established
Procedure documented: YES / NO
CONFIDENTIALITY (RULE 1.6)
--------------------------
[ ] Data handling policies reviewed for each tool
Tool 1: _____________ Reviewed: ___
Tool 2: _____________ Reviewed: ___
Tool 3: _____________ Reviewed: ___
[ ] Training data practices understood
Does vendor train on inputs? YES / NO / VARIES
[ ] Enterprise vs. consumer distinction clear
Consumer tools prohibited for client data? YES / NO
[ ] Sensitive matter protocols exist
Additional protections documented: YES / NO
COMMUNICATION (RULE 1.4)
------------------------
[ ] Engagement letter includes AI disclosure
Current language approved: YES / NO / PENDING
[ ] Client preferences documented
System for tracking preferences: YES / NO
[ ] Material AI use disclosed to clients
Standard practice established: YES / NO
[ ] Client questions addressed proactively
FAQ or talking points exist: YES / NO
CANDOR TO TRIBUNAL (RULES 3.1/3.3)
----------------------------------
[ ] Citation verification mandatory
Protocol documented: YES / NO
[ ] AI drafts identified in review workflow
Marking system in place: YES / NO
[ ] Quote accuracy checked
Verification step included: YES / NO
[ ] Court AI disclosure rules known
Jurisdiction requirements reviewed: YES / NO
SUPERVISION (RULES 5.1/5.3)
---------------------------
[ ] Firm AI policy exists
Policy location: __________________
[ ] Approved tools list maintained
Current list date: ________________
[ ] Review requirements documented
Who reviews AI output: ____________
[ ] Training requirements defined
Minimum training hours: ___________
[ ] Audit procedures established
Audit frequency: __________________
FEES (RULE 1.5)
---------------
[ ] Billing practices address AI efficiency
Approach: _________________________
[ ] Learning time not billed to clients
Policy documented: YES / NO
[ ] Tool costs disclosed if passed through
Disclosure method: ________________
[ ] Value-based alternatives considered
Options documented: YES / NO
OVERALL COMPLIANCE ASSESSMENT
-----------------------------
[ ] Competence: COMPLIANT / GAPS / NON-COMPLIANT
[ ] Confidentiality: COMPLIANT / GAPS / NON-COMPLIANT
[ ] Communication: COMPLIANT / GAPS / NON-COMPLIANT
[ ] Candor: COMPLIANT / GAPS / NON-COMPLIANT
[ ] Supervision: COMPLIANT / GAPS / NON-COMPLIANT
[ ] Fees: COMPLIANT / GAPS / NON-COMPLIANT
GAPS IDENTIFIED:
1. _________________________________
2. _________________________________
3. _________________________________
REMEDIATION PLAN:
_________________________________
_________________________________
_________________________________
TARGET COMPLETION DATE: ____________
NEXT REVIEW DATE: _________________
APPROVED BY: _____________ DATE: _______
Time investment: 60-90 minutes for initial assessment
Frequency: Quarterly review recommended
Why it matters: Documented compliance demonstrates reasonable efforts to meet ethical obligations.
Quick Hits
ABA Guidance:
- Formal Opinion 512 issued July 29, 2024
- First comprehensive ABA guidance on generative AI
- Six Model Rules addressed: 1.1, 1.4, 1.5, 1.6, 3.1/3.3, 5.1/5.3
State Updates:
- Multiple states issued guidance before or after ABA
- California, Florida, New York leading with detailed requirements
- Check your jurisdiction for specific obligations
Practice Implications:
- AI use permitted with appropriate safeguards
- Verification, disclosure, and supervision are non-negotiable
- Fee structures should reflect AI efficiency gains
Coming Next Issue:
- Mid-Year Legal AI State of the Market Review
Ask the Community
Formal Opinion 512 creates practical implementation questions:
- For firm administrators: How are you documenting AI tool approval and training?
- For billing professionals: How are you addressing the efficiency/billing tension?
- For litigators: What verification protocols are you using for AI-assisted briefs?
- Would you share your engagement letter AI disclosure language with the community?
Reply to share. Anonymized contributions welcome.
TwinLadder Weekly | Issue #11 | July 2025
Helping lawyers build AI capability through honest education.
Sources
- ABA: First Ethics Guidance on AI Tools
- UNC Law Library: ABA Formal Opinion 512 Analysis
- NCBE Bar Examiner: Generative AI Tools
- ABA Business Law Today: Ethics Opinion Framework
- FKKS Technology Law: Comprehensive Formal Ethics Opinion Analysis
- Florida Bar News: ABA AI Ethics Guidance
- Steno: Legal AI Rules by State
- WashU Law: Legal Ethics and AI
- Oregon State Bar: Formal Opinion 2025-205
- NYC Bar: Ethics Guidance on AI Analysis
