Building an AI Governance Framework for Your Firm
Structured oversight prevents ethical violations and positions firms to capture AI's efficiency gains.
With 79% of law firms adopting AI tools but only 10% implementing formal governance, the legal profession faces a critical gap. High-profile sanctions cases, from Mata v. Avianca in 2023 to the more recent Morgan & Morgan citations incident, demonstrate that courts have little tolerance for AI-related negligence.
Building a governance framework is no longer optional. The question is how to structure it appropriately for your firm's size and risk profile.
The Regulatory Landscape
The American Bar Association issued Formal Opinion 512 in July 2024, establishing the ethical framework governing AI use across the profession. The opinion addresses six primary areas: competence (Model Rule 1.1), confidentiality (Model Rule 1.6), communication with clients (Model Rule 1.4), candor toward the tribunal (Model Rules 3.1 and 3.3), supervisory responsibilities (Model Rules 5.1 and 5.3), and reasonable fees.
At the state level, Colorado enacted the most comprehensive AI law in May 2024, effective June 2026. Texas followed with TRAIGA in June 2025, effective January 2026. California, New York, and Illinois have adopted or proposed comprehensive AI accountability laws addressing transparency, bias mitigation, and documentation requirements.
More than 1,000 AI-related bills have been introduced across nearly every state in 2024-2025. This patchwork of requirements makes centralized governance essential for firms operating across jurisdictions.
Governance Structure Options
Enterprise Model: Dedicated AI Governance Board
80% of AmLaw 100 firms have established AI governance boards. This model works for large firms where:
- AI deployment spans multiple practice areas
- Significant technology investment requires oversight
- Client requirements demand formal governance documentation
- Regulatory exposure across jurisdictions requires coordinated response
The board typically includes representatives from:
- Technology/IT leadership
- Risk management and compliance
- Ethics and professional responsibility
- Practice group leaders from high-utilization areas
- Information security
This structure enables centralized policy development, vendor assessment, training standards, and incident response. The cost is meaningful overhead that smaller firms cannot absorb.
Mid-Market Model: Distributed Responsibility
Firms with 50-200 attorneys often lack resources for dedicated governance infrastructure. A distributed model assigns governance responsibilities to existing roles:
- Managing partner or executive committee: Policy approval and strategic direction
- IT director: Tool evaluation, security assessment, vendor management
- Ethics partner: Compliance monitoring, guidance on novel issues
- Practice group leaders: Implementation oversight within their groups
This approach integrates governance into existing management structures rather than creating new bureaucracy. Success depends on clear accountability and regular coordination.
Small Firm Model: Partner-Led Governance
For firms under 50 attorneys, governance often falls to a single partner or small committee. Key elements include:
- Written AI acceptable use policy
- Defined approval process for new tools
- Mandatory training for all users
- Incident reporting mechanism
- Periodic policy review
The focus should be on preventing the most common failures: confidentiality breaches, unverified citations, and undisclosed AI use where required.
Essential Policy Components
Data Classification and Handling
Any effective AI policy must address what information can and cannot be input into AI systems. Categories typically include:
Prohibited inputs:
- Client confidential information in public AI tools
- Privileged communications
- Personally identifiable information
- Information subject to protective orders
Conditional inputs (with appropriate enterprise tools):
- Anonymized case facts
- General legal research queries
- Document drafts for internal review
The policy must specify which tools are approved for which data categories. Under Opinion 512, entering client confidential information into a public AI tool without appropriate safeguards or informed client consent risks violating the duty of confidentiality.
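The tool-approval mapping above can be enforced programmatically. The sketch below is a minimal illustration of a data-classification gate; the category names, tool names, and policy table are hypothetical, not drawn from any firm's actual policy.

```python
# Illustrative sketch of a data-classification gate for AI tool inputs.
# Category names and the tool-approval map below are hypothetical examples.

from enum import Enum

class DataClass(Enum):
    PROHIBITED = "prohibited"    # client confidences, privilege, PII, protective orders
    CONDITIONAL = "conditional"  # anonymized facts, general research, internal drafts
    GENERAL = "general"          # public information

# Which data classifications each approved tool may receive.
TOOL_POLICY = {
    "public_chatbot": {DataClass.GENERAL},
    "enterprise_ai": {DataClass.GENERAL, DataClass.CONDITIONAL},
}

def input_allowed(tool: str, classification: DataClass) -> bool:
    """Return True only if firm policy approves this tool for this data class."""
    return classification in TOOL_POLICY.get(tool, set())
```

In this sketch, prohibited data is never approved for any tool because no policy entry includes it, and unrecognized tools default to an empty approval set, so the check fails closed.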
Verification Requirements
Every governance framework needs explicit verification obligations:
- All AI-generated legal citations must be verified against primary sources
- AI-drafted content requires review before submission to courts or clients
- Factual assertions generated by AI require independent confirmation
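A verification obligation is easier to enforce when it is expressed as a pre-filing gate rather than a reminder. The following is one possible sketch of such a gate; the record structure and field names are assumptions for illustration, not a prescribed workflow.

```python
# Illustrative pre-filing check: block submission until every AI-generated
# citation has been verified against a primary source. The Citation record
# and its fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Citation:
    cite: str
    verified_against_primary_source: bool = False

def ready_to_file(citations: list[Citation]) -> tuple[bool, list[str]]:
    """Return (ok, unverified_cites); ok is True only when nothing is outstanding."""
    unverified = [c.cite for c in citations
                  if not c.verified_against_primary_source]
    return (len(unverified) == 0, unverified)
```

A document management or workflow system could call a check like this before releasing a draft for signature, surfacing the list of unverified citations to the reviewing attorney.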
The Ko v. Li and Morgan & Morgan cases illustrate the consequences of verification failures. In Ko v. Li, the lawyer avoided contempt sanctions only because the court exercised discretion; the substantive violations were clear.
Disclosure Obligations
Federal judges have issued over 200 standing orders requiring AI disclosure in court submissions. State courts increasingly impose similar requirements. Pennsylvania mandates explicit disclosure of AI use in all court submissions as of August 2024.
Policies should address:
- Jurisdictions where disclosure is mandatory
- Internal standards for voluntary disclosure
- Client communication about AI use in their matters
- Documentation requirements for AI assistance
Training and Competency
Opinion 512 requires lawyers to have "a reasonable understanding of the capabilities and limitations of AI tools they use." This standard cannot be met without training.
Effective training programs cover:
- Tool-specific functionality and limitations
- Verification workflows for different use cases
- Ethical obligations and professional responsibility
- Firm-specific policies and procedures
- Updates as tools and regulations evolve
Training should be mandatory before access to AI tools is granted, with periodic refreshers as the landscape changes.
Supervision and Accountability
Model Rules 5.1 and 5.3 impose supervisory obligations on partners and managers. In the AI context, this means:
- Ensuring subordinate lawyers and staff understand AI policies
- Monitoring compliance with verification requirements
- Addressing violations promptly
- Documenting oversight activities
The Morgan & Morgan case illustrates supervisory exposure: supervising attorney T. Michael Morgan was fined $1,000 despite not being involved in creating the problematic filing, because his signature appeared on it.
Implementation Considerations
Phased Rollout
Most firms benefit from phased implementation:
- Policy development (4-8 weeks): Draft policies, gather stakeholder input, obtain approval
- Training development (2-4 weeks): Create materials, identify trainers, schedule sessions
- Pilot deployment (4-8 weeks): Limited rollout with enhanced monitoring
- General availability (ongoing): Broader access with standard oversight
- Continuous improvement (ongoing): Policy updates based on experience and regulatory changes
This approach allows refinement before firm-wide exposure.
Technology Controls
Policies work best when supported by technical controls:
- Approved tool lists enforced through IT systems
- Access controls limiting AI tool availability to trained users
- Logging and monitoring of AI tool usage
- Data loss prevention controls for sensitive information
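Taken together, the first three controls above amount to an allowlist, an access check, and an audit trail. Here is a minimal sketch of how they might compose; the tool names, user identifiers, and log format are all illustrative assumptions.

```python
# Sketch of policy-backed technical controls: an approved-tool allowlist,
# a trained-user access check, and an audit log entry per AI request.
# All names and data below are hypothetical.

import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_usage")

APPROVED_TOOLS = {"enterprise_ai"}    # in practice, enforced via IT systems
TRAINED_USERS = {"associate_smith"}   # access limited to users who completed training

def authorize_request(user: str, tool: str) -> bool:
    """Allow an AI request only for trained users on approved tools, and log it."""
    allowed = tool in APPROVED_TOOLS and user in TRAINED_USERS
    log.info("%s user=%s tool=%s allowed=%s",
             datetime.now(timezone.utc).isoformat(), user, tool, allowed)
    return allowed
```

In a real deployment these checks would live in a network proxy or identity layer rather than application code, but the logic is the same: deny by default, and record every decision for later review.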
Vendor Assessment
Third-party AI tools require due diligence on:
- Data handling and retention practices
- Security certifications and audit reports
- Compliance with applicable regulations
- Contract terms on confidentiality and data use
- Vendor stability and support capabilities
Looking Ahead
Predictions for 2026 suggest legal AI will shift from generic chat interfaces to specialized tools embedded in legal workflows. As adoption matures, firms will demand evidence-based outputs with full traceability and governance controls.
Building governance infrastructure now positions firms to capture AI's efficiency gains while avoiding the ethical and reputational risks that have already affected practitioners who moved without adequate controls.
Key Takeaways
- 80% of AmLaw 100 firms have AI governance boards, while across the profession 79% of firms have adopted AI tools but only 10% have formal governance
- ABA Formal Opinion 512 (July 2024) establishes the ethical framework for AI use, covering competence, confidentiality, and supervision
- Policy essentials include data classification, verification requirements, disclosure obligations, and training mandates
- Over 200 federal judges have issued standing orders requiring AI disclosure; Pennsylvania requires disclosure in all court submissions
- Supervisory lawyers face exposure even when not directly involved in AI-generated content, as demonstrated in recent sanctions cases
Sources
[Crafting an AI Policy for Your Law Firm: 2025 Edition]
CaseMark step-by-step guide to developing comprehensive AI policies, including templates and implementation considerations.
[ABA Formal Opinion 512]
The American Bar Association's first formal ethics opinion on generative AI, covering competence, confidentiality, fees, and supervisory obligations.
[Legal AI Governance Frameworks]
US Legal Support analysis of governance structures, including NIST AI RMF application to legal practice and cross-functional oversight models.
[2026 AI Legal Forecast: From Innovation to Compliance]
Baker Donelson predictions for legal AI regulation and governance requirements, including state-level legislative developments.

