AI Legal Ethics: ABA Model Rules and Formal Opinion 512
Operator-grade read on AI legal ethics. Model Rules 1.1, 1.6, 5.1/5.3, 7.1 and ABA Formal Opinion 512 in practice for attorneys.
The short version: AI use is permitted, even encouraged, when handled competently. The lawyer remains accountable for the output, and the supervisory and confidentiality obligations cannot be delegated.
The four Model Rules that matter most
Rule 1.1 (Competence) — A lawyer must provide competent representation, including understanding the benefits and risks of relevant technology. Generative AI is now considered relevant technology. Lawyers should understand:
- How AI tools work at a basic level
- The risks of hallucination, bias, and confidentiality breach
- Which tasks AI is suited to, and which it is not
Rule 1.6 (Confidentiality) — Lawyers must not disclose client confidences. AI tools that retain client data, use it for training, or share it externally can violate this rule. Practical implications:
- Free consumer ChatGPT, Claude, or other AI tools: do not use with client data
- Enterprise tiers with proper data handling: appropriate when configured correctly
- Document review platforms: must have proper data handling certifications
Rules 5.1/5.3 (Supervision) — Lawyers with managerial or supervisory authority must ensure that AI use by subordinate lawyers and nonlawyer assistants conforms to the Rules. Practical implications:
- Supervising lawyers must ensure AI is used competently
- Junior attorneys should not unilaterally adopt AI for client work
- Verification of AI output is non-delegable
- Workflow design must include attorney review
Rule 7.1 (Communications About a Lawyer's Services) — Lawyers must not make false or misleading communications about their services. Practical implications:
- AI-drafted marketing content requires attorney review
- Claims of expertise, success rates, or capabilities must be substantiated
- Compliance officers should review AI marketing workflows
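The non-delegable verification point above can be made concrete as a workflow gate: a draft cannot be filed until a named attorney has checked every citation against the actual source. This is a minimal illustrative sketch, not a real product's API; the `Draft` class and citation names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Draft:
    """Hypothetical filing draft with an attorney-verification gate."""

    citations: list[str]
    # Maps each verified citation to the attorney who checked it.
    verified_by: dict[str, str] = field(default_factory=dict)

    def mark_verified(self, citation: str, attorney: str) -> None:
        # Only citations actually in the draft can be marked verified.
        if citation not in self.citations:
            raise ValueError(f"unknown citation: {citation}")
        self.verified_by[citation] = attorney

    def ready_to_file(self) -> bool:
        # Gate: filing is blocked until every citation is attorney-verified.
        return all(c in self.verified_by for c in self.citations)


draft = Draft(citations=["Smith v. Jones", "Doe v. Acme Corp."])
draft.mark_verified("Smith v. Jones", attorney="Supervising Partner")
# draft.ready_to_file() is still False: one citation remains unverified.
```

The design choice worth noting is that the gate records *who* verified each citation, which doubles as the documented supervisory review the Rules contemplate.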
What Formal Opinion 512 actually says
The opinion's core conclusions:
- Competence requires AI understanding. Lawyers who use AI must understand its capabilities and limitations well enough to evaluate its output.
- Confidentiality requires deliberate tool selection. Lawyers must select AI tools that protect client confidentiality. Free or consumer-tier AI typically does not.
- Supervisory obligations apply. Partners and supervising lawyers must oversee AI use by junior attorneys and staff just as they oversee non-lawyer assistants.
- Billing must be honest. A lawyer cannot bill a client at hourly rates for the time a task historically took when AI completed it in seconds; only actual attorney time is billable. Value-based billing and modified hourly approaches may be appropriate.
- Candor to tribunal applies. AI-generated citations and arguments must be verified. The Mata v. Avianca facts illustrate the consequence of skipping verification.
What this means in practice
At the individual lawyer level:
- Use enterprise-tier AI tools only for client work
- Verify every citation, quote, and legal proposition AI generates
- Disclose AI use to clients when appropriate (engagement letter)
- Bill honestly for AI-assisted work
- Document supervisory review
At the firm level:
- Adopt a firm-wide AI policy grounded in the Model Rules
- Train attorneys on AI ethics and use
- Configure AI tools for confidentiality
- Document workflow design for AI-assisted work
- Conduct an annual compliance review
State-by-state developments
In addition to the ABA Model Rules, individual states have issued AI guidance:
- California: State Bar guidance on practical use issued 2023, updated 2025
- New York: Bar Association AI Task Force published guidance
- Florida: Specific opinion on AI use for legal research
- Texas: Bar guidance with focus on confidentiality
- Illinois: ARDC guidance addressing supervision
- Washington D.C.: Bar opinion on AI in advocacy
Honest billing for AI-assisted work
Formal Opinion 512 specifically addresses billing:
- Cannot bill 8 hours for what AI completed in 30 minutes
- Can bill for attorney time spent verifying, supervising, and refining
- Can use value-based fees that better reflect AI-augmented work
- Cannot capture AI time savings as phantom hourly billings; efficiency gains generally flow to the client
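The billing rules above reduce to simple arithmetic: bill the attorney time actually spent verifying and refining, never the hours the task historically required. A minimal illustration, using hypothetical hours and rates (the function name and figures are assumptions, not from the opinion):

```python
def ai_assisted_bill(attorney_hours: float, hourly_rate: float,
                     historical_hours: float) -> dict:
    """Illustrative billing math under Formal Opinion 512's honesty principle.

    Only attorney time actually spent (review, supervision, refinement)
    is billable; the pre-AI historical duration is not.
    """
    honest = attorney_hours * hourly_rate
    phantom = historical_hours * hourly_rate  # what must NOT be billed
    return {
        "billable": honest,
        "improper_phantom_billing": phantom,
        "client_savings": phantom - honest,
    }


# Research memo: AI drafts it in minutes; the attorney spends 1.5 hours
# verifying citations and refining; the task historically took 8 hours.
bill = ai_assisted_bill(attorney_hours=1.5, hourly_rate=400.0,
                        historical_hours=8.0)
# The $600 of verification time is billable; the $3,200 historical
# figure is not, and the difference is the client's efficiency gain.
```

The same arithmetic is why many firms move AI-augmented matters to value-based or capped fees rather than trying to reconcile hourly billing with machine-speed drafting.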
Engagement letter language
Many firms now include AI disclosure in engagement letters:
"In providing legal services, our firm may use artificial intelligence tools to assist with research, drafting, document review, and related tasks. All output is reviewed and verified by the attorneys representing you, and client confidentiality is maintained through tools that protect privileged information. Our billing reflects the value of our work, including the time spent reviewing and verifying AI-assisted output."
Tune the language to firm style. The disclosure sets client expectations and protects the firm.
Insurance and risk management
Legal malpractice insurers are starting to ask:
- Do you use AI in client work?
- What tools?
- What supervision is in place?
- What training have attorneys received?
What can go wrong
The patterns we see at firms that get into trouble:
Pattern 1: Unauthorized use. Associate uses consumer ChatGPT with client documents. Firm has no policy. Confidentiality is breached.
Pattern 2: Unverified citations. Mata v. Avianca pattern. Brief filed with hallucinated cases. Sanctions or worse.
Pattern 3: Dishonest billing. Firm bills hourly rates for AI-completed work as if it took the historical durations. The client discovers. Reputation and ethics consequences follow.
Pattern 4: Inadequate supervision. Junior lawyers run AI tools without partner review. Errors compound. Malpractice exposure.
Pattern 5: Marketing claims. AI-generated marketing makes unsupported claims. Bar discipline.
Each pattern is preventable with a structured AI policy and training. Each is increasingly common when firms deploy AI without ethics infrastructure.
What we recommend
For firms deploying AI:
- Written firm-wide AI policy under Model Rules
- Approved tool list (Tier 1: approved without conditions; Tier 2: approved with documented use; Tier 3: prohibited)
- Annual attorney training (90 minutes minimum, documented)
- Engagement letter language addressing AI use
- Workflow design with attorney verification gates
- Quarterly compliance review of AI usage
- Annual policy refresh
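The approved-tool list above can be enforced in software as well as on paper. This is a sketch under assumed names (the tool identifiers and `TOOL_POLICY` mapping are hypothetical, not a real firm's list); the key design choice is default-deny, so a tool nobody has reviewed is treated as prohibited.

```python
from enum import Enum


class Tier(Enum):
    APPROVED = 1             # Tier 1: approved without conditions
    APPROVED_DOCUMENTED = 2  # Tier 2: approved, each use must be documented
    PROHIBITED = 3           # Tier 3: prohibited for client work


# Hypothetical firm policy mapping tool identifiers to tiers.
TOOL_POLICY = {
    "westlaw-precision": Tier.APPROVED,
    "chatgpt-enterprise": Tier.APPROVED_DOCUMENTED,
    "chatgpt-free": Tier.PROHIBITED,
}


def check_tool(tool: str) -> Tier:
    """Default-deny: any tool not on the approved list is prohibited."""
    return TOOL_POLICY.get(tool, Tier.PROHIBITED)
```

A lookup like `check_tool("chatgpt-free")` returns `Tier.PROHIBITED`, and so does any tool the policy has never heard of, which is exactly the behavior an "approved tool list" implies.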
Bottom line
AI ethics for attorneys is not a barrier to AI use — it's the operating manual. ABA Formal Opinion 512 makes the framework clear: competent use, confidentiality protection, supervisory oversight, honest billing, candor to tribunal, accurate communications.
Firms that build this infrastructure can deploy AI aggressively and ethically. Firms that ignore it deploy AI at growing legal and reputational risk. The competitive firms in 2027-2030 will be the ones that took the ethics work seriously today.
Frequently asked questions
What is ABA Formal Opinion 512?
Issued July 2024, it's the ABA's most comprehensive AI ethics guidance for attorneys. It clarifies how Model Rules 1.1 (competence), 1.6 (confidentiality), 5.1/5.3 (supervision), 7.1 (communications), and others apply to generative AI use in legal practice.
Can I bill clients hourly for AI-assisted work?
Not at historical rates for time AI saved. You can bill for attorney time spent verifying, supervising, and refining AI output. Many firms are shifting to value-based or capped fees for AI-augmented work to align billing with value delivered.
Do I have to disclose AI use to clients?
Not strictly required under most rules, but increasingly recommended. Many firms include AI disclosure in engagement letters as a transparency and risk-management practice. State rules vary — check your jurisdiction.
What tools can I use without violating confidentiality?
Enterprise-tier tools with proper data handling (data not used for training, encryption, audit logs, configurable retention). Examples: ChatGPT Enterprise, Claude Team/Enterprise, Microsoft Copilot, Westlaw Precision, Lexis+ AI, Casetext CoCounsel, Harvey. Avoid free consumer-tier AI with client data.
Will using AI in legal work affect my malpractice insurance?
Some insurers ask about AI policy and training as part of underwriting. Some offer modest premium reductions for firms with documented AI policies and training. Some add exclusions for AI-related errors without verification. Read your policy carefully.
Need help implementing this?
//prometheus does onsite AI consulting and implementation in Milwaukee. We set it up, train your team, and make sure it works.