Attorney-Client Privilege + AI: When the Privilege Survives
Lawyers using AI on client matters face a specific question: does transmitting client information to an AI service waive the privilege?
The answer is jurisdiction-specific, but state bar opinions have generally landed in similar places. Here's the synthesis.
This is not legal advice. Talk to your firm's ethics counsel.
The general rule
Attorney-client privilege protects confidential communications made for the purpose of obtaining or providing legal advice. Disclosure to a third party generally waives the privilege unless the third party is "necessary" for the legal representation.
Necessary third parties traditionally include paralegals, secretaries, and expert consultants. The question for AI: is it analogous to these necessary third parties, or is it an outside third party whose involvement waives the privilege?
How state bar opinions are landing
State bar opinions (California, New York, Florida, others) have generally concluded that AI tools CAN be used without waiver IF:
1. The AI vendor's terms maintain confidentiality (no training on client data, no third-party disclosure)
2. The attorney has taken reasonable steps to verify the vendor's security
3. The vendor is contractually bound to those confidentiality obligations
4. The client has been informed and/or has consented (required in some jurisdictions)
This parallels the earlier analysis for cloud services. The new wrinkle is that AI services are often configured to train on inputs by default, which would defeat that analysis.
Patterns that preserve privilege
Enterprise AI with confidentiality terms. Claude for Enterprise, ChatGPT Enterprise, Microsoft Copilot via tenant. Confirm in writing that no training occurs on inputs. Confirm data is not used for any purpose other than serving your requests. Confirm appropriate security.
Legal-research AI platforms. Lexis+ AI, Westlaw Precision AI, and Bloomberg AI are legal-research-specific tools with terms built for legal practice. Most have confidentiality terms aligned with privilege requirements.
Self-hosted models. Models running on infrastructure you control. No third-party disclosure to argue about.
De-identified inputs. Where possible, strip identifying information before AI processing. The privilege analysis is easier when the AI doesn't see the privileged content directly.
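As a rough illustration of what de-identification can look like in practice, here is a minimal Python sketch that swaps common identifying tokens for placeholders before text reaches an AI service. The function name and patterns are hypothetical examples, not a complete solution; real matter de-identification needs purpose-built redaction tooling and human review.

```python
import re

# Hypothetical placeholder patterns: emails, US phone numbers, docket numbers.
# Real de-identification must cover far more (names, addresses, account numbers)
# and should be reviewed before anything leaves the firm.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}:\d{2}-cv-\d{3,5}\b"), "[DOCKET]"),
]

def deidentify(text: str) -> str:
    """Replace identifying tokens with placeholders before AI processing."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(deidentify("Email jane.doe@client.com re 1:23-cv-04567, call 555-867-5309."))
```

The point is the workflow, not the regexes: strip identifiers first, keep the mapping inside the firm, and the privilege analysis gets easier because the AI never sees the privileged specifics.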
Patterns that risk waiver
Consumer-tier AI services for client matters. Most consumer terms permit training on inputs and don't promise confidentiality. Pasting client communications into ChatGPT consumer is a high-risk pattern.
AI tools that share data with subprocessors without disclosure. Some AI services use third-party data processors. You need to know the full chain.
AI tools whose terms permit secondary use. "We may use your data to improve our services" is fine for non-confidential data. For client matters, this is a problem.
Public-facing AI prompts. Some attorneys use AI through public chat interfaces (the ChatGPT website, the Claude.ai website). Even when the firm has an enterprise tier, the consumer interface may carry terms different from those of the enterprise deployment or API.
The client consent question
Some state bars now suggest disclosing AI use to clients. A growing best practice, and in some jurisdictions a requirement, is to address AI in the engagement letter.
Sample language that firms have used:
"We may use AI-assisted tools to support our work on your matter. These tools meet our confidentiality and security standards, and any data shared with these tools is protected by contractual terms with the AI vendor. We do not believe such use waives attorney-client privilege, but you should know that we use these tools. If you have any concerns or wish to opt out, please contact us."
Some firms add more detail. Some require explicit consent for AI use, especially in high-stakes matters.
The citation hallucination question
Separate from privilege, AI-generated case law citations are a known risk. Multiple attorneys have been sanctioned for filing briefs containing fake AI-generated citations.
The rule is simple: verify every AI-generated citation against a primary source: Lexis, Westlaw, Bloomberg, or the court's own records. AI-generated citations should NEVER be filed without independent verification.
This isn't a privilege issue. It's a competence and candor issue, and an equally important one.
The internal use vs external use distinction
AI for purely internal work (firm operations, legal research orientation, internal communications) is less risky on privilege than AI for client-facing work product.
If you're starting AI adoption at a firm, internal use is the safer entry point. Build muscle there. Expand to client-facing work once your governance is solid.
What firms should have
- Written AI policy reviewed by ethics counsel
- Approved AI tools list (with the terms verified)
- Banned AI tools list (consumer services for client work)
- Annual training for attorneys on AI use
- Citation-verification protocol
- Disclosure language in engagement letters
Firms without these are exposed. Firms with these are operating responsibly.
The bottom line
Privilege + AI is workable. Enterprise tools with confidentiality terms preserve privilege under most state bar analyses. Consumer tools risk waiver.
The firms in trouble are using consumer AI and skipping verification. The firms doing it right are using enterprise tools, verifying outputs, and disclosing use to clients.
Not legal advice. Talk to your ethics counsel before adopting AI for client matters.
Want the full guide? Check out our deep-dive page for more context, FAQs, and resources.