// field notes · Josh · May 12, 2026 · 9 min read

The Law Firm That Almost Shipped an AI Policy That Would Have Gotten Them Sanctioned

A 30-attorney boutique was 48 hours from launching a firm-wide AI policy. I read the draft and my stomach dropped. Here's what was wrong, what we fixed, and what every firm should check before they publish theirs.


The managing partner forwarded me the policy at 11 PM on a Tuesday with the subject line "we're launching this Thursday, anything we missed?"

I read it twice. I drafted a reply. I deleted the reply. I called him.

The policy was not bad in tone. It was thoughtful. It had been through three drafts and reviewed by their general counsel. It just had three things in it that would have ended someone's career.

The first problem: client consent boilerplate

The policy said: "Where AI tools are used in client matters, attorneys should disclose use in matter-opening communications when appropriate."

When appropriate. That phrase is the entire problem.

Several state bar opinions in effect at the time required disclosure of generative AI use in any matter where AI-generated output materially contributed to attorney work product. Not "when appropriate." Period.

If you ship a policy that gives attorneys discretion when the rule gives them no discretion, every attorney who exercises that discretion wrong is operating outside the firm's policy and outside the bar rules. The firm gets to argue they had a policy. The attorney does not.

We rewrote it. Disclosure is required, in writing, in the engagement letter, for any use that contributes materially to drafting or analysis. Period.

The second problem: confidentiality and prompts

The policy listed "approved AI tools" for attorney use. ChatGPT Plus was on the list. Claude Pro was on the list. Both were being used through attorneys' personal accounts.

This is a privilege disaster waiting to happen.

When an attorney pastes a client communication into ChatGPT for summarization, they are transmitting potentially privileged material to a third party. Whether that breaks privilege depends on the state and the specific facts. But you do not want to find out in front of a malpractice carrier.

The fix is twofold. First, the firm gets enterprise contracts with explicit data-use terms (Claude for Enterprise, ChatGPT Enterprise, or Microsoft Copilot via the firm tenant). Personal accounts are banned for client material full stop. Second, the policy specifies what can be put into AI tools without further consent (publicly available information, generic legal research) and what requires either redaction or client consent.

It's not a fun policy to write because there's no clean line. But the firm-wide rule should err toward "if you wonder, ask."

The third problem: supervisory duty

ABA Model Rules 5.1 and 5.3 require partners to make reasonable efforts to ensure subordinates comply with the rules of professional conduct. The firm's policy said: "Attorneys are responsible for verifying AI output before use in client work."

Sounds fine. Isn't.

The supervisory obligation under 5.1 and 5.3 doesn't dissolve when AI is the producer of the work. The supervising attorney's duty to ensure competent work product extends to AI-produced drafts. If an associate uses Claude to draft a motion and a partner signs off without reading carefully, the partner owns the result.

The policy needed to make this explicit. Supervisory attorneys must conduct meaningful review of AI-assisted work product, and "meaningful" is defined in the policy itself: citation check, factual verification, jurisdictional accuracy.

The original draft would have let partners argue "I told the associate to verify." Insufficient under 5.1.

What we shipped instead

The launched policy was 7 pages. The original draft was 3. The extra length is mostly examples and the citation-check protocol.

Key changes from the draft:

- Disclosure required, not encouraged
- Enterprise-tier tools only; personal accounts banned for client work
- Mandatory citation-check protocol (verifying AI-generated case law against two independent sources before any filing)
- Mandatory supervisory review protocol
- Annual training requirement
- Documented breach reporting process

The annual training is the part most firms skip. Don't skip it. Policies don't enforce themselves. Training is how the policy becomes practice.

What the firm was getting right

In fairness to them, the impulse to publish a policy was correct. Most firms I see haven't written one. They're using AI tools in client matters with no firm-level governance. That's worse than a flawed policy.

The firm also had a good instinct on tooling. They picked Lexis+ AI and Westlaw Precision AI for case research, both of which have privilege-protective enterprise terms. They had not yet picked a general-purpose drafting tool; I recommended Claude for Enterprise.

What I'd tell another firm starting from zero

Write your policy before your associates write it for you by accident.

Use the ABA Model Rules and your state bar's most recent guidance as the floor. Pull the actual opinions. Don't rely on a policy template you find online because the template will be six months stale, which is years in AI time.

Get malpractice counsel to read the policy before you launch it. Pay for the hour. It's worth it.

Train every attorney on the policy in person. Not a video. Not an email. A session where they can ask questions. The questions tell you where the policy is unclear.

Audit AI use quarterly. Pick three matters at random. Ask the assigned attorneys how AI was used. Verify against the policy. Adjust the policy based on what you find.

What this isn't

This isn't an "AI is dangerous for law" story. AI in law firms is a productivity multiplier on the order of 2-3x for the right tasks (research, summarization, first-draft generation). Firms that don't adopt are going to lose to firms that do.

It's a story about how the firm-level governance has to match the speed of adoption. The firm I worked with did the right thing — they slowed down, called for a second opinion, and shipped a better policy two weeks late instead of a flawed policy on time.

That's the entire game right now. Not whether to adopt. How to adopt without getting your license suspended in the process.

law firm · legal ai · compliance · ethics · case study · field notes