AI for Attorneys & Law Firms

AI Citation Verification Workflow: The Mata v. Avianca Discipline

How attorneys verify AI-generated citations to avoid Mata v. Avianca outcomes. Step-by-step workflow, tools, and time investment.

In 2023, Mata v. Avianca became the legal community's most-cited AI cautionary tale. Attorneys used ChatGPT for research and filed a brief citing six fictional cases the tool had invented. The court sanctioned them. The case became shorthand for "what happens when you don't verify AI output."

Verification discipline is not optional. It's the difference between confident AI deployment and career-ending malpractice exposure.

Here's the workflow.

The four-step verification

For every AI-generated citation in any client document:

Step 1: Confirm case exists

  • Search the citation in Westlaw, Lexis, or Google Scholar
  • If the case doesn't appear, the citation is hallucinated
  • Stop. Do not file. Find an actual case or remove the proposition.

Step 2: Pull the case and read the relevant portion

  • Don't trust AI summaries — read the case
  • Identify the specific holding or passage AI references
  • Confirm it actually says what AI claims

Step 3: Confirm any quoted language

  • AI sometimes "quotes" language that isn't in the case
  • Search the case for the exact quoted text
  • If not present, the quote is hallucinated

Step 4: Check current treatment

  • Run Shepard's, KeyCite, or equivalent
  • Verify the case hasn't been overruled, criticized, or distinguished in problematic ways
  • Note any negative treatment in your memo or brief

This is non-negotiable for any client deliverable.
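For firms that track verification in software, the four steps above can be sketched as a simple per-citation record. This is a minimal illustration, not a prescribed schema — the class and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CitationCheck:
    """Tracks the four verification steps for one AI-generated citation.

    Hypothetical structure for illustration; adapt field names to your
    firm's own workflow and documentation standards.
    """
    citation: str
    exists_in_database: bool = False   # Step 1: found in Westlaw/Lexis/Scholar
    holding_confirmed: bool = False    # Step 2: case read, proposition supported
    quotes_confirmed: bool = False     # Step 3: quoted language located in the case
    treatment_checked: bool = False    # Step 4: citator run, no fatal negative treatment

    def cleared_for_filing(self) -> bool:
        # A citation clears only when every step has passed. A Step 1
        # failure means the citation is hallucinated and nothing else matters.
        return all([
            self.exists_in_database,
            self.holding_confirmed,
            self.quotes_confirmed,
            self.treatment_checked,
        ])

# Example: Step 1 failed, so the citation must not be filed.
fake = CitationCheck(citation="Placeholder v. Example")
assert not fake.cleared_for_filing()
```

The point of the all-four gate is that no single step substitutes for another: a real case (Step 1) can still be cited for a proposition it doesn't support (Step 2), with a quote it doesn't contain (Step 3), after being overruled (Step 4).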

Time investment

The common initial reaction: "this slows AI research down significantly." The actual numbers:

  • AI initial research: 30-60 min
  • Verification: 60-90 min (Step 1 quick, Steps 2-4 take the time)
  • Total: 90-150 min versus 8-12 hours manual

Still a 70-80% time savings versus manual research. The verification doesn't eliminate the AI advantage; it makes the AI advantage safe.

Tools that help verification

Westlaw KeyCite — Standard for U.S. case treatment analysis.

Lexis Shepard's — Equivalent for Lexis users.

Casetext — Includes treatment analysis.

Google Scholar — Free basic citation lookup.

Specialized verification tools — Some AI platforms now include built-in verification with confidence scoring.

The supervisor's role

For supervising attorneys reviewing junior associates' AI-assisted work:

  • Verify the junior attorney actually performed verification (not just claimed to)
  • Spot-check 2-3 citations independently
  • Read at least one case fully to confirm the synthesis
  • Sign off only after verification confidence

This is ABA Model Rule 5.1/5.3 supervision applied to AI workflows.

Documenting verification

For each AI-assisted research project, document:

  • Which AI tool was used
  • What query was run
  • Which citations were verified (and by whom)
  • Any citations that failed verification (and what replaced them)
  • Date of verification

This documentation matters for:

  • Client files
  • Malpractice insurance compliance
  • Internal compliance audit
  • Bar discipline defense if questioned
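The documentation fields listed above can be assembled into a single file-ready record. A minimal sketch, assuming hypothetical field names (`verification_record`, `ai_tool`, and the rest are illustrative, not a standard):

```python
import datetime

def verification_record(ai_tool, query, verified, failed):
    """Assembles the documentation fields listed above into one record.

    `verified` is a list of (citation, verifier) pairs; `failed` is a list
    of (hallucinated citation, what replaced it) pairs. Field names are
    illustrative only.
    """
    return {
        "ai_tool": ai_tool,                 # which AI tool was used
        "query": query,                     # what query was run
        "citations_verified": verified,     # which citations, and by whom
        "citations_failed": failed,         # failures and their replacements
        "verification_date": datetime.date.today().isoformat(),
    }

# Example record for a client file (all values hypothetical):
record = verification_record(
    ai_tool="(research tool name)",
    query="standard for equitable tolling, 2d Cir.",
    verified=[("Doe v. Roe, 123 F.3d 456", "A. Associate")],
    failed=[("Fictional v. Case, 999 F.9th 1", "replaced with Doe v. Roe")],
)
```

A flat record like this is easy to drop into a client file, export for an insurance or compliance audit, and produce if a filing is ever questioned.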

Patterns that cause hallucination

AI is more likely to hallucinate when:

  • Asked about very recent cases (within 30 days)
  • Asked about specific jurisdictions that have less data
  • Asked about niche practice areas
  • Pushed for specificity it doesn't have
  • Asked leading questions that suggest a desired outcome

Recognizing these patterns tells you when to apply extra verification scrutiny.

When verification fails

If verification reveals a hallucinated citation:

  • Stop using that proposition unless you can independently support it
  • Document the hallucination as a tool issue
  • Continue verifying — one hallucination often signals more in the same output
  • Consider whether the tool is appropriate for your use case

Do not file a brief with hallucinated citations. Even if you "fix" some, the others may be wrong too. Rebuild the research with verified sources.

The Mata v. Avianca lessons

From the actual case:

  • AI generated citations that didn't exist. The lawyers used ChatGPT for research and didn't verify.
  • Lawyers signed and filed the brief anyway. Rule 11 of the Federal Rules of Civil Procedure obligates lawyers to certify, after reasonable inquiry, that their filings are grounded in law and fact.
  • Opposing counsel discovered the issue. The fictional cases were caught by adversaries, not the filing attorneys.
  • Court sanctioned the attorneys. Sanctions, public embarrassment, professional consequences.
  • The case became the reference point for AI verification standards. It is cited in subsequent ethics opinions and bar guidance.

The lesson is simple: verify before filing. Always. No exceptions.

What we deploy

For firms working with us on AI verification:

  • Verification workflow integrated with research process
  • Attorney training on the four-step verification
  • Documentation framework for verification records
  • Supervisory review process
  • Quarterly compliance audit of AI-assisted work

The verification discipline isn't a barrier to AI deployment. It's the foundation that makes AI deployment safe.

Bottom line

AI hallucinates. Sometimes obviously (entire fictional cases), sometimes subtly (correct case, wrong proposition). The four-step verification workflow catches both.

The time cost of verification is real but manageable. The cost of not verifying is career-ending. Every firm and every attorney using AI for legal work needs structured verification discipline.

This isn't a "nice to have." It's the operating manual for AI-assisted legal practice in 2026. Build it into your workflow, train your staff on it, supervise its application, document its execution. The Mata v. Avianca lesson is the discipline that prevents the next Mata v. Avianca.

Frequently asked questions

What happened in Mata v. Avianca?

Attorneys used ChatGPT for legal research, filed a brief citing six fictional cases AI generated, and were sanctioned. The case is the standard cautionary tale for AI use without verification.

How do I verify an AI citation?

Four steps: (1) Confirm the case exists in Westlaw/Lexis/Google Scholar; (2) Pull the case and read the relevant portion; (3) Confirm any quoted language is actually in the case; (4) Check current treatment via KeyCite or Shepard's. Do all four before filing.

How much time does verification add to AI research?

60-90 minutes per research project. Total AI-assisted research with verification still saves 70-80% versus manual research. The verification makes AI deployment safe, not slow.

Can I trust AI summaries of cases?

Use them as starting points only. Always pull and read the actual case before citing it, quoting it, or relying on it. AI can summarize incorrectly even when the case exists.

What if I find AI hallucinated a citation?

Stop using that proposition unless independently supportable. Document the hallucination. Continue verifying — one hallucination often signals more in the same output. Do not file a brief with any hallucinated content.

Need help implementing this?

//prometheus does onsite AI consulting and implementation in Milwaukee. We set it up, train your team, and make sure it works.

let's talk