AI Conflict Check for Law Firms in 90 Lines
Conflict checking is the highest-stakes mundane work in a law firm. Here's a working AI-augmented pattern that runs in 8 seconds, catches name variations, and flags adjacencies humans miss.
Conflict checks are the most-disliked work in a law firm. They're slow, error-prone, and consequential. Miss one and you lose a client or violate ethics rules.
This is a working pattern I deployed at a 14-attorney firm. The conflict check runs in 8 seconds. It catches name variations and corporate adjacencies that the old system missed.
What the firm had before
Their CMS (Aderant, in this case) had a built-in conflict check tool. It was a substring match: type "Acme Corporation," and it returns matters involving the string "Acme Corporation."
It missed:

- "Acme Corp" vs "Acme Corporation"
- "Acme Holdings" as a parent of Acme Corp
- "John Smith" vs "John A. Smith" vs "Jonathan Smith"
- A former employee of opposing counsel now working at the prospect
- Adjacent matter types (we represent Acme in IP; the new prospect is suing Acme's subsidiary in employment)
Each miss was potentially malpractice. The firm's risk partner spent an hour per conflict check doing manual cross-references.
The build
Step 1: Build a clean prospect record. When a new matter comes in, capture:

- Client name + any aliases
- Parent/subsidiary relationships if known
- Key individuals (officers, attorneys, witnesses)
- Adverse parties
- Matter type / practice area
- Relevant time period
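Here's roughly what that record looks like as a dataclass. The field names are illustrative, not a schema any CMS will recognize:

```python
from dataclasses import dataclass, field

@dataclass
class ProspectRecord:
    """Everything intake captures about a new matter. Names are stored raw;
    normalization happens in the pre-check step (Step 5)."""
    client_name: str
    aliases: list[str] = field(default_factory=list)
    related_entities: list[str] = field(default_factory=list)  # parents/subsidiaries, if known
    key_individuals: list[str] = field(default_factory=list)   # officers, attorneys, witnesses
    adverse_parties: list[str] = field(default_factory=list)
    matter_type: str = ""   # practice area, e.g. "employment"
    time_period: str = ""   # e.g. "2019-2023"
```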
Step 2: Pull all candidates from the CMS. Substring match on the prospect's name, parties, key individuals. This is what the old system did. It returns 30-200 candidates.
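The query itself is firm-specific, but the shape it returns shouldn't be. A hypothetical interface (`CandidateMatter` and `pull_candidates` are my names; the body is a deliberate stub):

```python
from dataclasses import dataclass

@dataclass
class CandidateMatter:
    matter_id: str
    summary: str       # parties, practice area, dates, status
    matched_on: str    # which search term produced this hit

def pull_candidates(prospect: ProspectRecord) -> list[CandidateMatter]:
    """Substring search across the CMS, on every name the prospect record carries."""
    terms = ([prospect.client_name] + prospect.aliases + prospect.related_entities
             + prospect.key_individuals + prospect.adverse_parties)
    # In a real integration, run each term through the CMS search endpoint
    # and dedupe on matter_id. Left as a stub because it is firm-specific.
    raise NotImplementedError(f"search your CMS for: {terms}")
```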
Step 3: Claude scores each candidate. For each candidate matter the CMS returns, send Claude:
```
Score this potential conflict.

Prospect: {prospect_data}
Existing matter: {existing_matter_summary}
Relationship between names: {string_similarity, parent_subsidiary_check}

Return:
- conflict_likelihood: 0-10
- conflict_type: "direct" | "former_client" | "positional" | "adjacent" | "personnel" | "none"
- reasoning: one sentence
- recommended_action: "decline" | "consent_required" | "screen" | "no_conflict"

Score 6+ requires human review. Score 9+ should be assumed conflict pending clearance.
```
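Here's a minimal sketch of that scoring loop using the Anthropic Python SDK, building on the record types above. The model ID is just one current option, the `score_candidate`/`score_all` names are mine, and the trailing JSON-only instruction is an addition so the response parses cleanly. Treat it as illustrative, not the firm's actual code:

```python
import json

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = """Score this potential conflict.

Prospect: {prospect}
Existing matter: {matter}
Relationship between names: {name_signals}

Return a JSON object with keys:
- conflict_likelihood: 0-10
- conflict_type: "direct" | "former_client" | "positional" | "adjacent" | "personnel" | "none"
- reasoning: one sentence
- recommended_action: "decline" | "consent_required" | "screen" | "no_conflict"

Score 6+ requires human review. Score 9+ should be assumed conflict pending clearance.
Respond with only the JSON object."""

def score_candidate(prospect: ProspectRecord, matter: CandidateMatter,
                    name_signals: str) -> dict:
    """Send one prospect/candidate pair to Claude and parse the score."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # pick whatever current model you trust
        max_tokens=300,
        messages=[{"role": "user", "content": PROMPT.format(
            prospect=prospect, matter=matter.summary, name_signals=name_signals)}],
    )
    return json.loads(response.content[0].text)  # production code wants stricter parsing

def score_all(prospect: ProspectRecord,
              candidates: list[CandidateMatter]) -> list[dict]:
    """Score every candidate and tag each result with its matter id."""
    scored = []
    for matter in candidates:
        # name_signals comes from the pre-check: string similarity plus any
        # parent/subsidiary hints for the term this candidate matched on
        signals = f"matched on '{matter.matched_on}'"
        result = score_candidate(prospect, matter, signals)
        result["matter_id"] = matter.matter_id
        scored.append(result)
    return scored
```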
Step 4: Aggregate and present. The system returns a ranked list. The risk partner reviews the top 10 (or all 9+ scores). The 30-200 candidates have been narrowed to a handful with clear reasoning.
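This is the smallest piece: sort, cut at the thresholds from the prompt, print. A sketch, with the 6/9 floors mirroring the prompt above:

```python
def present(scored: list[dict], review_floor: int = 6,
            assume_floor: int = 9) -> tuple[list[dict], list[dict]]:
    """Rank scored candidates; split into assumed conflicts (9+)
    and the human-review queue (6-8)."""
    ranked = sorted(scored, key=lambda s: s["conflict_likelihood"], reverse=True)
    assumed = [s for s in ranked if s["conflict_likelihood"] >= assume_floor]
    review = [s for s in ranked
              if review_floor <= s["conflict_likelihood"] < assume_floor]
    for s in assumed + review:
        print(f"{s['conflict_likelihood']:>2}  {s['conflict_type']:<15} "
              f"{s['matter_id']}  {s['reasoning']}")
    return assumed, review
```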
Step 5: Name variation pre-check. Before scoring, expand prospect names using a name-normalization step:

- "Corporation" / "Corp" / "Inc" / "LLC" / "Ltd" all map to a normalized form
- Common first-name nicknames are expanded (Bob → Robert, Bill → William, etc.)
- Last-name suffixes (Jr, Sr, III) are stripped for matching but flagged separately
The pre-check expands the candidate pool. Claude then narrows it.
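Here's a sketch of that layer, roughly the ~20 lines counted in the tally below. The suffix map and nickname table are starter sets, not the firm's full lists:

```python
import re

ENTITY_SUFFIXES = {"corporation": "corp", "incorporated": "inc", "company": "co",
                   "limited": "ltd", "corp": "corp", "inc": "inc",
                   "llc": "llc", "ltd": "ltd", "co": "co"}
NICKNAMES = {"bob": "robert", "bill": "william", "dick": "richard",
             "jack": "john", "jon": "jonathan", "peggy": "margaret"}
NAME_SUFFIXES = {"jr", "sr", "ii", "iii", "iv"}

def normalize(name: str) -> tuple[str, set[str]]:
    """Return (normalized name, flags). Entity suffixes collapse to one form,
    nicknames expand, generational suffixes are stripped but flagged."""
    flags = set()
    tokens = re.sub(r"[.,]", "", name.lower()).split()
    out = []
    for tok in tokens:
        if tok in NAME_SUFFIXES:
            flags.add(f"suffix:{tok}")  # matters for John Smith Jr vs Sr
            continue
        tok = ENTITY_SUFFIXES.get(tok, tok)
        tok = NICKNAMES.get(tok, tok)
        out.append(tok)
    return " ".join(out), flags

# normalize("Acme Corporation") and normalize("Acme Corp.") both
# yield ("acme corp", set()), so they now match.
```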
The 90-line claim
The whole pattern (excluding the CMS query layer which is firm-specific) is about 90 lines of code:
- ~20 lines for name normalization
- ~15 lines for parent/subsidiary lookup (we used Clearbit's company graph as a hint, then human verification)
- ~30 lines for the Claude scoring loop
- ~25 lines for aggregation and presentation
The CMS query layer is whatever your CMS exposes. Aderant has an API. Other firms have other paths. Plan on a few days of integration work there.
What it caught
In the first 3 months at this firm:
- 6 conflicts the old substring tool would have missed. Three of those would have created clear ethics issues; the firm declined those matters with documented clearance.
- Roughly a 40% average reduction in time per conflict check. Some clearances are still complex (multi-party, large institutional clients) and take real partner time, but the simple ones now take minutes instead of an hour.
- One false positive that the partners decided was actually a real issue we hadn't categorized. We added "personnel_history" to the conflict_type taxonomy.
What broke
Claude initially over-flagged. The first version of the prompt was too cautious: score 7+ was set as "review required," and the partners were drowning in reviews. We tightened the prompt so a 7+ now requires specific evidence of a likely conflict, not just "names appear similar."
Parent/subsidiary data is hard. Clearbit is decent but not authoritative. SEC filings are authoritative but harder to parse. We built a hybrid that flags potential parent/subsidiary relationships from Clearbit and asks a paralegal to verify before clearance is final.
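Here's a sketch of the hint side of that hybrid. I'm assuming Clearbit's company enrichment response exposes parent-company fields (verify the exact field names against their current schema); every hint starts unverified, and a paralegal flips the flag after checking filings:

```python
import requests

CLEARBIT_URL = "https://company.clearbit.com/v2/companies/find"

def parent_hints(domain: str, api_key: str) -> list[dict]:
    """Pull possible parent relationships from Clearbit as hints only."""
    resp = requests.get(CLEARBIT_URL, params={"domain": domain},
                        headers={"Authorization": f"Bearer {api_key}"},
                        timeout=10)
    resp.raise_for_status()
    data = resp.json()
    hints = []
    # Assumption: the response carries parent/ultimateParent objects --
    # treat these as leads, never as authoritative corporate structure.
    for rel in ("parent", "ultimateParent"):
        related = data.get(rel) or {}
        if related.get("domain"):
            hints.append({"relation": rel, "domain": related["domain"],
                          "verified": False})  # paralegal verifies before clearance
    return hints
```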
The system needs the human in the loop. Claude doesn't make conflict decisions. It surfaces likely conflicts with reasoning. A licensed attorney (the risk partner) makes the call.
What this isn't
This is not a replacement for the firm's conflict-clearance process. It's a triage tool. The clearance decision is still a human, licensed attorney's decision.
It's also not a substitute for thorough new-matter intake. Garbage in, garbage out. The system depends on the prospect record being clean. If the partner doing intake skips parts, the system misses things.
What to build first
If you're at a law firm with this problem:
One, the name normalization layer. This alone improves the existing CMS search materially. No AI required.
Two, the Claude scoring of candidates. Layer this on top of the existing CMS search. You don't need to replace the CMS — you augment it.
Three, the parent/subsidiary check. Last because it requires data sourcing decisions.
Total build: about a week for a dev who knows their CMS API. The risk-partner time savings pays it back in two months.