AI eDiscovery Review Workflow for Law Firms
Operator workflow for AI-assisted eDiscovery review. Tools, workflow steps, attorney verification, and the economics that make it work.
AI-assisted review changes the math. Volume is no longer the bottleneck. The bottleneck is workflow design, attorney supervision, and verification discipline.
Here's the operator workflow.
What AI handles in eDiscovery
- Predictive coding / Technology-Assisted Review (TAR): Categorize documents as relevant, non-relevant, or privileged based on attorney training set
- Concept clustering: Group similar documents for batch review
- Communication analysis: Identify key custodians and communication patterns
- Privilege detection: Flag potentially privileged documents
- Key fact identification: Surface documents addressing specific issues
- Production preparation: Format and redact for production
What attorneys handle
- Initial classification training (seed set)
- Verification of AI categorization
- Privilege review confirmation
- Strategic decisions on production
- Discovery dispute resolution
- Final attorney sign-off
The tools
Relativity with aiR: Industry standard. Relativity has been the dominant eDiscovery platform for years; the aiR layer adds modern AI review capabilities.
DISCO: Modern challenger. Strong AI features, cloud-native architecture. Growing share at firms switching from Relativity.
Reveal (acquired Logikcull): AI-powered eDiscovery, growing capability.
Everlaw: Modern eDiscovery with strong AI features.
Brainspace (acquired by Reveal): Analytics-focused.
Zylab: International strength.
Choice depends on firm size, matter type, and existing infrastructure. Most AmLaw firms run Relativity. DISCO is gaining share at firms valuing modern architecture.
The standard workflow
Phase 1: Setup and processing (days to weeks)
- Collect documents from custodians and sources
- Process documents (deduplication, near-duplication, threading)
- Prepare for review platform ingestion
- Set up review universe
Phase 2: AI training and prediction
- Train AI on seed set of attorney-coded documents (~500-2,000 documents)
- Run AI prediction across full document population
- AI returns predicted relevance categorization with confidence scoring
- Attorney reviews AI output and refines training
Phase 3: Iterative review and refinement
- Review batches of AI-categorized documents
- Code variances (where attorney disagrees with AI)
- Refine training set
- Re-run AI prediction
- Repeat until model converges to acceptable accuracy
- Apply final AI categorization
Phase 4: Privilege review and production preparation
- Privilege review of relevant population
- Redaction of privileged or protected information
- Production formatting
Phase 5: Final QC and production
- Lead attorney reviews production set
- Confirms relevance, privilege, and production accuracy
- Produces to opposing counsel
- Maintains production log
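The train-predict-review loop above is an active-learning cycle at heart. A minimal sketch of one round, using scikit-learn's TF-IDF and logistic regression as a stand-in for a vendor's TAR engine (the function names, batch size, and model choice are illustrative assumptions, not any platform's actual implementation):

```python
# Sketch of one TAR round: train on attorney-coded seed documents,
# score the full population, and pull the least-confident documents
# back for attorney coding in the next round.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def tar_round(coded_texts, coded_labels, population_texts):
    """Train on the coded set; return relevance probabilities for the population."""
    vec = TfidfVectorizer(max_features=50_000, stop_words="english")
    model = LogisticRegression(max_iter=1000)
    model.fit(vec.fit_transform(coded_texts), coded_labels)
    return model.predict_proba(vec.transform(population_texts))[:, 1]

def uncertain_batch(probs, batch_size=200):
    """Indices of the least-confident predictions, routed to attorneys next."""
    return np.argsort(np.abs(probs - 0.5))[:batch_size]
```

Each round, attorneys code the uncertain batch, the coded set grows, and the model is retrained; the loop stops when predictions stabilize between rounds and validation sampling looks acceptable.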
Where AI compresses time most
For a typical 500,000-document matter:
Without AI:
- Linear attorney review at typical pace (50-100 docs/hour) = 5,000-10,000 attorney hours
- Cost at $300-500/hour blended associate rate = $1.5M-5M
With AI:
- AI categorizes full population
- Attorneys review and verify ~30-40% of documents (relevance, privilege)
- Total attorney review: 1,500-3,000 hours
- Cost: $450k-1.5M
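The arithmetic is easy to check. A tiny sketch using the article's own figures (50-100 docs/hour, $300-500/hour blended rate, 1,500-3,000 verification hours with AI):

```python
def review_cost(hours_low, hours_high, rate_low=300, rate_high=500):
    """Low/high cost band for an attorney review effort at blended $/hour rates."""
    return hours_low * rate_low, hours_high * rate_high

DOCS = 500_000

# Without AI: linear review at 50-100 docs/hour = 5,000-10,000 hours.
manual = review_cost(DOCS / 100, DOCS / 50)
# With AI: attorneys verify ~30-40% of the population, roughly 1,500-3,000 hours.
assisted = review_cost(1_500, 3_000)

print(manual)    # (1500000.0, 5000000.0)
print(assisted)  # (450000, 1500000)
```

The bands match the figures above: $1.5M-5M without AI versus $450k-1.5M with it.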
Defensibility of AI-assisted review
Courts have generally accepted predictive coding / TAR as defensible since Da Silva Moore v. Publicis Groupe (2012). Current state:
- AI-assisted review is defensible when properly executed
- Documentation of the process is critical
- Attorney supervision is required throughout
- Both parties may use AI; transparency about AI use is increasingly expected
Defensibility requires:
- Clear protocol established at outset
- Sound seed set training
- Iterative refinement documented
- Reasonable validation testing
- Attorney sign-off on final categorization
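"Reasonable validation testing" usually means sampling: attorneys blind-code a random control set, and the model's recall is measured against their coding. A minimal recall calculation (the example numbers are illustrative; acceptable thresholds are case- and negotiation-specific, though TAR protocols commonly target recall in the 70-80%+ range):

```python
def estimate_recall(control_labels, control_preds):
    """Of the documents attorneys coded relevant in the control set,
    what fraction did the model also predict relevant?"""
    hits = [p for label, p in zip(control_labels, control_preds) if label == 1]
    return sum(hits) / len(hits) if hits else None

# Attorneys found 3 relevant docs in the sample; the model caught 2 of them:
print(estimate_recall([1, 0, 1, 0, 1], [1, 0, 0, 0, 1]))  # 0.666...
```

Low recall on the control set means the model is missing relevant material and needs another training iteration before categorization is finalized.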
Privilege protection
AI assists but doesn't replace privilege review. Standard practice:
- AI flags potentially privileged documents (typically based on attorney-client communications, work product indicators)
- Attorneys verify each flagged document
- Privilege log generated for documents withheld or redacted on privilege grounds
- Quality control sampling of "non-privileged" categorization
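The QC sampling step can be quantified. If a random sample of n documents from the "non-privileged" pile comes back clean, the "rule of three" gives an approximate 95% upper confidence bound of 3/n on the true miss rate. A small sketch (the sample size is illustrative, not a recommended protocol):

```python
def privilege_miss_bound(sample_size, misses_found):
    """Approximate 95% upper bound on the privilege-miss rate when a random
    sample of the 'non-privileged' population comes back clean (rule of three)."""
    if misses_found > 0:
        return None  # sample failed QC: re-review, retrain, resample
    return 3.0 / sample_size

# A clean sample of 300 documents bounds the miss rate at about 1%:
print(privilege_miss_bound(300, 0))  # 0.01
```

Larger samples buy tighter bounds; any privileged document found in the sample means the categorization fails QC and goes back for rework, not a bound calculation.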
Compliance and ethics
eDiscovery AI touches:
- Rule 26 cooperation obligations
- Rule 34 production scope
- Privilege and work product protection
- Sanctions exposure (Rule 37)
- State bar AI guidance
The attorney remains accountable. AI accelerates the work; the attorney signs the production.
What can go wrong
Pattern 1: Inadequate seed set. Poor training leads to poor AI prediction. Train carefully on representative documents.
Pattern 2: Insufficient iteration. First-pass AI accuracy is rarely good enough. Refine through multiple rounds.
Pattern 3: Privilege miss. AI categorizes a privileged document as non-privileged, gets produced. Build redundant privilege review.
Pattern 4: Production scope errors. AI categorizes too broadly or narrowly. Validate against Rule 26 scope.
Pattern 5: Lack of documentation. Process isn't documented; defensibility is weakened. Document everything.
Economics by matter size
Small matter (under 10,000 docs): AI may not justify setup. Manual review competitive.
Mid-size matter (10,000-100,000 docs): AI compresses 40-60% of review time. Clear ROI.
Large matter (100,000-1M docs): AI essential. Without it, matter becomes economically unviable.
Mega matter (1M+ docs): AI mandatory. The only way to manage at scale.
What we deploy
For firms working with us on eDiscovery AI:
- Standard platform selection (Relativity, DISCO, or Reveal based on firm needs)
- Workflow design for AI-assisted review
- Attorney training on TAR protocols
- Compliance documentation framework
- Quality control sampling process
- Defensibility documentation
Bottom line
AI-assisted eDiscovery review in 2026 is essential at any meaningful scale. The economics don't work without it for matters above 100,000 documents. The technology is mature, the defensibility framework is established, and the workflow is well-understood.
Done right, AI eDiscovery delivers 60-70% review hour reduction with maintained or improved accuracy. Done wrong — without proper training, without supervision, without verification — it creates production errors, privilege breaches, and sanctions exposure.
The discipline is what makes it work. Train the AI carefully. Iterate the model. Supervise the output. Verify privilege. Document the process. The defensibility comes from the discipline.
Frequently asked questions
Is AI eDiscovery review defensible in court?
Yes — courts have generally accepted predictive coding and TAR since Da Silva Moore v. Publicis Groupe (2012). Defensibility requires clear protocol, sound seed set training, iterative refinement, validation testing, and attorney sign-off. Document the process.
How much time does AI save on eDiscovery review?
Typical savings: 60-70% of attorney review hours and cost on large matters. For a 500,000-document matter, that's the difference between $1.5M-5M and $450k-1.5M in review cost. Savings scale with document volume.
What eDiscovery platforms have the best AI?
Relativity with aiR is the industry standard. DISCO is the modern challenger with strong AI. Reveal, Everlaw, and Brainspace are also strong. Choice depends on firm size, matter type, and existing infrastructure.
Can AI miss privileged documents?
Yes — AI can categorize a privileged document as non-privileged if the privilege indicators are subtle. Build redundant privilege review with attorney verification of every flagged document plus quality control sampling of 'non-privileged' categorization.
When is manual review still appropriate?
Small matters under 10,000 documents where AI setup may not justify the cost. Highly sensitive matters where every document must be attorney-reviewed regardless of AI categorization. Manual still works at small scale; AI is essential at large scale.
Need help implementing this?
//prometheus does onsite AI consulting and implementation in Milwaukee. We set it up, train your team, and make sure it works.
Let's talk.