SOC 2 + AI Vendors: What Audit Actually Wants
If your customers are enterprise, your SOC 2 auditor will ask about your AI vendors. The questions are predictable, and the answers should be ready before the audit, not improvised during it. Here's what auditors want to see and how to prepare without panicking.
This is not legal advice. Consult your SOC 2 auditor.
What auditors are asking
For each AI vendor and each AI integration in your product, expect:
- What data is sent to the vendor
- Whether that data includes customer data, PII, or sensitive data
- The vendor's data retention and training policies
- Your contract terms with the vendor (DPA, no-training, deletion rights)
- How you monitor vendor compliance
- Your incident response plan if the vendor has a breach
- Whether you have a vendor risk assessment on file
The five most-asked vendors
OpenAI. ChatGPT API, GPT-4, Whisper, embedding models. Auditors want: enterprise tier confirmation, written proof that the no-training opt-out is in place, data residency aligned with customer commitments.
Anthropic. Claude API. Auditors want: enterprise terms, no-training confirmation, data handling documentation.
Microsoft Azure OpenAI. Auditors want: tenant configuration, data residency, BAA if HIPAA applies, no-training confirmation.
Google Vertex AI. Auditors want: project-level confidentiality controls, data residency, BAA if HIPAA applies.
AWS Bedrock. Auditors want: model selection, data residency, no-training confirmation, IAM-level access controls.
Have a one-page summary on file for each vendor you use.
What to have on file
1. Vendor inventory. A list of every AI vendor and integration. Updated quarterly.
2. Data flow diagrams. For each AI integration, a diagram showing what data flows where and what's sent to the vendor.
3. Vendor DPAs. Signed data processing agreements with every vendor that handles customer data.
4. No-training confirmations. Written confirmation that your vendor does not train on your data (or that you've opted out where opt-out is available).
5. Incident response runbook. What you do if a vendor reports a breach affecting your customer data.
6. Vendor monitoring records. Evidence you're checking vendor compliance over time (SOC 2 reports, attestations, breach notifications).
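The vendor inventory above is easiest to keep current when it's machine-readable rather than a slide. A minimal sketch of one inventory entry, with an audit-readiness check; the field names and the 92-day review window are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIVendorRecord:
    """One entry in the AI vendor inventory (illustrative fields only)."""
    vendor: str                # e.g. "OpenAI"
    integration: str           # where it's used in the product
    data_sent: list[str]       # data classifications sent to the vendor
    dpa_signed: bool           # signed DPA on file
    no_training_confirmed: bool  # written no-training confirmation
    last_reviewed: date        # last quarterly review date

    def is_audit_ready(self) -> bool:
        # Audit-ready means: DPA signed, no-training confirmed in
        # writing, and reviewed within roughly the last quarter.
        reviewed_recently = (date.today() - self.last_reviewed).days <= 92
        return self.dpa_signed and self.no_training_confirmed and reviewed_recently


inventory = [
    AIVendorRecord(
        vendor="OpenAI",
        integration="support-ticket summarization",
        data_sent=["customer text (PII possible)"],
        dpa_signed=True,
        no_training_confirmed=True,
        last_reviewed=date.today(),
    ),
]

# Records that would surface as gaps during audit prep.
gaps = [r for r in inventory if not r.is_audit_ready()]
```

A structure like this makes the quarterly update a code review instead of a document hunt, and the `gaps` list doubles as evidence of vendor monitoring.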
The new question: AI-specific risk assessment
SOC 2 auditors are increasingly asking for an AI-specific risk assessment. Not just vendor risk. The risks of AI as a category.
Topics covered:
- Hallucination risk in AI outputs (and your mitigations)
- Prompt injection risk (and your mitigations)
- Data leakage via model outputs (and your mitigations)
- Bias risk in AI-driven decisions (and your mitigations)
- Customer data used to fine-tune (if any) and how that's communicated
For most companies, a 4-6 page document covering these topics is sufficient.
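When the risk assessment names a mitigation, auditors like seeing it exists in code. A toy sketch of one data-leakage mitigation: redacting obvious PII patterns before a prompt leaves your infrastructure. The patterns here are deliberately simplistic assumptions; production systems should use a dedicated PII-detection service rather than hand-rolled regexes:

```python
import re

# Illustrative patterns only -- real PII detection needs a dedicated
# service, not two regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact_pii(prompt: str) -> str:
    """Replace obvious PII with placeholders before sending to an AI vendor."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt


safe = redact_pii("Contact jane@example.com, SSN 123-45-6789.")
```

Even a thin layer like this gives the risk assessment something concrete to point to: the mitigation is enforced in the request path, not just described in a document.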
What not to do
Don't claim AI vendors are "secure" without evidence. Auditors want documentation. A vendor's marketing copy is not evidence.
Don't skip the vendor DPA. Even if your vendor says "we don't train," get it in writing in the DPA.
Don't maintain an AI policy that differs from what's actually happening. If your AI policy says "we use only Microsoft Copilot" and your engineers are also using consumer ChatGPT, auditors will find it.
Don't ignore the data residency question. If your customer commitments include "data stays in the EU" and your AI vendor processes in the US, that's a finding.
Patterns that pass cleanly
AI vendors with attestations. Most enterprise AI vendors now provide SOC 2 reports, ISO 27001, or similar attestations. Collect them. Reference them in your audit prep.
Explicit data classifications for AI input. Your engineers know what data can/cannot go to which vendor. Documented. Trained.
AI usage logs. You can show auditors: in the last 90 days, X requests to vendor Y, here's the data classification of those requests.
Regular vendor reviews. Quarterly or annually, you re-confirm vendor terms haven't changed. Documented.
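The usage-log pattern above can be sketched as a simple aggregation over request records. The log schema and example records here are illustrative assumptions; in practice the records come from your API gateway or request middleware:

```python
from collections import Counter
from datetime import datetime, timedelta

# Illustrative records -- real logs come from your request middleware
# or API gateway, tagged with a data classification at send time.
usage_log = [
    {"ts": datetime.now() - timedelta(days=5),
     "vendor": "OpenAI", "classification": "internal"},
    {"ts": datetime.now() - timedelta(days=40),
     "vendor": "OpenAI", "classification": "customer-pii"},
    {"ts": datetime.now() - timedelta(days=120),
     "vendor": "Anthropic", "classification": "internal"},
]


def audit_summary(log, window_days=90):
    """Count requests per (vendor, classification) within the audit window."""
    cutoff = datetime.now() - timedelta(days=window_days)
    return Counter(
        (r["vendor"], r["classification"]) for r in log if r["ts"] >= cutoff
    )


summary = audit_summary(usage_log)
# summary now answers the auditor's question directly: how many
# requests went to which vendor, at which classification, in 90 days.
```

The 120-day-old record falls outside the window, which is exactly the shape of answer an auditor is looking for: scoped to the period, broken down by vendor and classification.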
The customer commitment question
If you've told customers "we don't use their data to train AI" or "AI outputs are not retained," your auditor will verify. They'll look for: contractual commitments from your vendor matching what you told customers, monitoring evidence, training documentation.
Getting this wrong costs you twice: an auditor finding plus a customer notification obligation.
The bottom line
SOC 2 + AI is workable. The discipline is real. Vendor management has expanded to include AI specifically.
The firms that pass cleanly are the ones who treated AI vendor management as a first-class concern from day one. The firms that scramble during audit are the ones who treated AI as "not yet a real vendor category."
It's a real vendor category. Manage it accordingly.
Not legal advice. Work with your auditor and counsel.
Want the full guide? Check out our deep-dive page for more context, FAQs, and resources.