EU AI Act: What Changes for US Firms With EU Clients
The EU AI Act applies even to non-EU firms whose AI systems are used in the EU. Here's what US firms need to know and the categories that matter most.
The EU AI Act is the most significant AI-specific regulation in force. It applies to providers and deployers of AI systems used in the EU, regardless of where they're headquartered.
If you're a US firm with EU customers, this applies to you. Here's the practical map.
This is not legal advice. Talk to your EU counsel.
The risk-based framework
The Act categorizes AI systems into risk tiers:
Prohibited. Certain practices are banned outright: social scoring (by public or private actors), real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), emotion recognition in workplaces and schools, and AI that exploits vulnerabilities such as age or disability.
High-risk. AI used in: critical infrastructure, education and vocational training, employment, access to essential services (credit scoring, public benefits), law enforcement, migration and border control, administration of justice. High-risk systems have substantive requirements: risk management, data governance, transparency, human oversight, accuracy.
Limited risk. Systems that interact with humans (chatbots) or generate or manipulate content (deepfakes). Transparency obligations apply.
Minimal or no risk. Most AI applications. No specific obligations.
How your product gets categorized
For most B2B SaaS products, these are the questions to ask:

1. Does our AI make decisions about employment, credit, education, public benefits, or other "essential services"? If yes, likely high-risk.
2. Does our AI interact with EU individuals as a chatbot? Limited-risk transparency obligations.
3. Does our AI generate content that could be mistaken for human? Limited-risk disclosure.
4. Does our AI do anything on the prohibited list? If yes, you can't deploy it in the EU at all.
Most products fall into minimal-risk or limited-risk categories. The high-risk category is narrower than it sounds.
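If you track AI features in an internal inventory, these questions can be encoded as a first-pass triage helper. A minimal TypeScript sketch, not legal analysis: the tier logic is a simplification of the Act's annexes, and the names (`AiUseCase`, `classifyRisk`) are hypothetical.

```typescript
// Hypothetical triage helper: encodes the four questions above as a
// first-pass risk classification. A simplification, not legal analysis.
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface AiUseCase {
  name: string;
  decidesEssentialServices: boolean;  // employment, credit, education, benefits
  interactsAsChatbot: boolean;        // converses with EU individuals
  generatesHumanLikeContent: boolean; // output could be mistaken for human
  matchesProhibitedPractice: boolean; // e.g. social scoring, workplace emotion recognition
}

function classifyRisk(u: AiUseCase): RiskTier {
  if (u.matchesProhibitedPractice) return "prohibited"; // cannot deploy in the EU
  if (u.decidesEssentialServices) return "high";        // Annex III territory
  if (u.interactsAsChatbot || u.generatesHumanLikeContent) return "limited";
  return "minimal";
}

// Example: a resume-screening feature lands in the high-risk tier.
console.log(classifyRisk({
  name: "resume-screening",
  decidesEssentialServices: true,
  interactsAsChatbot: false,
  generatesHumanLikeContent: false,
  matchesProhibitedPractice: false,
})); // "high"
```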
What limited-risk transparency means
If you have an AI chatbot interacting with EU users:

- Disclose to users that they're interacting with AI (unless obvious from context)
- Disclose at the start of the interaction, not buried in terms
Many products already do something like this. Fewer do it explicitly enough.
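The disclosure itself is a product change more than a legal one. A minimal sketch of a chat session that leads with the disclosure rather than burying it; the message copy and the `openSession` shape are illustrative, not prescribed by the Act.

```typescript
// Illustrative: prepend an explicit AI disclosure to every new chat session,
// rather than burying it in the terms of service.
interface ChatMessage {
  role: "system-disclosure" | "assistant" | "user";
  text: string;
}

function openSession(): ChatMessage[] {
  return [
    {
      role: "system-disclosure",
      // Shown to the user at the start of the interaction.
      text: "You're chatting with an AI assistant. Ask for a human agent at any time.",
    },
  ];
}

const transcript = openSession();
console.log(transcript[0].text);
```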
For AI-generated content (text, image, audio, video) that could be mistaken for human:

- Disclose that it's AI-generated
- For deepfakes specifically, more stringent disclosure
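One way to make that disclosure durable is to attach it to the asset itself as machine-readable metadata. A hedged sketch: the field names here are assumptions, and the Act's technical marking standards are still settling.

```typescript
// Illustrative: tag generated assets with a machine-readable provenance label
// so downstream surfaces can render an "AI-generated" disclosure.
interface GeneratedAsset {
  id: string;
  kind: "text" | "image" | "audio" | "video";
  aiGenerated: true;       // machine-readable flag
  generator: string;       // model or pipeline that produced it
  disclosureLabel: string; // human-readable disclosure for display
  createdAt: string;       // ISO timestamp
}

function labelAsset(id: string, kind: GeneratedAsset["kind"], generator: string): GeneratedAsset {
  return {
    id,
    kind,
    aiGenerated: true,
    generator,
    disclosureLabel: "This content was generated by AI.",
    createdAt: new Date().toISOString(),
  };
}

console.log(labelAsset("img-001", "image", "internal-diffusion-v2"));
```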
What high-risk compliance involves
If you're providing or deploying high-risk AI, you need:

- A risk management system
- Data quality and governance
- Technical documentation
- Record-keeping (automatic logs)
- Transparency to deployers/users
- Human oversight design
- Accuracy, robustness, cybersecurity
- Conformity assessment
- CE marking
- Registration in the EU database
This is a substantive compliance program. Months of work for most firms. Not something you bolt on in a sprint.
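Of these, record-keeping is the item most engineering teams can start sketching now. An illustrative append-only decision log; the schema is an assumption about what an audit will ask for (model version, input reference, human sign-off), not a format the Act prescribes.

```typescript
// Illustrative append-only log entry for a high-risk AI decision.
// The fields are assumptions about what a conformity audit will ask for.
interface DecisionLogEntry {
  timestamp: string;      // when the decision was made
  systemId: string;       // which AI system produced it
  modelVersion: string;   // exact version, for reproducibility
  inputHash: string;      // reference to the input, not the raw PII
  output: string;         // the system's decision or score
  humanReviewer?: string; // who exercised oversight, if anyone
}

const auditLog: DecisionLogEntry[] = [];

function logDecision(entry: DecisionLogEntry): void {
  auditLog.push(entry); // in production: durable, tamper-evident storage
}

logDecision({
  timestamp: new Date().toISOString(),
  systemId: "credit-scoring-v3",
  modelVersion: "2025.06.1",
  inputHash: "sha256:9f2c0e11",
  output: "declined",
  humanReviewer: "analyst-42",
});
```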
The general-purpose AI provider question
If you're providing general-purpose AI models (foundation models), additional rules apply. Most readers of this post are using such models, not providing them. The providers (OpenAI, Anthropic, Google, etc.) carry these obligations.
But if you fine-tune a foundation model and offer it to others, you may be a provider in your own right. Get advice.
The deployer (operator of AI) question
If you USE high-risk AI systems in your operations affecting EU individuals, you're a deployer. Deployer obligations include:

- Use the system per the provider's instructions
- Assign human oversight
- Monitor and log
- Inform affected individuals (in some cases)
This applies even if you didn't build the AI. Using a third-party AI for employment decisions (resume screening, for example) makes you a deployer.
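One concrete way to assign human oversight as a deployer is to treat the AI's output as a recommendation that a named human must confirm before it takes effect. A sketch under that assumption; the review shape is hypothetical.

```typescript
// Illustrative human-oversight gate: the AI recommends, a human decides.
interface AiRecommendation {
  candidateId: string;
  recommendation: "advance" | "reject";
  score: number;
}

interface ReviewedDecision extends AiRecommendation {
  reviewer: string;
  finalDecision: "advance" | "reject";
  overrode: boolean;
}

function review(
  rec: AiRecommendation,
  reviewer: string,
  finalDecision: "advance" | "reject",
): ReviewedDecision {
  return {
    ...rec,
    reviewer,
    finalDecision,
    overrode: finalDecision !== rec.recommendation, // track overrides for monitoring
  };
}

const decision = review(
  { candidateId: "c-118", recommendation: "reject", score: 0.31 },
  "recruiter-7",
  "advance", // human disagrees with the model
);
console.log(decision.overrode); // true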
A specific scenario: AI in your hiring tool
If your firm uses AI to screen resumes from EU applicants, you're deploying AI for "employment," a high-risk category.
You need:

- Confirmation the AI provider is compliant (CE marked, etc.)
- Human oversight in your screening process
- Transparency to applicants about AI use
- Bias monitoring and documentation
- Records of decisions
This is a real compliance program. Most firms using off-the-shelf hiring AI haven't built it yet.
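Bias monitoring can start simply: compare selection rates across applicant groups and flag large gaps. The sketch below uses the four-fifths heuristic from US employment practice as its threshold; that convention is an assumption here, not an EU AI Act requirement.

```typescript
// Illustrative selection-rate check across groups (four-fifths heuristic).
// The 0.8 threshold is a US employment-practice convention, not from the Act.
interface GroupOutcomes {
  group: string;
  advanced: number; // applicants the AI advanced
  total: number;    // applicants screened
}

function flagDisparities(groups: GroupOutcomes[], threshold = 0.8): string[] {
  const rates = groups.map((g) => ({ group: g.group, rate: g.advanced / g.total }));
  const best = Math.max(...rates.map((r) => r.rate));
  return rates
    .filter((r) => r.rate / best < threshold)
    .map((r) => `${r.group}: ratio ${(r.rate / best).toFixed(2)} vs best group`);
}

console.log(flagDisparities([
  { group: "A", advanced: 40, total: 100 }, // rate 0.40
  { group: "B", advanced: 25, total: 100 }, // rate 0.25 -> ratio 0.63, flagged
]));
```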
A specific scenario: AI customer support chatbot
For an EU-facing customer support chatbot:

- Disclose at start: "I'm an AI assistant"
- Provide a path to human support (some products require this)
- Document the AI's design and intended use
- Monitor for accuracy
Lighter requirements than the hiring example. Still real.
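The path to human support can be as simple as an always-available escalation check. A deliberately crude sketch: real intent detection would be more robust, and the function names are made up.

```typescript
// Illustrative escalation check: route to a human whenever the user asks,
// instead of trapping them in the bot.
function wantsHuman(userMessage: string): boolean {
  return /\b(human|agent|person|representative)\b/i.test(userMessage);
}

function route(userMessage: string): "bot" | "human-queue" {
  return wantsHuman(userMessage) ? "human-queue" : "bot";
}

console.log(route("Can I talk to a human, please?")); // "human-queue"
console.log(route("Where is my invoice?"));           // "bot"
```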
What changes in timeline
The Act has phased implementation:

- Prohibited practices: already in effect (since February 2025)
- General-purpose AI obligations: August 2025
- High-risk AI obligations: August 2026 for most systems
- High-risk AI embedded in regulated products: August 2027
Use the runway. Compliance programs take time.
What US firms should do
- Map your AI uses and categorize each
- Identify any high-risk uses
- Identify limited-risk transparency gaps (chatbots, generated content)
- Build a compliance program for high-risk uses, if any
- Update product UX to disclose AI use where needed
- Confirm your AI vendors are aligned with their obligations
The bottom line
The EU AI Act is real, comprehensive, and extraterritorial. US firms with EU customers don't have an opt-out.
The good news: most firms have minimal-risk or limited-risk products that need simple transparency updates. The minority with high-risk products have real work ahead.
Don't ignore it. Don't panic. Map your exposure and address each category.
Not legal advice. Talk to your EU counsel.
Want the full guide? Check out our deep-dive page for more context, FAQs, and resources.