The AI Assistant That Got the Founder Sued (Almost)
A coaching practice's AI scheduling assistant promised a refund it shouldn't have. The "almost" is because the founder caught it before it became a lawsuit. Here's what went wrong and the guardrails we should have had.
A coaching practice's AI scheduling assistant handled inquiries while the founder was traveling. The bot could book calls, answer FAQs, and route certain questions to humans.
A prospect asked about the refund policy. The bot said "yes, we offer full refunds within 30 days." The actual refund policy was "refunds at our discretion, prorated based on usage."
The prospect signed up. Used the program for 28 days. Asked for a full refund citing the bot's promise. The founder said no. The prospect threatened to sue and got close enough to filing that legal got involved.
The founder pulled the chat log just in time. The prospect had documentation of the bot's incorrect promise, but the founder also had documentation that the bot was presented as an assistant, not an authorized representative. The prospect dropped the claim, and the near-miss was settled informally.
What happened
The bot's system prompt described the company. It did not include the refund policy. When asked about refunds, the model produced a plausible-sounding answer based on common practice in the coaching industry: "yes, 30-day full refund." Thirty-day refunds are a common pattern, so the model filled the gap with one.
The bot wasn't lying. It was confabulating from priors.
Root cause
Three failures stacked.
One, the system prompt didn't tell the bot what NOT to do. It described the company, the tone, the available actions. It did not say "do not answer questions about refund policy, pricing terms, or anything contractual."
Two, the bot didn't have a refusal pattern. When asked something outside its scope, it defaulted to "be helpful and answer" instead of "I'll route this to a human."
Three, there was no scope-violation logging. The bot answered the refund question without flagging that it had answered something it shouldn't.
What we did instead
We rebuilt with three layers of protection.
Layer one: an explicit out-of-scope list in the system prompt. The list covers:
- Refund and cancellation terms
- Pricing changes or discounts
- Guarantees or outcome promises
- Legal or contractual statements
- Compensation or comp time
- Anything specific to a client's account beyond scheduling
If the user asks about any of these, the bot says: "Good question. Let me get the founder to follow up on that personally. What's the best way to reach you?"
Layer two: confidence-based escalation. If the bot's confidence on any answer is below 0.8, route to human before answering.
Layer three: scope-violation logging. Every conversation gets scanned (separately, after the fact) for whether the bot answered anything in the out-of-scope list. If yes, alert the founder.
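Layer three can be a batch job over stored transcripts. A sketch, assuming transcripts are lists of role/text turns and reusing an illustrative keyword list (the production scan could just as well use a classifier):

```python
# After-the-fact scope scan: re-check each bot turn in a transcript
# against out-of-scope keywords; a non-empty result triggers the alert.
OUT_OF_SCOPE_KEYWORDS = [
    "refund", "cancellation", "discount", "guarantee",
    "contract", "terms", "compensation",
]

def scan_transcript(transcript: list[dict]) -> list[dict]:
    """Return bot turns that mention an out-of-scope topic.

    transcript: [{"role": "bot" | "user", "text": "..."}, ...]
    """
    violations = []
    for turn in transcript:
        if turn["role"] != "bot":
            continue
        text = turn["text"].lower()
        hits = [kw for kw in OUT_OF_SCOPE_KEYWORDS if kw in text]
        if hits:
            violations.append({"text": turn["text"], "topics": hits})
    return violations  # non-empty -> alert the founder
```

Scanning only bot turns matters: a user asking about refunds is fine; the bot answering about them is the violation.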
What I tell prospects now
If your AI agent is going to interact with prospects or clients, the system prompt needs more "don'ts" than "dos." The "dos" are what you want the agent to do. The "don'ts" are what gets you sued.
Specifically: any topic that could create a binding obligation, modify terms, or constitute legal/financial/medical advice — explicit refusal, route to human.
For coaching specifically: refunds, guarantees, results promises, specific personal advice. For wealth: trade recommendations, performance promises, specific tax advice. For law: any legal opinion. For healthcare: any diagnostic or treatment suggestion.
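One way to keep those vertical-specific "don'ts" maintainable is to hold them as data and render them into the system prompt. A sketch; the topic strings paraphrase the article, and the rendering format is an assumption:

```python
# Per-vertical negative lists as data, so one guard serves many
# practices. Topic names paraphrase the article's examples.
VERTICAL_BLOCKLISTS = {
    "coaching": ["refunds", "guarantees", "results promises",
                 "specific personal advice"],
    "wealth": ["trade recommendations", "performance promises",
               "specific tax advice"],
    "law": ["legal opinions"],
    "healthcare": ["diagnostic or treatment suggestions"],
}

def build_dont_section(vertical: str) -> str:
    """Render the negative list for a vertical's system prompt."""
    topics = VERTICAL_BLOCKLISTS[vertical]
    lines = [f"- Do not answer questions about {t}; route to a human."
             for t in topics]
    return "Never do the following:\n" + "\n".join(lines)
```

Updating the blocklist then becomes a config change, not a prompt rewrite.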
The negative list is the safety net. Without it, the model defaults to "be helpful and confident" which is exactly the wrong default for risk surfaces.
The lesson
A helpful AI is a risky AI when it's helpful about the wrong things. Constrain aggressively. Let the human handle anything that creates obligation.
The agent's value is in handling the 80% that's safe and routing the 20% that's risky. Most agent failures I see are agents over-helping into the 20%.
The thing nobody mentions
After this incident the founder added a one-line disclosure to the bot's first message: "I'm an AI assistant for [Founder Name]. For questions about refunds, terms, or anything we'd need to make a commitment on, I'll connect you to the human."
The disclosure is honest. Prospects don't mind. The disclosure also creates legal cover. Win on both sides.
If your agent doesn't say it's an agent up front, you're carrying liability that's easy to drop.
Want the full guide? Check out our deep-dive page for more context, FAQs, and resources.