// ai postmortems · by Josh · April 26, 2026 · 5 min read

Why the Chatbot Rollout Killed CSAT (And What We Did Instead)

A SaaS company rolled out an AI chat agent. CSAT dropped 0.6 points in 30 days. The rollback exposed the actual problem — and the fix wasn't a better bot.


A SaaS client deployed an AI chatbot as their first line of customer support. The rollout was clean. The bot was well-trained on the documentation. It answered questions correctly most of the time.

CSAT dropped from 4.4 to 3.8 over the first month. Renewals started getting harder. Ticket volume to humans went down (as designed), but the tickets that did escalate took longer to resolve and arrived angrier.

We rolled it back. The actual problem revealed itself.

What the bot did well

The bot answered fast. Average response time went from 8 minutes to 30 seconds. Accuracy on FAQ-style questions was 92%. Cost per resolved ticket dropped 60%.

By every internal metric, the bot was a success.

What the bot did badly

Customers hated it.

The qualitative feedback in CSAT comments was specific. "I just wanted to talk to a human." "Felt like I was being processed." "The answers were correct but I felt unheard."

The bot was correct AND unsatisfying. The two are not opposites.

Root cause

The customers weren't reaching out for answers. They were reaching out for relationship.

This was a high-touch product. Customers paid $400-1200/mo. They expected to be known. When they hit support, they were often frustrated or confused — the support contact was as much an emotional touchpoint as an informational one.

The bot gave them information faster. It did not give them connection. Customers felt like the company had downgraded them to a tier where they got the bot instead of the human.

The bot wasn't doing a worse job than the human at answering. It was doing a worse job at the underlying purpose of support, which wasn't actually answering.

What we did instead

We didn't kill the bot. We changed when it ran.

Now the bot runs as a "pre-flight" before connecting to a human. When a customer opens chat, the bot says: "I'm Sarah's AI assistant. I can answer most basic questions immediately, OR I can connect you to Sarah directly. Which would you prefer?"

For customers who want speed, the bot handles it. For customers who want connection, the bot routes them.
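As a minimal sketch of this opt-in flow (the handler names, `ChatSession` structure, and queue labels are illustrative assumptions, not the client's actual implementation), the key design choice is that anything other than an explicit bot opt-in defaults to the human path:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical routing sketch -- names are illustrative, not the real system.

GREETING = (
    "I'm Sarah's AI assistant. I can answer most basic questions "
    "immediately, OR I can connect you to Sarah directly. "
    "Which would you prefer?"
)

@dataclass
class ChatSession:
    customer_id: str
    choice: Optional[str] = None  # "bot" or "human"

def route(session: ChatSession, choice: str) -> str:
    """Route based on the customer's explicit choice.

    Only an explicit "bot" opt-in goes to the bot; everything else,
    including ambiguous input, goes to a person. The bot is available,
    never the gate.
    """
    session.choice = choice
    if choice == "bot":
        return "bot_queue"
    return "human_queue"
```

The default-to-human branch is the whole point: a customer should never end up with the bot unless they asked for it.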

About 35% of customers choose the bot. About 65% choose the human. The 35% are happy with the bot (because they chose it). The 65% never had a downgrade experience.

CSAT recovered to 4.5 within 6 weeks.

What I tell prospects now

Before building any customer-facing AI agent, ask:

What is this contact for? Information? Connection? Resolution? Validation?

If it's primarily information, an agent might be net positive.

If it's primarily connection, an agent is dangerous regardless of how good it is.

If it's mixed (which most are), the customer must have the choice. Force-routing to AI is the trap.

For high-touch products specifically, the agent should never be the default. Always opt-in. Always with a visible human path.

For low-touch products (mass-market consumer SaaS), AI-first is often fine because customers don't expect connection at price points under $30/mo.

The lesson

The internal metrics (response time, cost, accuracy) measured the wrong things. They measured agent efficiency. They didn't measure customer experience.

The customers' metric was "did I feel heard." That's not on any operational dashboard.

We added one survey question: "Did this conversation feel like the company knows you?" That single question correlates with renewals better than any other support metric we track. It's now the leading indicator we watch.
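As a rough sketch of how that correlation can be checked (illustrative fabricated data, not the client's numbers), both the survey answer and the renewal outcome can be coded as binary and compared with a Pearson coefficient:

```python
import numpy as np

# Illustrative data only: 1 = answered "yes, felt known" / renewed, 0 = otherwise.
felt_known = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
renewed    = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])

# Pearson correlation between the two binary outcomes
# (equivalent to the point-biserial correlation here).
r = np.corrcoef(felt_known, renewed)[0, 1]
```

With real data you would want far more than ten rows, but the mechanics are the same: one column per support metric, correlated against renewal, and compare.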

The thing nobody mentions

The team that built the bot didn't want to roll it back. They saw the internal metrics. They thought customers were being dramatic.

The team was wrong. Internal metrics that don't capture customer feeling are missing half the data.

If you're building AI in customer-facing roles, weight customer qualitative data heavily. Numbers without "did I feel respected" miss the entire point.

What this isn't

This isn't an "AI in support is bad" story. AI in support is great when used for the right contacts. The mistake was using it for ALL contacts.

The fix wasn't a better bot. The fix was giving customers the choice.

If you take one thing from this post: never force-route customers to an agent on a high-touch product. Always opt-in. The bot's role is to be available, not to be the gate.

postmortem · chatbot · csat · customer support · ai failure