// field notes · by Josh · April 30, 2026 · 6 min read

The 90-Day Window Where a Client Almost Killed the Project

Weeks 7 through 16 are where every AI implementation either dies or commits. I've watched 12 projects hit this wall. Here's what's actually happening, how to spot it early, and the move that gets you through it.

Every AI implementation I've run has a 90-day window where the client wants to quit. It happens between week 7 and week 16. The specific complaint varies. The shape of the complaint is identical every time.

The first time it happened I thought the project was dying. The second time I knew it was a pattern. By the fifth time I had a name for it: the dip.

What the dip looks like

Weeks 1-6 are honeymoon. Demos go well. Stakeholders are bought in. The first quick wins ship. Everyone is impressed by AI.

Then around week 7, three things happen at once.

The early adopters who loved it have already integrated it into their workflow. They've gone invisible: they've stopped giving you positive feedback because they no longer think about the system. They just use it.

The skeptical middle starts trying to use it for the first time. They run into edge cases. The edge cases generate frustrated emails. The frustrated emails reach the executive sponsor.

The executive sponsor has by now told their board / partners / spouse about this great AI thing. They are now exposed to the question "how's it going?" Their answer needs to be confident. The frustrated emails undermine that confidence.

By week 9 or 10 you get the meeting. The sponsor says something like "I'm getting feedback that this isn't working as well as we'd hoped."

What's actually happening

Three things, none of which are technical.

1. The early wins were structural improvements that don't generate ongoing positive feedback. The system is doing its job and being ignored, which is exactly what good infrastructure does, but it doesn't look like progress to a sponsor expecting continuous excitement.

2. The skeptical middle is generating complaints at a higher rate than the early adopters generated praise. Complaints are louder than satisfaction. Sponsors hear the noise floor.

3. The sponsor is comparing month 3 to month 1 in feeling, not in metrics. Month 1 had the thrill of newness. Month 3 is operational. Operational always feels less impressive than launch.

If you do not surface this to the sponsor explicitly, the project dies in this window.

The move that gets you through

Week 7, you go to the sponsor with a metrics review. Not optional. Schedule it before you start the engagement.

The metrics review covers three things:

1. Quantitative outcomes. Time saved, dollars recovered, jobs handled, calls answered. Specific numbers compared to baseline.

2. Adoption curve. Who's using it, who's not, what's blocking the non-adopters.

3. The complaint queue, with each complaint categorized as "real issue / fixable," "real issue / requires roadmap," "user training gap," or "AI doing exactly what it should, user's expectation needs adjustment."

The last category is the most important. Most week-9 complaints land in it. The system is working correctly; the user is asking it to do something it shouldn't do, or expected the wrong outcome.
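To make the categorization concrete, here's a minimal sketch in Python of how a complaint queue might be tallied for the review. The category labels mirror the list above; the complaint entries and field names are hypothetical, not a required format.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    FIXABLE = "real issue / fixable"
    ROADMAP = "real issue / requires roadmap"
    TRAINING = "user training gap"
    EXPECTATION = "working as designed / expectation needs adjustment"

@dataclass
class Complaint:
    user: str
    summary: str
    category: Category

def complaint_breakdown(complaints: list[Complaint]) -> Counter:
    """Tally complaints per category for the sponsor review."""
    return Counter(c.category.value for c in complaints)

# Hypothetical week-9 queue: most entries land in the expectation bucket.
queue = [
    Complaint("ops lead", "asked it to summarize a blurry scanned fax", Category.EXPECTATION),
    Complaint("analyst", "export missing on mobile", Category.FIXABLE),
    Complaint("partner", "wants multi-entity rollups", Category.ROADMAP),
]
print(complaint_breakdown(queue))
```

The exact tooling doesn't matter; a spreadsheet does the same job. What matters is that every complaint arrives at the sponsor meeting already sorted into a bucket.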

If you walk into the week-9 sponsor meeting with the metrics review in hand, you have changed the conversation. The sponsor was bringing complaints. You are bringing context. The meeting becomes "here is what's working, here is what needs adjustment, here is what's user expectation."

If you walk in without the review, you are reacting to complaints with explanations. You will lose every time.

What I learned the hard way

On the third engagement I ran, I skipped the week-7 metrics meeting because the project was going well. By week 11 I was in an emergency meeting with the CFO, who wanted to cancel the contract.

The system was performing. The user adoption was good. The CFO had heard a series of edge-case complaints and assumed the project was failing. He had no metrics to look at because I hadn't given him any.

We saved the engagement, but I had to assemble the metrics review on the fly, in front of him, in real time. It took three hours of clean-up to recover credibility. The metrics, once assembled, showed we were ahead of every milestone. The optics had been catastrophic.

I have not skipped a week-7 review since.

How to set this up at engagement start

Week 1 of the engagement, baseline everything you can measure. Time per task. Throughput. Customer-satisfaction proxies. Cost per output. Whatever your business metrics are, capture today's number.

Week 7, you measure them again. Most of them will have moved in the right direction. Some of them will be flat. A few will have gotten worse (usually because the system pulled volume from somewhere or shifted what counts).

You put the comparison in front of the sponsor. You walk through what's moved and why. You preempt the complaints with the data.
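For what the comparison looks like mechanically, here's a minimal sketch with hypothetical metric names and numbers. The only real requirement is that every week-7 figure sits next to its week-1 baseline so the delta is explicit.

```python
# Hypothetical snapshots: capture in week 1, re-measure in week 7.
baseline = {
    "minutes_per_intake": 42.0,
    "tickets_handled_per_day": 31.0,
    "cost_per_output_usd": 5.80,
}
week_7 = {
    "minutes_per_intake": 28.0,
    "tickets_handled_per_day": 36.0,
    "cost_per_output_usd": 6.10,  # moved the wrong way: volume shifted to harder cases
}

def delta_report(before: dict[str, float], after: dict[str, float]) -> None:
    """Print each metric's movement against its baseline for the sponsor review."""
    for name, old in before.items():
        new = after[name]
        change = (new - old) / old * 100
        print(f"{name}: {old:g} -> {new:g} ({change:+.1f}%)")

delta_report(baseline, week_7)
```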

This is not optional. This is the single most important meeting in any AI engagement. Schedule it before you start coding.

The pattern across industries

This works the same way for CPA firms, RIAs, law firms, trades, coaches, agencies. I've now run this 12 times. The shape is identical every time.

Week 7. Metrics review. Sponsor calibration.

If you're inside an AI implementation right now and you're in week 4-5, set the meeting up now. Future you will thank present you.

If you're already past week 9 and the meeting hasn't happened and you're feeling the heat from skeptics, schedule it for next week. Build the deck on the way. Better late than dead.

change management · ai consulting · case study · field notes