The dominant narrative in AI is to move fast. Build, iterate, launch. And in many cases that's the right approach. But nobody talks about the projects where the right decision was not to start.
At Redstone Labs we’ve said “no” to AI projects more often than we’ve said “yes.” Not because we couldn’t build them. But because building them would have been a mistake.
Here are three real examples (details changed for confidentiality) of projects we evaluated and turned down, and why saying no was the right decision.
Project 1: The customer service chatbot
The ask: A financial services company wanted an AI chatbot to handle 80% of customer inquiries.
Why it sounded good: They had 15 people in the call center, inquiries were repetitive, and operating costs were high. The business case on paper was flawless.
Why we said no: When we analyzed the actual conversations, 80% of the “repetitive” inquiries had nuances that completely changed the answer. “When will my card arrive?” sounds simple, but the answer depended on whether it was a new card, fraud replacement, renewal, or product change. Each case had a different flow.
A chatbot that answered correctly 70% of the time and incorrectly the other 30% would create more problems than it solved. In financial services, an incorrect answer isn't an annoyance. It's a regulatory risk.
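To put numbers on that, here's the back-of-the-envelope math. The volumes below are illustrative assumptions, not the client's real figures (those are confidential):

```python
# Back-of-the-envelope: volume of wrong answers from a "mostly right" chatbot.
# All numbers below are illustrative assumptions, not the client's real figures.

daily_inquiries = 1000        # hypothetical call-center volume
bot_deflection = 0.80         # share of inquiries the bot was meant to handle
bot_accuracy = 0.70           # accuracy we estimated from the conversation analysis

handled_by_bot = daily_inquiries * bot_deflection
wrong_answers_per_day = handled_by_bot * (1 - bot_accuracy)

print(f"Handled by bot: {handled_by_bot:.0f}/day")
print(f"Wrong answers:  {wrong_answers_per_day:.0f}/day")
# -> 240 wrong answers a day. In financial services, each one is a
#    potential complaint or regulatory incident, not a minor annoyance.
```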
What we recommended: Before automating responses, structure the information. Build a well-organized internal knowledge base so human agents could find the right answer faster. Less exciting, more effective.
Project 2: Demand prediction with machine learning
The ask: A distribution company wanted to predict product demand to optimize inventory. Classic ML case.
Why it sounded good: They had 3 years of sales data, tight margins, and significant losses from overstock and stockouts.
Why we said no: The data existed, but it lived in 4 different systems that didn’t talk to each other. Product categories had changed twice in 3 years. There were entire months with no data due to ERP migration. And most importantly: purchasing decisions were made by a manager with 20 years of experience who adjusted everything by gut feel based on factors not in any system (weather, local holidays, direct competition).
Training a model on that data wouldn’t produce predictions. It would produce noise with nice formatting.
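The readiness check itself isn't sophisticated. Here's a minimal sketch of the kind of audit we mean, assuming the sales history has been exported to a pandas DataFrame; the file name and the `month` and `category` columns are hypothetical:

```python
import pandas as pd

# Minimal data-readiness audit before any demand-forecasting work.
# File name and column names (month, category) are illustrative assumptions.

sales = pd.read_csv("sales_history.csv", parse_dates=["month"])

# 1. Gaps: months with no data at all (e.g., the ERP migration window).
expected = pd.date_range(sales["month"].min(), sales["month"].max(), freq="MS")
missing_months = expected.difference(sales["month"].unique())
print(f"Months with no data: {len(missing_months)}")

# 2. Category drift: categories that appear or disappear between years,
#    a symptom of the taxonomy having been redefined mid-history.
by_year = sales.groupby(sales["month"].dt.year)["category"].agg(set)
for (y1, c1), (y2, c2) in zip(by_year.items(), list(by_year.items())[1:]):
    added, dropped = c2 - c1, c1 - c2
    if added or dropped:
        print(f"{y1} -> {y2}: +{len(added)} categories, -{len(dropped)}")

# If either check fails, a forecasting model trained on this data learns
# the artifacts of the migration, not the demand signal.
```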
What we recommended: First, unify the data into a single system with consistent categories. Second, document the experienced manager’s decision criteria (the stuff in his head that wasn’t in any system). In 6 months, with clean data and explicit criteria, a prediction model would make sense. Before that, it was burning money.
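"Document the decision criteria" sounds hand-wavy, so here's the kind of artifact we mean. Every factor, condition, and multiplier below is invented for illustration; the point is the structure, not the numbers:

```python
from dataclasses import dataclass

# The manager's "gut feel" adjustments, written down as explicit rules.
# Factors and multipliers are invented for illustration; the point is that
# each one becomes a reviewable record instead of tribal knowledge.

@dataclass
class Adjustment:
    factor: str        # what the manager reacts to
    condition: str     # when the adjustment applies
    multiplier: float  # how much he scales the baseline order
    rationale: str     # why, in his own words

ADJUSTMENTS = [
    Adjustment("weather", "heatwave forecast next week", 1.30,
               "cold drinks and fans spike; learned this in 2008"),
    Adjustment("local_holiday", "regional festival in delivery zone", 1.15,
               "small shops stock up two weeks ahead"),
    Adjustment("competition", "rival opened within 2 km of a client", 0.85,
               "expect to lose walk-in volume for a quarter"),
]

# Once these live in a file instead of a head, two things become possible:
# auditing whether each rule actually holds, and later feeding the factors
# (weather, holidays, competitor openings) to a model as real features.
```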
In ecology they call it carrying capacity: the maximum number of organisms an ecosystem can sustain. If your data ecosystem can’t sustain an ML model, forcing it won’t work. First, enrich the ecosystem.
Project 3: Computer vision for quality control
The ask: A manufacturer wanted to use AI cameras to detect defects on their production line.
Why it sounded good: Defect rates were 3-4%, manual inspection was slow, and computer vision examples in manufacturing are abundant.
Why we said no: The product had natural variations in color, texture, and shape that were perfectly normal but, to an automated system, indistinguishable from some defects. Training a model would require thousands of correctly labeled images, and the definition of "defect" varied among human inspectors. What one approved, another rejected.
It wasn’t an AI problem. It was a quality definition problem the company hadn’t resolved internally. A computer vision system would have automated inconsistency, not quality.
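You can put a number on that inconsistency before spending anything on labeling: have two inspectors judge the same sample of parts and measure their agreement. A minimal sketch using Cohen's kappa from scikit-learn, on made-up labels:

```python
from sklearn.metrics import cohen_kappa_score

# Two inspectors judge the same 12 parts: 1 = defect, 0 = acceptable.
# Labels below are invented for illustration.
inspector_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
inspector_b = [1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0]

kappa = cohen_kappa_score(inspector_a, inspector_b)
print(f"Cohen's kappa: {kappa:.2f}")  # well below any usable threshold here

# Rule of thumb: below roughly 0.6, the inspectors aren't applying the
# same definition of "defect". A vision model trained on their labels
# would learn their disagreement, not the quality standard.
```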
What we recommended: Standardize quality criteria first. Create a visual manual with clear examples of what’s acceptable and what’s not. Train human inspectors on those criteria. Then, when human consistency improved, use those consistent inspections as training data for the model.
The pattern
All three projects have something in common: the problem wasn’t technical. It was organizational. Data wasn’t ready, processes weren’t defined, or expectations weren’t aligned with reality.
The temptation is to say “AI will solve it.” But AI doesn’t solve organizational problems. It amplifies them. If your process is inconsistent, AI will be too. If your data is bad, results will be bad. If your expectations are unrealistic, disappointment will be real.
Why this matters
Saying “no” to a project isn’t losing a client. It’s gaining credibility. All three clients from these examples came back later with better-defined projects, better prepared, and with real results.
Honesty doesn’t sell in the pitch. But it sells in the long run. And in consulting, the long run is the only thing that matters.