We gave AI our entire product roadmap and asked it to predict our failure points. It was brutal.
We ran an experiment two weeks ago.
Control group: a two-hour roadmap review meeting. Six people in a room (virtual). We debated features. We argued about timelines. We discussed dependencies. We left feeling productive.
Test group: We fed the same roadmap into Claude. No slides. No politics. No one trying to protect their pet project. Just the raw plan. The prompt: "Analyze this roadmap. Identify the three most likely failure points. Use first‑principles reasoning. Assume we will follow your recommendations without ego. If you need more data, ask for it."
The results were not symmetrical.
Failure point #1: The unvalidated feature.
The roadmap had a feature labeled "Priority 1." Estimated build: six weeks. Claude flagged it immediately.
"No user research cited. No support ticket volume. No search volume in your category. This is a solution looking for a problem. What signal are you acting on?"
We went back through our internal docs. Zero customer interviews. Zero mentions in support tickets over 12 months. Zero search volume in our own analytics. The feature was in the roadmap because a senior engineer thought it would be "elegant."
We were already three weeks into development.
Failure point #2: The integration with no ROI.
Another line item: "Integrate with Platform X." Estimated: eight weeks.
Claude's analysis: "What outcome does this drive? No stated goal. No success metric. No evidence customers are asking for this. Without these, this is a cost center disguised as a feature."
We pulled our customer call transcripts from the past six months. 147 calls. Not one mention of Platform X. We pulled our competitor tracking data. Not one competitor in our space had built this integration. The feature was in the roadmap because a partner had mentioned it once in a casual conversation.
We were about to assign two engineers to it for two months.
Failure point #3: The pricing change without data.
A third item was labeled "Pricing review — Q2." No details. No attached research.
Claude's analysis: "Pricing changes without customer segmentation data, churn analysis, or willingness‑to‑pay research have a 70%+ failure rate in B2B SaaS. What is this based on?"
We checked. We had run zero analysis. We had not interviewed a single customer. We had not modeled the impact on existing vs new customers. The initiative was in the roadmap because we "felt like we should charge more."
What we learned about how we work:
The control group meeting lasted two hours. We spent 40 minutes debating the Priority 1 feature. Not one person asked, "Do we have any evidence anyone wants this?" We spent 30 minutes discussing the integration. Not one person asked, "Has anyone actually asked for this?" We spent 20 minutes on pricing. Not one person requested data.
We left feeling productive because we had made decisions. But we had made decisions in an information vacuum.
The AI analysis took 90 seconds. It asked three questions we should have asked ourselves.
What we're building now:
We've started using this as a forcing function. Before any roadmap item gets approved, we run it through a simple checklist:
What customer signal are we acting on? (Support tickets, interviews, search data.)
What outcome are we trying to drive? (Not "launch feature," but "increase retention by X.")
What data do we have that this will work? (Not "it makes sense," but "we tested it and X happened.")
We also started feeding our own data back into the process. We track AI visibility stats across 7 systems — things like how often brands get cited, what content formats win, where the gaps are. We realized we weren't applying that data to our own roadmap. Now we run new feature ideas through that same lens. If our own data says nobody in our category is citing something, maybe we shouldn't build it.
If an item can't answer those three questions, it doesn't go on the roadmap. Not because AI said so. Because we realized we were spending months building things nobody asked for.
What we're curious about:
What's your process for catching yourself before you build something nobody wants? Do you use data? Do you use AI? Or do you find out when nobody uses it?
Imed Radhouani
Founder & CTO – Rankfender
Built on feedback, not ego
Replies
A lot of insights in this post. My take: in my case, the cost of building a feature is often lower than the cost of gathering and analyzing the data. Depending on what I'm building, it's worth the shot, sometimes just as a marketing pitch and an excuse to publish on social media, even when I'm aware no one is going to use it.
One thing I think many businesses struggle with is identifying which products or features will a) enhance the product offering and b) make users and the community feel heard.
There's something really interesting about the fact that AI has no stake in the outcome. No ego, no awkwardness, no bad vibes after the call. Getting honest feedback from real users can sometimes be uncomfortable; people are polite, or they ghost, or the feedback stings a little.
At getrecall.ai, we have a feature request board where users can submit their feature requests, and others can upvote them. Features with the most upvotes generally get prioritized in future builds. It's low friction for them, but real signal for us. We also have calls with frequent users to get honest feedback.
Would love to hear how the AI analysis landed with the team, though. Did it ruffle any feathers?
Rankfender
@nicole_howitt You're right about the ego thing. That's the part nobody talks about. Getting feedback from real users is great, but there's always that moment where someone says something and you feel yourself getting defensive. Even when they're right. Especially when they're right.
The upvote board is smart. Low friction, real signal. The problem we had was that the loudest users weren't always the ones who actually knew what they were talking about. We had a feature with tons of upvotes. Built it. Nobody used it. Turns out people were upvoting because it sounded cool, not because they needed it.
The AI analysis landed better than I expected. I was nervous about how the team would take it. But when you strip out the politics and just look at the data, it's hard to argue. No one was offended because no one was being blamed. The data just showed us what we already knew but weren't saying out loud.
The feature nobody asked for? That was my idea. AI called it out. I had to sit there and realize I was three weeks into building something nobody wanted. That stung. But it's better to find out now than after launch.
Did the upvote board ever lead you down the wrong path?
The "zero mentions in 147 customer calls" line is the killer. That's exactly the kind of signal humans skip because the feature "feels right."
Seeing the same pattern on the idea validation side: founders build things nobody asked for because one friend said "that's cool." The brutal part isn't the AI saying no. It's realizing you could've found out weeks ago if you'd just looked at the data.
This is one of the more honest uses of AI for product work I have seen. Most people use it to validate what they already believe. Asking it to actively find failure points is a very different posture.
The brutal feedback is usually the most valuable. We did something similar with Hello Aria (AI assistant, launching on PH April 10th): we gave Claude the full product spec and user flows and asked, "What will users misunderstand on day one?" The output was uncomfortable but directly shaped our onboarding.
Two things I would add to your experiment: (1) ask it to predict failure points from the competitor's perspective too, and (2) give it actual user feedback if you have any; it gets much more specific when grounded in real signals rather than just spec documents.
What was the most uncomfortable prediction it made?