The feature that almost killed our product was the one users asked for the most
For months, our most requested feature at Murror was a chat function. Users wanted to talk to the AI the way they talk to a friend. It seemed obvious. Every competitor had it. Every feedback form mentioned it.
So we built it.
And within two weeks, our core metrics started dropping. Session length went down. Return rate went down. The thing users said they wanted was actively making the product worse.
Here is what we discovered when we dug into the data: the chat format changed how people related to Murror. Instead of reflecting on their emotions, they started treating it like a customer service bot. "Fix my anxiety." "Tell me why I'm sad." The entire dynamic shifted from self-discovery to outsourcing their emotional processing.
The original experience, which was more structured and guided, worked precisely because it created space for people to sit with their feelings. The chat format removed that space.
We ended up pulling the feature after three weeks and replacing it with something we called "guided conversations." It looks like chat on the surface, but it has built-in pauses, reflective prompts, and intentional pacing. It does not let you rush through your own emotions.
The result was better than both the old experience and the pure chat. But we never would have gotten there if we had just built what users asked for without questioning why they wanted it.
I think this is one of the hardest lessons in product building: your users can tell you what they feel is missing, but they cannot always tell you what the solution should look like. That gap between the expressed need and the right solution is where product intuition lives.
Has anyone else experienced this? Built something users demanded only to find it hurt the product?



Replies
The idea of built-in pacing is underrated. Sometimes the best design choice is deliberately slowing the user down instead of optimizing for speed.
Murror
@Paige Lauren @Evelyn White You both nailed it: users are incredibly good at identifying the problem, but the solution they imagine is filtered through what they have seen before. Our job as builders is to listen deeply to the pain and then take a creative leap on the fix. That gap between what users ask for and what actually helps is where the real product work happens.
@Joshua Hayes That is such an important point. Engagement metrics can create a dangerous illusion. We saw this firsthand: chat sessions looked great on paper, but the outcomes people cared about (feeling understood, gaining clarity) actually declined. Now we focus much more on outcome-based metrics rather than pure engagement. It is harder to measure but far more honest.
@monatruong_murror I really like this reframing, especially the shift from "should we add chat?" to "how do we preserve reflection while adding a conversational feel."
It highlights a bigger product truth: users are often accurate about the pain, but not the shape of the solution.
Curious, did that realization happen gradually through experimentation, or was there a specific moment when it became clear chat wasn't actually solving the underlying goal?
Honestly, this is why product metrics can be misleading. Users often engage more with easier interfaces even if outcomes are worse.
This is a strong reminder that users are usually very accurate about the pain, but much less accurate about the interface that should solve it. The guided-conversation pivot feels like the real product insight here, not the rollback.
Did they say why they wanted the chat feature?
Murror
A few more replies to the newer comments here:
@Luca Ardito Thank you, and I think you captured it well. The rollback itself was not the insight; it was what came after. When we started designing the guided conversations, we realized the real question was never "should we add chat?" It was "how do we preserve the reflective space users need while giving them the conversational feel they are drawn to?" That reframe changed everything about how we approached the solution.
@Malahat Hosseini Great question. When we dug into the feedback, most users said they wanted chat because they wanted to feel like they were "talking to someone who gets them." The underlying need was connection and responsiveness, not necessarily a freeform text box. They associated that feeling with chat because that is the format they knew from other apps. Once we understood the emotional need behind the request, we were able to design guided conversations that delivered that sense of connection without sacrificing the structure that made Murror effective.
This is a really interesting example of how the interface shapes behavior more than the feature itself.
It feels like the request for "chat" was valid, but the interpretation of it changed the outcome completely.
Curious, do you now treat feature requests more as signals of intent rather than something to build directly?
Feels like the real challenge is not collecting feedback, but interpreting what users actually mean underneath it.
The strongest part of this story is that the metric drop forced you to question not just the feature, but the relationship the product creates.
That is a much harder level of product thinking than feature voting.
Curious if there was a specific metric that made the problem undeniable first.
Murror
Catching up on a few more thoughtful replies here:
@Miles Anthony - Great question. The pauses were not intentional at first. In the original guided experience, there were natural gaps between prompts simply because of how the flow was structured. When we built the chat version, those gaps disappeared and everything felt rushed. It was only after we saw the data decline that we realized the pauses were not dead space; they were where the real processing happened. So when we designed guided conversations, we made the pauses deliberate and even extended some of them. It was one of those cases where a "bug" in the old design turned out to be a feature.
@Paige Lauren - We actually did explore a few other approaches before landing on guided conversations. One was a "prompt of the day" model where users got a single reflective question each day. Another was a voice-based experience. Both had merit but felt too limiting. The chat framing kept coming up because users associated that format with feeling heard and responded to. Once we understood that the core desire was responsiveness and connection rather than freeform input, guided conversations became the natural middle ground.
@Luca Ardito - The metric that made it undeniable was return rate. Session length dropping was concerning but could be explained away. When we saw that users who tried chat were coming back less frequently than those who stayed on the guided path, it became clear something deeper was off. That was the signal that made us pull the feature rather than try to iterate on it.
@Gilmore - Yes, that shift has been fundamental for us. We now treat almost every feature request as a signal of intent rather than a specification. When someone says "I want chat," we ask ourselves what emotional need is driving that request. It has completely changed our product process. We spend more time in the "why" before we ever touch the "what." It slows down our shipping speed slightly, but the features we do ship land much better because they are solving the right problem.