The feature that almost killed our product was the one users asked for the most

by Mona Truong

For months, our most requested feature at Murror was a chat function. Users wanted to talk to the AI the way they talk to a friend. It seemed obvious. Every competitor had it. Every feedback form mentioned it.

So we built it.

And within two weeks, our core metrics started dropping. Session length went down. Return rate went down. The thing users said they wanted was actively making the product worse.

Here is what we discovered when we dug into the data: the chat format changed how people related to Murror. Instead of reflecting on their emotions, they started treating it like a customer service bot. "Fix my anxiety." "Tell me why I'm sad." The entire dynamic shifted from self-discovery to outsourcing their emotional processing.

The original experience, which was more structured and guided, worked precisely because it created space for people to sit with their feelings. The chat format removed that space.

We ended up pulling the feature after three weeks and replacing it with something we called "guided conversations." It looks like chat on the surface, but it has built-in pauses, reflective prompts, and intentional pacing. It does not let you rush through your own emotions.

The result was better than both the old experience and the pure chat. But we never would have gotten there if we had just built what users asked for without questioning why they wanted it.

I think this is one of the hardest lessons in product building: your users can tell you what they feel is missing, but they cannot always tell you what the solution should look like. That gap between the expressed need and the right solution is where product intuition lives.

Has anyone else experienced this? Built something users demanded only to find it hurt the product?


Replies

Mona Truong

Thank you all for the thoughtful comments! To answer a few questions:

@Jade Melissa - Great question. There was actually a surprising mismatch. Our most vocal requesters were power users already engaged with the guided format, but once chat launched, newer users gravitated toward it and their retention dropped fastest. The power users tried it briefly but mostly went back on their own.

@Edward Baker - We looked at which moments in the original experience had the highest emotional engagement and kept those as anchors. Then we layered in chat-like input around them so it felt conversational without losing the reflective structure.

@Leah Josephine - The drop was not immediate. The first few days looked promising because novelty drove usage up. It was around day 5-6 when we noticed session lengths shrinking and return rates declining. That delay made it harder to catch early.

Jade Melissa

@monatruong_murror The mismatch is honestly the most interesting part. The people asking loudest were not actually the ones whose behavior changed the most once it shipped. Feels like a great reminder that feature demand and feature fit are not always the same thing. Also makes sense why this would be so hard to catch early if the initial usage looked strong.

Mona Truong

Exactly right. Feature demand and feature fit are two very different things. We now track not just what users ask for, but who is asking and how they currently use the product. It has changed how we prioritize our roadmap. The loudest voices are not always the most representative ones.

Jade Melissa

@monatruong_murror That makes total sense. Tracking who is asking versus how they actually engage seems like a subtle but huge shift in product intuition. It's a great reminder that loud feedback doesn't always equal representative feedback, and it really highlights why understanding real user behavior over time matters more than immediate feature requests.

Leah Josephine

@monatruong_murror The delay is honestly what makes this so tricky. If the first few days looked strong I can see how it would have been so easy to read it as validation instead of novelty. Really good reminder that short term engagement can sometimes hide long term behavior changes.

Mona Truong

@leah_josephine That is something we talk about a lot internally now. We have started separating novelty engagement from habitual engagement in how we read our metrics. The first few days after any change are almost always misleading. We now wait at least two weeks before drawing conclusions about whether a feature is actually working.
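
Not our exact pipeline, but to make the idea concrete, here is a minimal sketch of one way to split the two windows. It assumes a simple log of (user_id, days-since-launch) session events; the window sizes and names are illustrative, not the thresholds we actually use.

```python
import pandas as pd

# Illustrative session log: one row per user per day of activity,
# with "day" counted from the feature launch.
sessions = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "day":     [0, 1, 9, 0, 2, 0, 1, 8, 15],
})

NOVELTY_WINDOW = 4        # days 0-3: usage likely driven by novelty
HABIT_WINDOW = (7, 14)    # days 7-14: closer to habitual usage

novelty_users = set(sessions.loc[sessions["day"] < NOVELTY_WINDOW, "user_id"])
habitual_users = set(
    sessions.loc[sessions["day"].between(*HABIT_WINDOW), "user_id"]
)

# Of the users who showed up during the novelty window, how many
# are still active a week or more later?
retained = novelty_users & habitual_users
rate = len(retained) / len(novelty_users) if novelty_users else 0.0
print(f"{len(retained)} of {len(novelty_users)} novelty-window users "
      f"still active in days {HABIT_WINDOW[0]}-{HABIT_WINDOW[1]} ({rate:.0%})")
```

The point is just that the same raw sessions table supports two very different reads depending on which window you look at.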

Leah Josephine

@monatruong_murror That makes a lot of sense. Separating novelty from habitual engagement feels like such an important shift, especially since early signals can be so misleading. Waiting longer before drawing conclusions seems like a much more reliable way to understand real user behavior.

Jade Melissa

Curious whether the people asking for chat were the same people who actually used it the most once it launched.

Ian Maxwell

That shift from self-reflection to emotional outsourcing feels like a huge product behavior change.

Leah Josephine

Did you see the drop happen immediately, or did it only become obvious after a few days of usage?

Miles Anthony

The built-in pauses part really stands out. Sometimes less speed creates a better experience.

Mona Truong

Really appreciate all the thoughtful replies here. A few responses:

@Ian Maxwell - That shift was the biggest surprise for us. The same user, same product, completely different relationship just because the interface changed. It taught us that how you ask someone to engage shapes what they are willing to feel.

@Kyle Bennett @Miles Anthony @Edward Curtis - The pacing element has become central to how we think about Murror now. We actually found that the moments of silence between prompts are where the deepest self-reflection happens. Faster is not always better when the goal is emotional clarity.

@Paige Lauren @Evelyn White - Completely agree. We have started framing it internally as "users diagnose the symptom, our job is to find the cause." It keeps us from being reactive while still honoring what people tell us.

@Joshua Hayes - That is such a good point about metrics being misleading. Easier interfaces can feel better in the moment while producing worse outcomes. We now look at outcome metrics alongside engagement metrics to catch that gap early.

Ian Maxwell

@monatruong_murror Absolutely, it's fascinating how design choices shape behavior so strongly. Makes me think outcome metrics are just as important as engagement metrics for understanding real impact.

Miles Anthony

@monatruong_murror That's really interesting, especially how those quiet moments ended up being the most impactful part.

It almost feels like the pauses aren't just pacing, but something users actively need to process what's happening. Without them, the experience probably becomes easier to move through but less meaningful.

Curious if those pauses were something you intentionally designed early on or something you discovered after seeing how users interacted?

Edward Curtis

The idea of built-in pacing is underrated. Sometimes the best design choice is deliberately slowing the user down instead of optimizing for speed.

Joshua Hayes

Honestly, this is why product metrics can be misleading. Users often engage more with easier interfaces even if outcomes are worse.

Mona Truong

@Paige Lauren @Evelyn White You both nailed it — users are incredibly good at identifying the problem, but the solution they imagine is filtered through what they have seen before. Our job as builders is to listen deeply to the pain and then take a creative leap on the fix. That gap between "what users ask for" and "what actually helps" is where the real product work happens.

@Joshua Hayes That is such an important point. Engagement metrics can create a dangerous illusion. We saw this firsthand — chat sessions looked great on paper, but the outcomes people cared about (feeling understood, gaining clarity) actually declined. Now we focus much more on outcome-based metrics rather than pure engagement. It is harder to measure but far more honest.

