Mona Truong

The feature that almost killed our product was the one users asked for the most


For months, our most requested feature at Murror was a chat function. Users wanted to talk to the AI the way they talk to a friend. It seemed obvious. Every competitor had it. Every feedback form mentioned it.

So we built it.

And within two weeks, our core metrics started dropping. Session length went down. Return rate went down. The thing users said they wanted was actively making the product worse.

Here is what we discovered when we dug into the data: the chat format changed how people related to Murror. Instead of reflecting on their emotions, they started treating it like a customer service bot. "Fix my anxiety." "Tell me why I'm sad." The entire dynamic shifted from self-discovery to outsourcing their emotional processing.

The original experience, which was more structured and guided, worked precisely because it created space for people to sit with their feelings. The chat format removed that space.

We ended up pulling the feature after three weeks and replacing it with something we called "guided conversations." It looks like chat on the surface, but it has built-in pauses, reflective prompts, and intentional pacing. It does not let you rush through your own emotions.
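For anyone curious what "intentional pacing" can look like mechanically, here is a minimal sketch. This is not Murror's actual implementation; the pause length, prompt text, and function name are all hypothetical:

```python
import time

# Hypothetical sketch of one "guided conversation" turn: it accepts
# chat-like input but enforces a pause and replies with a reflective
# prompt rather than an instant answer. All names and values are
# illustrative assumptions, not Murror's real design.
DEFAULT_PAUSE_SECONDS = 8  # assumed minimum dwell time between turns

def guided_turn(user_input: str, pause_seconds: float = DEFAULT_PAUSE_SECONDS) -> str:
    """Return a reflective prompt after an intentional pause."""
    time.sleep(pause_seconds)  # the pause is the feature: no rushing past a feeling
    return f'You said: "{user_input}". What feeling sits underneath that?'
```

The point of the sketch is that the pause lives in the interaction loop itself, so the interface cannot be rushed the way an open-ended chat can.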

The result was better than both the old experience and the pure chat. But we never would have gotten there if we had just built what users asked for without questioning why they wanted it.

I think this is one of the hardest lessons in product building: your users can tell you what they feel is missing, but they cannot always tell you what the solution should look like. That gap between the expressed need and the right solution is where product intuition lives.

Has anyone else experienced this? Built something users demanded only to find it hurt the product?


Replies

Mona Truong

Thank you all for the thoughtful comments! To answer a few questions:

@Jade Melissa - Great question. There was actually a surprising mismatch. Our most vocal requesters were power users already engaged with the guided format, but once chat launched, newer users gravitated toward it and their retention dropped fastest. The power users tried it briefly but mostly went back on their own.

@Edward Baker - We looked at which moments in the original experience had the highest emotional engagement and kept those as anchors. Then we layered in chat-like input around them so it felt conversational without losing the reflective structure.

@Leah Josephine - The drop was not immediate. The first few days looked promising because novelty drove usage up. It was around day 5-6 when we noticed session lengths shrinking and return rates declining. That delay made it harder to catch early.

Jade Melissa

@monatruong_murror The mismatch is honestly the most interesting part. The people asking loudest were not actually the ones whose behavior changed the most once it shipped. Feels like a great reminder that feature demand and feature fit are not always the same thing. Also makes sense why this would be so hard to catch early if the initial usage looked strong.

Mona Truong

Exactly right. Feature demand and feature fit are two very different things. We now track not just what users ask for, but who is asking and how they currently use the product. It has changed how we prioritize our roadmap. The loudest voices are not always the most representative ones.

Jade Melissa

@monatruong_murror That makes total sense. Tracking who is asking versus how they actually engage seems like a subtle but huge shift in product intuition. It's a great reminder that loud feedback doesn't always equal representative feedback, and it really highlights why understanding real user behavior over time matters more than immediate feature requests.

Leah Josephine

@monatruong_murror The delay is honestly what makes this so tricky. If the first few days looked strong I can see how it would have been so easy to read it as validation instead of novelty. Really good reminder that short term engagement can sometimes hide long term behavior changes.

Mona Truong

@leah_josephine That is something we talk about a lot internally now. We have started separating novelty engagement from habitual engagement in how we read our metrics. The first few days after any change are almost always misleading. We now wait at least two weeks before drawing conclusions about whether a feature is actually working.
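A minimal sketch of what that separation might look like in analysis code. The window sizes and helper name are hypothetical illustrations, not an actual pipeline:

```python
from datetime import date

# Hypothetical cutoffs: the first few days after launch are treated as
# the "novelty" window, and usage after a settling period as "habitual".
NOVELTY_WINDOW_DAYS = 5   # assumed length of the novelty spike
HABIT_WINDOW_START = 14   # assumed start of the habitual window

def split_engagement(launch: date, sessions: list[date]) -> dict:
    """Count one user's sessions in the novelty window vs the habitual window."""
    novelty = sum(1 for s in sessions
                  if 0 <= (s - launch).days < NOVELTY_WINDOW_DAYS)
    habitual = sum(1 for s in sessions
                   if (s - launch).days >= HABIT_WINDOW_START)
    return {"novelty": novelty, "habitual": habitual}

# A user who looks engaged at launch but has largely dropped off later:
launch = date(2024, 3, 1)
sessions = [date(2024, 3, 1), date(2024, 3, 2), date(2024, 3, 3),
            date(2024, 3, 20)]
print(split_engagement(launch, sessions))  # {'novelty': 3, 'habitual': 1}
```

Reading the two counts separately is what surfaces the pattern described above: strong early numbers that quietly fade once the novelty wears off.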

Leah Josephine

@monatruong_murror That makes a lot of sense. Separating novelty from habitual engagement feels like such an important shift, especially since early signals can be so misleading. Waiting longer before drawing conclusions seems like a much more reliable way to understand real user behavior.

Mona Truong

Really appreciate all the thoughtful replies here. A few responses:

@Ian Maxwell - That shift was the biggest surprise for us. The same user, same product, completely different relationship just because the interface changed. It taught us that how you ask someone to engage shapes what they are willing to feel.

@Kyle Bennett @Miles Anthony @Edward Curtis - The pacing element has become central to how we think about Murror now. We actually found that the moments of silence between prompts are where the deepest self-reflection happens. Faster is not always better when the goal is emotional clarity.

@Paige Lauren @Evelyn White - Completely agree. We have started framing it internally as "users diagnose the symptom, our job is to find the cause." It keeps us from being reactive while still honoring what people tell us.

@Joshua Hayes - That is such a good point about metrics being misleading. Easier interfaces can feel better in the moment while producing worse outcomes. We now look at outcome metrics alongside engagement metrics to catch that gap early.

Ian Maxwell

@monatruong_murror Absolutely, it's fascinating how design choices shape behavior so strongly. Makes me think outcome metrics are just as important as engagement metrics for understanding real impact.

Kyle Bennett

@monatruong_murror Totally agree, less speed can mean more depth. The pacing really seems key to maintaining reflection.

Miles Anthony

@monatruong_murror That's really interesting, especially how those quiet moments ended up being the most impactful part.

It almost feels like the pauses aren't just pacing, but something users actively need to process what's happening. Without them, the experience probably becomes easier to move through but less meaningful.

Curious whether those pauses were something you intentionally designed early on or something you learned after seeing how users interacted?

Paige Lauren

@monatruong_murror That distinction between wanting "chat" and wanting to feel understood is really insightful.
It sounds like users weren't really asking for a messaging interface, they were asking for a faster path to that sense of connection and responsiveness.

Curious if you explored other ways to deliver that feeling before settling on guided conversations, or did the "chat" framing come through as the strongest signal early on?

Edward Baker

How did you decide which parts of the chat to keep for guided conversations without losing engagement?

Evelyn White

That last part is probably the trickiest in product development: users usually know the pain well, but not always the best way to solve it.

Jade Melissa

Curious whether the people asking for chat were the same people who actually used it the most once it launched.

Ian Maxwell

That shift from self-reflection to emotional outsourcing feels like a huge product behavior change.

Kyle Bennett

The guided conversations idea sounds much smarter than just adding open ended chat because it keeps the purpose intact.

Leah Josephine

Did you see the drop happen immediately, or did it only become obvious after a few days of usage?

Miles Anthony

The built in pauses part really stands out. Sometimes less speed creates a better experience.

Paige Lauren

The last point is probably one of the hardest parts of product building. Users are often very right about the pain, but not always about the shape of the solution.
