The feature that almost killed our product was the one users asked for the most
For months, our most requested feature at Murror was a chat function. Users wanted to talk to the AI the way they talk to a friend. It seemed obvious. Every competitor had it. Every feedback form mentioned it.
So we built it.
And within two weeks, our core metrics started dropping. Session length went down. Return rate went down. The thing users said they wanted was actively making the product worse.
Here is what we discovered when we dug into the data: the chat format changed how people related to Murror. Instead of reflecting on their emotions, they started treating it like a customer service bot. "Fix my anxiety." "Tell me why I'm sad." The entire dynamic shifted from self-discovery to outsourcing their emotional processing.
The original experience, which was more structured and guided, worked precisely because it created space for people to sit with their feelings. The chat format removed that space.
We ended up pulling the feature after three weeks and replacing it with something we called "guided conversations." It looks like chat on the surface, but it has built-in pauses, reflective prompts, and intentional pacing. It does not let you rush through your own emotions.
The result was better than both the old experience and the pure chat. But we never would have gotten there if we had just built what users asked for without questioning why they wanted it.
I think this is one of the hardest lessons in product building: your users can tell you what they feel is missing, but they cannot always tell you what the solution should look like. That gap between the expressed need and the right solution is where product intuition lives.
Has anyone else experienced this? Built something users demanded only to find it hurt the product?



Replies
This feels like a useful case study in why teams should segment feedback by user type, not just volume.
The loudest request can come from the least representative group, while the highest-quality usage pattern comes from people who barely ask for anything.
Really good reminder to separate expressed desire from long-term value creation.
Murror
@Luca Ardito You are absolutely right that segmenting feedback by user type is one of the most underrated practices in product development. We learned this the hard way with the chat feature: the loudest voices were our power users who were already deeply engaged, but the behavioral shift hit newer users the hardest. Now we always look at who is asking alongside what they are asking. The people who quietly use your product in exactly the way it was designed often have the most to lose when you change things based on the loudest requests. Appreciate you framing it so clearly.
This resonates deeply. We went through a similar experience with ad-vertly.ai.
We had a segment of users constantly asking for a "recommendations feed": a passive stream of ad suggestions they could scroll through. It seemed obvious. We built it.
Engagement with the feed looked great in the first week. Then our core workflow metrics tanked. People were browsing, not building. The feed turned an action-oriented tool into a scroll experience. We killed it after three weeks.
The insight Mona nails here: users describe their desires in the vocabulary of features they already know. "Chat" meant "I want something more natural." "Feed" meant "I want less friction." The actual need was different from the literal request.
Now before we build anything, we ask: what behavior are we trying to produce? Not just what feature are we adding. That question kills a lot of bad ideas early.
Murror
Thank you all for continuing this conversation, it has been really meaningful to read through. A few more thoughts:
@Miles Anthony - Great question about the pauses. They were not intentional in the beginning. The original guided experience just naturally had them because of how the prompts were structured. When we built the chat version, we removed them without realizing they were doing important work. It was only after we saw the data that we understood their value and made them a deliberate part of the redesign. Sometimes the best features are the ones you discover by accident.
@Paige Lauren - We actually did explore a few other approaches before landing on guided conversations. We tested a voice-based format, a journal-style free write, and even a choose-your-own-path model. The chat framing kept surfacing in user feedback because people associated it with feeling heard. But what we learned is that the feeling of being heard comes from the right questions at the right pace, not from the chat interface itself. The guided conversation format gave us the best of both: the warmth of a dialogue with the structure that actually helps people reflect.
@Ian Maxwell @Kyle Bennett - You both nailed it. The relationship between interface design and emotional behavior is something we think about constantly now. The same content delivered differently produces completely different outcomes. It has made us much more careful about treating the medium as part of the message.
@Leah Josephine @Jade Melissa - Really appreciate you both staying in this thread. The point about loud feedback not equaling representative feedback has become one of our core principles. We now look at who is asking, how they use the product today, and whether the request aligns with the outcomes we are optimizing for. It has slowed down our roadmap but improved every decision we make.