We stopped tracking daily active users and our product got better

by Mona Truong

For the first year of building Murror, DAU was the number we checked every morning. It was the first thing on our dashboard, the first metric in every team meeting, and the number we used to judge whether a feature was working.

Then one day we noticed something strange: our DAU was climbing, but our NPS was dropping. People were opening the app more often, but they were less happy with it. Some of our most engaged users were showing signs of what we started calling "compulsive checking" — opening Murror out of habit rather than intention.

That was a turning point for us. We realized that for a product designed to help people reflect on their emotions, daily active usage was not just a vanity metric — it was potentially a harmful one. If someone is using Murror every single day, are they building self-awareness or are they developing a dependency on external validation of their feelings?

We made a controversial decision internally: we replaced DAU with what we call "meaningful sessions." A meaningful session is one where a user completes a full reflection cycle and reports feeling clearer afterward. It does not matter if they do that once a week or three times a day. What matters is whether the product is actually delivering on its promise.
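To make the definition concrete, here is a minimal sketch of how a metric like this could be computed from session logs. The `Session` fields and function names are my own illustration, not Murror's actual schema; the two boolean criteria mirror the definition above (a completed reflection cycle plus a self-report of feeling clearer).

```python
from dataclasses import dataclass

@dataclass
class Session:
    user_id: str
    completed_reflection: bool  # user finished a full reflection cycle
    felt_clearer: bool          # user self-reported feeling clearer afterward

def is_meaningful(session: Session) -> bool:
    # Both criteria from the definition must hold; frequency is irrelevant.
    return session.completed_reflection and session.felt_clearer

def meaningful_sessions_per_user(sessions: list[Session]) -> dict[str, int]:
    # Count qualifying sessions per user, ignoring how often they occur.
    counts: dict[str, int] = {}
    for s in sessions:
        if is_meaningful(s):
            counts[s.user_id] = counts.get(s.user_id, 0) + 1
    return counts
```

Note that a user who opens the app three times a day but never completes a reflection contributes nothing here, while a once-a-week user with a real reflection does, which is exactly the inversion of DAU the post describes.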

The results surprised us. Once we stopped optimizing for daily opens, we started making very different product decisions. We removed push notifications that were driving habitual check-ins. We added a "you seem good today" screen that actually encouraged people to close the app when their mood patterns looked stable. We introduced weekly reflection summaries instead of daily prompts.
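The "you seem good today" screen implies some heuristic for deciding that a mood pattern looks stable. The post does not say how Murror does this, so the following is purely a hypothetical sketch: treat the pattern as stable when there are enough recent mood scores and their spread is small. The thresholds are invented placeholders.

```python
import statistics

def mood_looks_stable(recent_scores: list[float],
                      min_days: int = 5,
                      max_stddev: float = 0.75) -> bool:
    # Hypothetical heuristic: require enough recent data points and
    # low variability before suggesting the user close the app.
    if len(recent_scores) < min_days:
        return False
    return statistics.pstdev(recent_scores) <= max_stddev
```

A real product would likely use something richer (trend detection, clinical input), but even this toy version shows the unusual design goal: the code path that fires when things look good is the one that ends the session.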

Our DAU dropped by about 30 percent. Our meaningful sessions per user went up by 45 percent. And our retention at 90 days actually improved because the people who stayed were getting real value, not just a dopamine loop.

I think this connects to a bigger conversation in tech right now: we have built an entire industry around engagement metrics that may not align with user wellbeing. For tools that deal with mental health, productivity, or personal growth, "more usage" is not always "better outcomes."

The hardest part was convincing ourselves (and our investors) that a declining DAU chart could actually be a sign of a healthier product. It goes against everything the startup playbook teaches you.

Has anyone else wrestled with this tension between engagement metrics and actual user value? How do you measure success when "less usage" might mean your product is working?
