Why we built AI that prepares you for hard conversations instead of replacing them
There's a pattern in AI products right now that worries me: the goal is to make AI the relationship.
AI friends. AI therapists. AI partners. The pitch is always the same — humans are complicated, AI is easy. No judgment, available 24/7, infinitely patient.
But at Murror, we kept asking ourselves: what happens when the user closes the app? If the best version of their emotional life only exists inside our product, we haven't solved loneliness — we've just made it more comfortable.
So we took a different approach. Instead of building AI that replaces human connection, we built AI that prepares you for it.
Here's what that looks like in practice:
Before a difficult conversation — the app helps you understand what you're actually feeling and why, so you can show up more clearly with the real person.
After a misunderstanding — instead of venting to an AI that always agrees with you, Murror helps you see the other person's perspective and figure out what to say next.
In moments of self-doubt — rather than offering endless validation, the app helps you identify patterns in how you relate to others, so you can actually grow.
The hardest part of building this way is that success looks different. We can't optimize for "conversations with AI" because our whole thesis is that the most important conversations happen outside the app. We measure whether users report feeling more confident in their real relationships — which is harder to track but way more meaningful.
I think there's a broader question here for anyone building in the AI space: are we building tools that make humans more capable, or tools that make humans more dependent?
Would love to hear from other builders thinking about this tension.
Replies
How do you define "feeling more confident" in real relationships in a measurable way?
Murror
@evan_cooper1 Great question — it's honestly one of the hardest things we track. We use a simple self-reported check-in after key conversations: "Did you feel more prepared going into that conversation?" and "Did you say what you actually wanted to say?" It's not perfect, but the signal has been surprisingly consistent. Users who stick with Murror for 3+ months report feeling less anxious before difficult conversations, even when they haven't used the app that week. That's the metric we care about most — confidence that outlasts the session.
Have you noticed users becoming more independent over time, or still relying on the app regularly?
Murror
@jack_sullivan5 Yes, and that's actually one of the most encouraging patterns we see. Our most successful users gradually shift from using Murror before every conversation to using it mainly for the really tough ones — a salary negotiation, a conflict with a close friend, setting a boundary with family. Their overall usage goes down, but the moments they do use it are more intentional. We see that as a win, not a retention problem. The app should be a tool you reach for when you need it, not a habit you can't break.
How do you balance usefulness with the risk of users overusing the tool before every interaction?
Murror
@wyatt_cameron This is something we think about a lot. Our approach is to design Murror around specific moments rather than open-ended use. You don't just "open the app and chat" — you come with a specific conversation in mind, work through it, and then go have the real one. That natural endpoint built into each session helps prevent overuse. We also intentionally don't send push notifications or nudge people to come back. If you're not facing something difficult, there's no reason to open Murror — and we're fine with that.
Do you see this evolving more into a coaching tool or staying strictly focused on pre-conversation preparation?
Murror
@leah_josephine Great question. Right now we're intentionally staying focused on pre-conversation preparation because that's where we see the most impact — the moment before someone has a difficult conversation is when they're most open to reflection. That said, we're starting to see natural coaching-like moments emerge, especially when users come back after the conversation and reflect on what actually happened vs. what they expected. We're paying close attention to those patterns, but we want to be careful about scope creep. The worst thing we could do is try to become everything and lose what makes the focused experience valuable.
This framing really resonates — "more capable vs more dependent" is the question I wish more AI builders were asking out loud. The temptation to optimize for session length is real because it's easy to measure, but you're right that it's the wrong target if the goal is actually helping people. How are you measuring whether someone showed up better in the real conversation? That feels like the hard part.
Murror
@tijogaucher You're right — it is the hard part. We can't sit in on the real conversation, so we rely on what users tell us after. We ask two things: "Did you feel more prepared going in?" and "Did the conversation go closer to how you wanted it to?" It's imperfect and self-reported, but the consistency has been surprising. Users who prep with Murror report feeling more in control of difficult moments — not that the conversation went perfectly, but that they said what they actually meant. We're also exploring lightweight follow-up prompts that help users reflect on the gap between preparation and reality, which gives us richer signal over time.
@monatruong_murror I appreciate this, following you guys closely now!
Maybe the line between "more capable" and "more dependent" matters less than we think.
If someone uses AI to process their emotions, rehearse a hard conversation, or simply feel less alone before showing up — that's preparation.
The real question isn't who you're talking to. It's whether you're more ready for the real world after.
Murror
@summerxia This really resonates with how we think about it. The distinction we keep coming back to isn't capable vs. dependent — it's whether the person is growing or just consuming. Someone who uses AI to rehearse a hard conversation and then goes and has it for real — that's growth, even if they needed help getting there. The problem starts when the AI becomes the destination instead of the launchpad. We want Murror to be something people use less over time because they've internalized the skills, not more because they can't function without it.
Strong thesis. AI that prepares people for real conversations feels far more durable than AI that tries to replace them.
Murror
@alpertayfurr Thank you, Alper. That's exactly the bet we're making. Tools that prepare people for real life build lasting value because the skill transfer compounds — every hard conversation someone navigates better makes the next one a little easier, with or without us.