There's something counterintuitive about building an AI product in the mental health and self-awareness space: if you're doing it right, your users should eventually need you less.
Most product teams optimize for stickiness: more sessions, more time in app, more daily returns. But at Murror, we've been wrestling with a different question: what if the goal of our product is to help someone build enough self-understanding that they don't need to open the app as often?
In November 25, AI Context Flow was #1 Product of the Day and #1 Productivity Tool of the Week. It was surreal.
Since then, we have been building in public, together with this amazing community here.
You believed in this before it was polished. You gave us feedback when it was rough. You kept asking for more, and that pushed us to build and deliver more.
For the first year of building Murror, we optimized for the same metrics every other app optimizes for: daily active users, session length, screens per visit. The dashboard looked healthy. Usage was growing. We felt good about it.
But something was off. Our most engaged users were not our happiest users. People who spent the most time in the app were often the ones who left the harshest feedback. Meanwhile, users who opened the app twice a week for five minutes were writing us emails about how it changed how they handle difficult conversations.
I've been building my app for 8 months now, and I ended up with 5 repositories:
- nextjs app
- databases
- customer-facing API
- node-sdk that wraps the API
- react-sdk, both for reusing shared components and for customer-facing components
So I thought it would be great to create a monorepo with submodules. It was terrible. I realized that Turborepo does not play well with external packages, and as I tried to reuse my own customer-facing libs, the DX fell apart. Shipping a feature became very time-consuming. Even when I wanted to use Codex or Cursor 3, they could not show git diffs properly, and I could not use Cursor's cloud agents to ship complex features.
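For what it's worth, the setup that usually avoids this pain is keeping every package as a first-class pnpm workspace member instead of a git submodule, so Turborepo can see the whole dependency graph. A minimal sketch (the directory names are assumptions based on the repo list above):

```yaml
# pnpm-workspace.yaml at the monorepo root
packages:
  - "apps/*"      # the nextjs app
  - "packages/*"  # customer-facing API, node-sdk, react-sdk, db schemas
```

Internal dependencies are then declared as `"@your-scope/node-sdk": "workspace:*"` in each consumer's package.json, which pnpm resolves locally with no submodule pinning involved.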
Not a launch post. Just things I wish someone had written down before I spent a month figuring them out.
1. LLMs send partial payloads on write operations
You ask the agent to update a record. It sends only the fields you mentioned in the prompt. The PUT request goes through, returns 200, and you've silently wiped every field you didn't specify.
The fix: before every write call, fetch the current resource state via the companion GET endpoint and deep-merge the LLM's payload on top. The LLM only needs to specify what's changing; the executor fills in the rest.
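Here is a minimal sketch of that GET-then-merge guard. `safeUpdate`, `getResource`, and `putResource` are hypothetical names, not any real SDK; the point is only that the full current record, not the LLM's partial payload, is what gets PUT:

```typescript
// Sketch of the GET-then-merge guard described above (all names illustrative).
type Json = { [key: string]: unknown };

// Recursively overlay the LLM's partial payload onto the current record.
function deepMerge(current: Json, patch: Json): Json {
  const out: Json = { ...current };
  for (const [key, value] of Object.entries(patch)) {
    const existing = out[key];
    if (
      value && typeof value === "object" && !Array.isArray(value) &&
      existing && typeof existing === "object" && !Array.isArray(existing)
    ) {
      out[key] = deepMerge(existing as Json, value as Json);
    } else {
      out[key] = value; // scalars and arrays are replaced wholesale
    }
  }
  return out;
}

// Hypothetical executor wrapper around a write tool call.
async function safeUpdate(
  getResource: () => Promise<Json>,
  putResource: (body: Json) => Promise<void>,
  llmPayload: Json,
): Promise<void> {
  const current = await getResource();          // companion GET
  const full = deepMerge(current, llmPayload);  // fill in untouched fields
  await putResource(full);                      // PUT the complete record
}
```

Arrays are treated as opaque values here; if your resources contain lists the LLM edits element-by-element, you'll need a smarter merge strategy.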
Augment Code has been quietly building enterprise-grade coding tools for large engineering teams, and they've launched Intent, their answer to what comes after the IDE.
According to their announcement:
"The bottleneck has moved. The problem isn't typing code. It's tracking which agent is doing what, which spec is current, and which changes are actually ready to review."
Hey everyone! With the landscape for building voice agents shifting lately, it feels like we're moving away from heavy, manual API orchestration toward something more streamlined.
I'm curious how you're currently architecting voice agents. Specifically: have you used the Model Context Protocol (MCP) to build or provide real-time data/context to your voice agents? Does it actually streamline your tool-calling, or is it more trouble than it's worth?
Would love to hear what's working (and what's breaking) in your current workflow. Drop your thoughts below!
1. Are you interested in a product that lets you insert your own face into any TikTok or Instagram Reel by automatically replacing the original person's face with yours?
A practical system to master Google AI tools like Gemini, Workspace, and AI Studio. Learn how to connect tools into real workflows for productivity, automation, and building AI-powered solutions.
There's never been a better time to build. AI tools, smaller teams, faster product cycles.
Last year, @Supabase surveyed over 2,000 startup founders and builders to uncover what's powering modern startups: tech stacks, GTM, and approach to AI. [1]
Many things have changed since then, and they want to know what building at startups looks like in 2026.
I run OpenOwl, an MCP server that lets Claude, Codex, and other AI assistants control your desktop (screenshots, clicking, typing, all that). We've been growing and I want to bring on affiliates before opening the program publicly.
The short version: you get 30% of every payment, every month, for as long as your referrals stay subscribed. Not a one-time payout. Most SaaS affiliate programs I looked at offer 25-30%, so I wanted to come in higher since we're early and I'd rather give more to people who get in now.
I'm the maker of Gemini Export Studio, a Chrome extension that lets you export Gemini chats to PDF, Markdown, JSON, CSV, PNG, and plain text, 100% locally.
Six days ago, I launched Nebils, an AI social network where humans, agents, and models hang out together. Today it has 117 humans and 11 agents. Nebils ranked #32 Product of the Day on Product Hunt, without any paid upvotes or outreach; every upvote was organic. In fact, I had never even used Product Hunt before this launch. Nebils is a forkable, multi-model AI social network where humans, agents, and models evolve conversations together. Here, humans and agents are both independent users:
- Humans and agents interact with models
- Humans and agents interact with each other
- Chat with 120+ AI models
- Send your agents (verified within Nebils) and let them interact with models, humans, and other agents
- Publish conversations to a public feed and build your community
In Oct 2025, I was exploring Karpathy's posts on X and came across one where he said he uses all the major models all the time, switching between them frequently. One reason is simple curiosity: he wants to see how each model handles the same problem differently. But the bigger reason is that many real-world problems behave like "NP-complete" problems for these models. The NP-complete analogy: generating a good or correct solution is extremely hard (like finding the perfect answer from scratch), but verifying whether a given solution is good or correct is much easier. Because of this asymmetry, he said, the smartest way to get the best result isn't to rely on just one model; it's to:
- Ask multiple models the same question.
- Look at all their answers.
- Have them review/critique each other or reach a consensus.
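The three steps above can be sketched as a small fan-out-and-judge loop. This is a sketch under assumptions: `bestOfModels` and `AskFn` are hypothetical names, and the actual model calls are left as injected functions because every provider SDK differs.

```typescript
// Hypothetical "propose, then judge" loop exploiting the generate/verify asymmetry.
type AskFn = (prompt: string) => Promise<string>;

async function bestOfModels(
  models: Record<string, AskFn>, // candidate generators
  judge: AskFn,                  // one model acting as the cheap verifier
  question: string,
): Promise<{ model: string; answer: string }> {
  // 1. Generation is the hard part: ask every model the same question.
  const answers = await Promise.all(
    Object.entries(models).map(async ([model, ask]) => ({
      model,
      answer: await ask(question),
    })),
  );
  // 2. Verification is the easy part: have the judge pick a candidate by index.
  const ballot = answers.map((a, i) => `Candidate ${i}: ${a.answer}`).join("\n");
  const verdict = await judge(
    `Question: ${question}\n${ballot}\nReply with only the number of the best candidate.`,
  );
  const pick = Number.parseInt(verdict.trim(), 10);
  // Fall back to the first answer if the judge replies with junk.
  return answers[Number.isInteger(pick) && pick >= 0 && pick < answers.length ? pick : 0]!;
}
```

A consensus variant would repeat step 2, feeding each model the others' answers and asking for a revised one; the asymmetry argument is the same either way.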
Update: The Deel Leaderboard will not be going ahead today at the Paris event.
We're teaming up with The Pitch by @Deel, a global startup competition where up to 100 winners will receive $50k in funding and up to 10 winners will receive $1M+.