Most software wants you to come back every day. The business model depends on it. More sessions, more engagement, more opportunities to monetize.
But what happens when your product's purpose is to help someone understand themselves better? At Murror, we've been wrestling with a paradox: if we do our job well, users should eventually need us less, not more.
We just launched Genie - Databox's AI analyst, and it's live on the leaderboard today.
One thing we kept seeing in user research: teams have dashboards but still can't get fast answers. "Why are leads down this week?" still takes hours of manual digging. Genie fixes that: you ask in plain language, and it finds the right metrics, runs the analysis, and returns an answer with a chart. No SQL, no waiting.
A few things I'm curious to hear from this community:
TL;DR: Anthropic refused to sign a contract with the Pentagon that would have allowed the U.S. military to use all of its models without restrictions. Anthropic insisted on an exception — brace yourself — that its models cannot be used: 1) for mass surveillance of citizens, or 2) for autonomous killing. Now the administration is threatening that if Anthropic's founder doesn't change his mind by a certain date, they will come after him.
Google, OpenAI, and Musk's xAI (Grok) have all signed the contract.
Following Sam Altman's announcement, over the past few hours people have been speaking out in large numbers about cancelling their OpenAI subscriptions and switching to Claude.