Working towards launching my app. It's too early for meaningful data, growth trends, or any real signal on what's working, and I'm okay with that.
What I've noticed though is that the internet is full of milestone posts. First 100 users, $10k MRR, viral launches. And when you're pre-data, it's easy to accidentally use someone else's month 18 as your week 1 benchmark.
I'm not losing sleep over it, but it did get me thinking about how founders define meaningful progress before the numbers are there to tell the story.
My current approach is to stay focused on qualitative signals: are the right people finding it, are early users actually engaging, are conversations happening? But I'm curious what others have done:
Early-stage founders often try to improve their product as much as possible and tend to take almost any feedback into account.
Sometimes they end up adding every feature users (even non-paying ones) ask for, even when those features are unnecessary. The product then becomes more complicated and harder to use.
And I'm not even talking about the stage when the product is already established. At that point, there are more users, and their expectations start to differ.
TL;DR: Anthropic refused to sign a contract with the Pentagon that would have allowed the U.S. military to use all of its models without restrictions. Anthropic insisted on an exception, and brace yourself: its models cannot be used 1) for mass surveillance of citizens, or 2) for autonomous killing. Now the administration is threatening that if the founder of Anthropic doesn't change his mind by a certain date, they will come after him.
Google, OpenAI, and Musk (Grok) have all signed the contract.
In the hours since Sam Altman's announcement, people have been speaking out in large numbers about cancelling their OpenAI subscriptions and subscribing to Claude.
When mass layoffs started in tech, many people suggested that:
The layoffs were happening because, during COVID, companies hired too many people for online and remote roles.
That AI was taking jobs.
And I still keep seeing statements from creators of various AI tools saying: No, AI won't replace you. Employees will just have time for more meaningful tasks in a company.
I'm a freelance consultant. Tried Folk, Attio, HubSpot free, Google Sheets. Never stuck with any of them. The problem wasn't the features, it was that I never went back to the tool.
So I built a CRM inside my AI assistant (Claude + MCP server + Supabase). Six contact lists, email drafting, a Chrome extension that scrapes LinkedIn profiles at $0.001 each. Total cost: $10.
The whole thing lives where I already work. That's why I actually use it.
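To make the shape of the setup concrete, here is a minimal sketch of the contact-store layer. This is not the author's actual code: the real version sits behind an MCP server and talks to Supabase, but stdlib `sqlite3` can stand in for it, and the table and function names below are illustrative assumptions.

```python
import sqlite3

# Stand-in for the Supabase table; in the real setup the MCP server
# would query Supabase instead of a local SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE contacts (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        linkedin_url TEXT,
        list_name TEXT NOT NULL  -- one of the six contact lists
    )
""")

def add_contact(name: str, linkedin_url: str, list_name: str) -> int:
    """Tool the assistant would call after the extension scrapes a profile."""
    cur = conn.execute(
        "INSERT INTO contacts (name, linkedin_url, list_name) VALUES (?, ?, ?)",
        (name, linkedin_url, list_name),
    )
    conn.commit()
    return cur.lastrowid

def list_contacts(list_name: str) -> list[tuple]:
    """Tool the assistant would call to pull a list before drafting emails."""
    return conn.execute(
        "SELECT name, linkedin_url FROM contacts WHERE list_name = ?",
        (list_name,),
    ).fetchall()

add_contact("Ada Example", "https://linkedin.com/in/ada-example", "prospects")
print(list_contacts("prospects"))
```

The point of the design is that these functions are exposed to the assistant as tools, so adding or querying contacts happens in the same chat window where the rest of the work already happens.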
Today, I'm covering a slightly more relaxed and bizarre corner.
The internet is full of things that are either amusing or scary, but mostly things that capture something outside the norm (and over time, even these weird things tend to become normalised).
But I recently came across an article describing how someone used Claude Code to access robot vacuum devices across 24 countries and potentially observe their environments.
I continue building UI pages for my email marketing platform using AI, and I've been wondering: are AI-generated UI designs good enough for production apps?
They seem to work great on mobile devices and desktop screens. They load fast. They behave normally. I'm personally perfectly fine and happy to use them in production, but what is your opinion? Am I missing something? Should I be worried about using them in production?

To build UI I use Lovable and Google Stitch. I go feature by feature, and for each feature I create a separate Git branch. This way I'm more confident that one update won't break my entire website.

P.S. The attached screenshot is a work-in-progress design that I created using Google Stitch for the Email Automation and Sequencing feature.
A couple of weeks ago, Boris Cherny (the creator of Claude Code) shared a bunch of really useful tips on getting the most out of Claude Code. #1 at the top of the list: do more in parallel. He himself runs 10-15 Claude Code instances in parallel.
His advice and practice make sense: coding agents give us the ability to scale almost infinitely. At this point, the only real limiter is our own ability to manage all of these agents.
At the beginning of the year, two co-founders reached out to me because they wanted to scale their personal LinkedIn profiles. The reason: in a few months, they're planning to raise funding and believe their personal brands could help.
A few days ago, another founder contacted me with a similar intention, although he's not planning to raise funding. For him, LinkedIn has become the platform that generates the most leads. He doesn't particularly enjoy the network itself, but he still wants to keep building it.
I'm increasingly noticing a trend: people use AI for almost everything, especially for writing text. It's nothing new, but it's starting to get annoying.
The problem is that AI often:
- fully or largely replicates existing text without adding anything new
- adds completely pointless things, like a two-line comment
- writes extremely long comments that no one will actually read
I'm a self-taught dev and former fuel salesman (yes, really). I started coding about 4 years ago, working evenings and weekends with a couple of friends on a project called Settl. We ran it for 3 years and managed to exit.