Hey everyone, I'm currently building a small SaaS product, and I've started thinking seriously about monitoring. Most tools (Datadog, New Relic, etc.) feel built for larger teams: powerful, yes, but also complex and expensive. So I'm curious: what do you actually monitor in your small or solo SaaS? Do you track uptime only? Latency? Do you rely on logs? At what point does monitoring start to feel like overkill? I'm trying to understand what is truly essential vs. what is just enterprise noise. Not selling anything, just genuinely curious how other indie builders approach this. Would love to hear your setups.
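For what it's worth, the bare-minimum end of the spectrum can be sketched in a few lines. This is just my own illustration, not any particular tool: the `Check` record, the 3-failure alert rule, and the thresholds are all assumptions.

```python
# Minimal solo-SaaS monitoring sketch: uptime (with a consecutive-failure
# rule so one blip doesn't page you at 3 a.m.) plus a rough p95 latency.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool           # did the health endpoint return 200?
    latency_ms: float  # how long the request took

def should_alert(history: list[Check], fail_streak: int = 3) -> bool:
    """Alert only after `fail_streak` consecutive failures."""
    if len(history) < fail_streak:
        return False
    return all(not c.ok for c in history[-fail_streak:])

def p95_latency(history: list[Check]) -> float:
    """Rough p95 over successful checks -- enough to spot slow drift."""
    ok = sorted(c.latency_ms for c in history if c.ok)
    if not ok:
        return 0.0
    return ok[min(len(ok) - 1, int(0.95 * len(ok)))]
```

Run something like this on a cron against your health endpoint and you cover uptime and latency without any vendor at all; everything beyond that is where the "overkill" question starts.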
Today I'm celebrating a big milestone: 365 days in a row active on Product Hunt. Honestly, it's quite a number given how many things happened in my life this year; it feels like I could write a book about it.
But consistency takes effort. Here are 3 things that helped me:
Hey, all!
There are many productivity tools out there offering different features, but I still haven't found one that covers everything I want. To me, the essential features a perfect tool must have are:
- A good text editor (I mean, I love Notion, but why is it so hard to get text out of it into other formats? Sometimes I summarize things inside Notion, and getting them into an actual doc or PDF is so painful!)
- Simple, minimalistic design
- A good place for random thoughts and things I need to get done, without losing sight of them
- Good integrations!! I use Gcal a lot, so I would love productivity software that actually integrates with it.

What about you?
Today, I read in TechCrunch that India has an ambition to "compete" with the US and China in the startup scene:
India has updated its startup rules to better support deep tech companies in sectors like space, semiconductors, and biotech, which take longer to mature.
Do you have a particular WHY or reason for your side projects? If not, how do you personally stay disciplined, motivated, and focused on achieving your side-project goals?
Here's something uncomfortable I've learned building AI agent systems:
AI rarely fails at the step we're watching.
It fails somewhere quieter: a retry that hides a timeout, a queue that grows a little every hour, a memory leak that only matters at scale, a slow drift that looks like normal variation until it's too late.
Most teams measure accuracy. Some measure latency.