Mine says 68 today. Green zone apparently. But 68 out of what? Compared to what baseline? Why 68 and not 71? Should I train hard or take it easy? The app doesn't say. It just shows me the number and a vague color.
Most wearables give you a score with zero explanation of how they got there. Black box. Proprietary algorithm, tuned for some average user that probably isn't you. You either trust it or you don't, but you have no way to verify it either way.
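For contrast, here's roughly what an explainable score could look like: a minimal sketch (the weights, fields, and 0-100 mapping are all my own assumptions, not any vendor's algorithm) that scores today's metrics against your own rolling baseline and tells you why.

```python
# Hypothetical explainable readiness score -- illustrative only.
from statistics import mean, stdev

def readiness(hrv_today: float, rhr_today: float,
              hrv_history: list[float], rhr_history: list[float]) -> dict:
    """Score today's metrics against YOUR rolling baseline, not a population average."""
    def z(value, history):
        mu, sigma = mean(history), stdev(history)
        return (value - mu) / sigma if sigma else 0.0

    hrv_z = z(hrv_today, hrv_history)   # HRV above your baseline is good
    rhr_z = z(rhr_today, rhr_history)   # resting HR above your baseline is bad

    # Arbitrary weights, mapped into 0-100 around a neutral 50.
    composite = 0.6 * hrv_z - 0.4 * rhr_z
    score = max(0, min(100, round(50 + 15 * composite)))
    return {
        "score": score,
        "why": f"HRV {hrv_z:+.1f} SD vs your 30-day baseline, "
               f"resting HR {rhr_z:+.1f} SD vs baseline",
    }
```

Even a toy like this answers the questions the app won't: 68 out of 100, relative to your own last 30 days, and here's which input moved it.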
I've been testing this with an AI agent we use for outbound workflows.
The agent's job is simple: take a lead, generate a personalized outreach email, and send it.
Before: The agent only had access to the lead's basic details (name, company, role) and a prompt to write the email. Output was consistent, clean, and predictable (though the personalisation was limited).
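For context, the "before" setup looked roughly like this. A minimal sketch assuming an OpenAI-style client; the model name and helper are placeholders, not our actual production code.

```python
# "Before" state: the agent only sees basic lead fields plus a writing prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_outreach(lead: dict) -> str:
    prompt = (
        f"Write a short, friendly outreach email to {lead['name']}, "
        f"a {lead['role']} at {lead['company']}. "
        "One clear ask, under 120 words."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# With only name/role/company as inputs, the output is consistent and clean,
# but there is simply nothing deeper for the model to personalize on.
```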
I work at a 20-person startup. @Linear is our product development system, and we really enjoy it.
Currently, our operations team receives support requests via personal email and posts them in a Slack channel, where developers pick up issues. We'd like to upgrade to a more comprehensive system with a shared inbox, integration with Linear, and ticket creation automated as much as possible. It would be great to have a help center (documentation) as part of the offering, too.
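To sketch the automation piece: Linear exposes a GraphQL API, so a shared-inbox webhook could create issues directly. This is a hedged sketch; the team ID, auth header, and webhook payload shape are assumptions to verify against Linear's docs.

```python
# Turn an inbound support email into a Linear issue via the GraphQL API.
import os
import requests

LINEAR_URL = "https://api.linear.app/graphql"

def create_linear_issue(subject: str, body: str, team_id: str) -> str:
    mutation = """
    mutation($input: IssueCreateInput!) {
      issueCreate(input: $input) { success issue { identifier url } }
    }"""
    resp = requests.post(
        LINEAR_URL,
        headers={"Authorization": os.environ["LINEAR_API_KEY"]},
        json={
            "query": mutation,
            "variables": {"input": {"teamId": team_id,
                                    "title": subject,
                                    "description": body}},
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["issueCreate"]["issue"]["url"]

# A shared-inbox tool's "new email" webhook (shape assumed) could then call:
# create_linear_issue(email["subject"], email["text"], team_id="YOUR_TEAM_UUID")
```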
AI agents are increasingly making real decisions in businesses. They qualify leads, respond to customers, analyze data, and sometimes trigger actions that affect revenue or customer experience. As these systems move from suggesting to actually deciding, mistakes become inevitable.
When that happens, responsibility becomes unclear. The user configured the system, the company built the product, and the underlying models often come from another provider. If an AI agent makes the wrong call and it impacts a customer or revenue, where should accountability actually sit?
Curious how others are thinking about this. Who should be responsible in such cases, and are there any legal guidelines or draft regulations emerging around this?
Let me start from the creator's perspective: I personally don't have a product (apart from hiring people for creative work or offering personal consultations).
But as a creator, I constantly share content, insights, and information for free; that value helps me build trust. Based on that perceived expertise, people eventually decide to work with me (a paid service).
Before AI, I always thought I would NEVER learn how to code. I genuinely admired technical people; watching them code felt like watching magic. I remember wishing that maybe one day, I could do something like that too.
I've never had any formal education in programming, and I had zero experience building apps. But with AI, I was able to start from just an idea and slowly figure things out on my own: experimenting, setting things up, and eventually creating my first interface that I could actually interact with.
It honestly felt magical. It made me realize how fast the world is changing. Coding is no longer something completely out of reach. AI is making it possible for people like me to turn ideas in our heads into real, tangible drafts for the first time.
I have been thinking about situations where clients specifically ask for AI agents to simplify a process. On the surface, it sounds reasonable. They want something intelligent to classify, route, or decide. But when we go deeper into the actual workflow, we often find that the logic is completely structured. It might just be routing leads based on budget, geography, or service type. In those cases, a simple if-else condition or a fetch record from a table would solve the problem cleanly.
Another common case is using AI to analyze structured form submissions. If the inputs are predefined dropdowns and checkboxes, there is nothing to interpret. A fetch record or rule-based filter is cleaner, cheaper, and easier to maintain.
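To make that concrete, here is roughly what the non-AI version looks like; the field names and thresholds are invented for illustration.

```python
# Deterministic lead routing: no prompt, no tokens, fully testable.
def route_lead(lead: dict) -> str:
    if lead["budget_usd"] >= 50_000:
        return "enterprise-team"
    if lead["region"] == "EMEA":
        return "emea-team"
    if lead["service"] == "consulting":
        return "services-team"
    return "general-queue"

# Every decision is inspectable and free per call; the model can never
# "interpret" a dropdown value into the wrong queue.
assert route_lead({"budget_usd": 80_000, "region": "NA", "service": "saas"}) == "enterprise-team"
```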
So the real question is this: are we adding AI agents because they actually do the job better, faster, or more efficiently? Or are we just throwing AI into the mix because it sounds cool and everyone else is doing it?
I recently saw a marketer with 10k+ followers launch on Product Hunt and finish 6th with 348 upvotes. They followed a proper pre-launch and post-launch plan, did everything right, and still the outcome felt unpredictable.
Now I'm launching @Curatora next week.
I'm not a marketer. I have a little over 1k followers. Of course, asking for support helps. But I also keep hearing that a large part of the Product Hunt community shows up mainly for their own launch, then goes quiet until the next one.
That makes me wonder: how much of success here is strategy, and how much is timing and network effect?
Lately it feels like every week there's a new AI-powered SaaS launching.
Same landing page formula. Same promises. Same 10x productivity pitch.
And what's interesting is that the number of products keeps increasing, but I'm not sure demand is increasing at the same rate. It feels like we're repackaging the same value with slightly different positioning.
Last year we hired a design agency to build our marketing site for @Basedash. They did an incredible job. The headline makes it sound like I'm dunking on them, but I'm not. The site was genuinely great. They built it in Framer so we could manage content ourselves, which was a completely reasonable bet at the time (and something we explicitly asked for).
Today, I read in TechCrunch that India has an ambition to "compete" with the US and China in the startup scene:
India has updated its startup rules to better support deep tech companies in sectors like space, semiconductors, and biotech, which take longer to mature.
AI is everywhere right now - from copilots and chat assistants to analytics, research, and planning tools. But beyond the hype, I'm curious about what's truly useful in day-to-day product work.
From a PM or founder perspective:
Where has AI genuinely saved you time?
What tasks do you trust AI with - and what do you never delegate?
Has AI changed how you write specs, manage roadmaps, or talk to users?
What AI use cases sounded great in theory but failed in practice?
Personally, I see a lot of potential, but also a lot of noise. I believe that in the future, AI should help us much more: create good roadmaps, convert product specs into concrete tasks, prioritise them, assign people, push for realisation, and much more.
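To show what I mean by the specs-into-tasks part, a minimal sketch assuming an OpenAI-style client; the model name and the JSON shape are placeholders I picked for illustration, not an established workflow.

```python
# Ask a model to break a product spec into structured, prioritised tasks.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def spec_to_tasks(spec: str) -> list[dict]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "Break this product spec into tasks. Return JSON like "
                '{"tasks": [{"title": str, "estimate_days": int, '
                '"priority": "P0"|"P1"|"P2"}]}\n\n' + spec
            ),
        }],
    )
    return json.loads(resp.choices[0].message.content)["tasks"]
```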
We've worked in two other ecosystems (India & France), and each has clear strengths and trade-offs: talent density, cost of building, access to capital, speed of decision-making, and openness to risk all vary a lot.
Curious to hear from founders and operators who've built outside the US:
Which ecosystem punches the most above its weight today?
Where do you see the best balance between talent, capital, and customer access?
Are there cities/countries that are especially strong for specific stages (0→1 vs scaling) or specific verticals (AI, fintech, climate, SaaS, deep tech)?
I've built my product around traditional SaaS pricing (monthly tiers), but I'm starting to wonder if that model is getting outdated, especially with more AI-powered and compute-heavy tools entering the market. Moving to usage-based pricing requires real architectural changes: instrumentation, metering, billing logic, and UI changes, not just pricing tweaks. It's something I'm starting to seriously think about for my own product.
In particular, AI usage has real COGS (every prompt costs money), and I'm seeing more platforms experimenting with usage-based models, or hybrids like SaaS base + usage + overage.
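To make "instrumentation and metering" concrete, here's a rough sketch of per-prompt cost tracking feeding a base + usage + overage invoice. The prices, schema, and included allowance are all illustrative assumptions.

```python
# Record the token cost of every prompt, then bill base + overage monthly.
import sqlite3
import time

PRICE_PER_1K = {"input": 0.005, "output": 0.015}  # assumed $/1k tokens

db = sqlite3.connect("usage.db")
db.execute("""CREATE TABLE IF NOT EXISTS usage
              (customer_id TEXT, ts REAL, input_tokens INT,
               output_tokens INT, cost_usd REAL)""")

def record_usage(customer_id: str, input_tokens: int, output_tokens: int) -> float:
    cost = (input_tokens / 1000) * PRICE_PER_1K["input"] \
         + (output_tokens / 1000) * PRICE_PER_1K["output"]
    db.execute("INSERT INTO usage VALUES (?, ?, ?, ?, ?)",
               (customer_id, time.time(), input_tokens, output_tokens, cost))
    db.commit()
    return cost

def invoice(customer_id: str, base: float = 49.0, included_usd: float = 10.0) -> float:
    """Monthly bill = flat subscription + metered cost beyond the included allowance."""
    (total,) = db.execute("SELECT COALESCE(SUM(cost_usd), 0) FROM usage "
                          "WHERE customer_id = ?", (customer_id,)).fetchone()
    return base + max(0.0, total - included_usd)
```

None of this is exotic, but it's exactly the kind of plumbing a flat monthly tier never forced me to build.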
For those of you building AI or compute-intensive tools: