Jude Everett

Market Researcher

About

I study consumer behavior and industry trends. I collect information through surveys and reports to understand what customers actually want.

Badges

Tastemaker
Gone streaking

Forums

We just launched our Alpha and we need your honest feedback.

I built Prodshort because I learned from my previous companies that the hard part is not building but selling.
But because I'm a builder, not a seller, I decided to build something that sells for me.
And because the trend is founder-led marketing, I decided to build something that creates content on your behalf.
But there were already a lot of AI tools out there, so I decided to go the opposite way and make it as authentic as possible.
I want you to create content without even being aware of it.
And honestly, it worked for me. Many people tell me it's amazing, but to keep it honest, NO ONE PAID, and that's the only KPI I'm looking at.
For now, the feedback is that the landing page feels too AI-generated and doesn't reflect the quality of our product.
And builders are socially anxious about sharing their first content.
Let me know what you think https://www.producthunt.com/prod...

What's the one thing you wish you could see inside your AI agent's brain?

I've been building ClawMetry for the past 5 weeks. 90k+ installs across 100+ countries.
The observability features I built first were the ones I personally needed: a live execution graph (Flow tab), full decision transcripts (Brain tab), token cost tracking per session, and visibility into sub-agent spawns.
But I keep hearing variations of the same thing: "I don't really know what my agents are doing." And everyone means something slightly different by that.
For some it's costs. For some it's timing (why did this take 4 minutes?). For some it's trust (did the agent actually do what I think it did?). For some it's failures (where exactly did it break?).
So I want to ask you directly:
If you're running AI agents today -- what's the one thing missing from your observability setup? What would make you feel like you actually understand what's happening inside your agents?
Options I'm thinking about next:
- Alerting (get notified when an agent fails or goes over budget)
- Cost per task breakdown (not just per session)
- Agent run comparisons (before/after a prompt change)
- Memory snapshots (what did the agent "know" at each decision point)
Drop your answer below. The next feature I build will be heavily influenced by this thread.
(ClawMetry is free to try locally: pip install clawmetry. Cloud: app.clawmetry.com, $5/node/month, 7-day free trial.)
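For anyone curious what "token cost tracking per session" means in practice, here's a minimal sketch of the idea. This is not ClawMetry's actual API; the class name and the per-1K-token prices are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical pricing in dollars per 1K tokens (illustrative only,
# not ClawMetry's or any provider's actual rates)
PRICE_PER_1K = {"input": 0.003, "output": 0.015}

@dataclass
class SessionCostTracker:
    """Accumulates token usage across an agent session and reports cost."""
    input_tokens: int = 0
    output_tokens: int = 0

    def record(self, input_tokens: int, output_tokens: int) -> None:
        # Call once per LLM call the agent makes during the session
        self.input_tokens += input_tokens
        self.output_tokens += output_tokens

    @property
    def cost(self) -> float:
        return (self.input_tokens / 1000 * PRICE_PER_1K["input"]
                + self.output_tokens / 1000 * PRICE_PER_1K["output"])

tracker = SessionCostTracker()
tracker.record(1200, 300)  # first agent step
tracker.record(800, 450)   # second agent step
print(f"Session cost: ${tracker.cost:.4f}")  # prints "Session cost: $0.0173"
```

A per-task breakdown (one of the options above) would just mean keeping one such accumulator per task ID instead of one per session.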

"Consilium Belli" – Summoning the Roman War Council to stress-test my landing page & business model

Roman generals never went into battle with an untested plan. They convened a consilium, a no-holds-barred council of war where officers could openly criticize strategy, expose flaws, and prevent stupid mistakes before it was too late.

I'm doing the same.
