Launched this week

ClawMetry for OpenClaw
Real-time observability dashboard for OpenClaw AI agents
317 followers
ClawMetry for OpenClaw
@vivek_chand Hi Vivek. Congratulations on your launch. Can we detect when an agent deviates from its original task objective mid-execution?
@vivek_chand Hi Vivek - this is exactly what agent workflows need. When sub-agents start spawning, visibility becomes critical. Seeing tool calls, commands, costs, and system health in one place makes the whole system far more trustworthy.
Observability for agents will only get more important.
I’m building Ahsk, a macOS AI assistant focused on seamless AI in daily workflows. Would love to connect and exchange feedback.
Observability for AI agents is something most builders overlook until things break in production. Love that you're tackling this early. How do you handle tracing when agents make multiple chained calls? In my experience building database agents, the hardest part to debug is when the agent interprets a query correctly but chains the wrong follow-up action. Clean dashboards for that would be a game changer.
ClawMetry for OpenClaw
@saezbaldo Great question, Damian. Tracing chained agent calls is exactly where most logging tools fall short.
ClawMetry tracks every tool call, sub-agent spawn, and session handoff with full context. So when agent A calls agent B which queries a database and picks the wrong follow-up, you can trace the entire chain: what each agent saw, what it decided, and where it went wrong.
The cron management dashboard also helps here. You can see scheduled tasks, their run history, and drill into individual executions. No more guessing which step broke.
We're actively building deeper tracing for multi-agent workflows. Would love your feedback if you try it out: pip install clawmetry and you're up in 30 seconds.
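To make the chain-tracing idea concrete, here's a rough mental model of what such a trace might record. This is purely illustrative; ClawMetry's actual event schema isn't documented in this thread, so every field name below is an assumption:

```python
# Hypothetical shape of a chained-call trace (illustrative only;
# ClawMetry's real schema may differ). Each event records which
# agent acted, what it saw, and what it decided.
trace = [
    {"agent": "A", "event": "tool_call", "tool": "spawn_sub_agent", "target": "B"},
    {"agent": "B", "event": "tool_call", "tool": "db_query", "input": "SELECT ..."},
    {"agent": "B", "event": "decision",  "action": "drop_table", "parent": "A"},
]

# Walking back from the bad decision recovers the full context:
# what agent B saw, and which parent handed it the task.
bad = trace[-1]
chain = [e for e in trace if e["agent"] in (bad["agent"], bad.get("parent"))]
print(len(chain))  # 3
```

The useful property is that the faulty follow-up action is never viewed in isolation: it carries a pointer back to the parent agent and the inputs that led there.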
Thanks @vivek_chand ! The full chain tracing sounds exactly right — especially the part about tracking what each agent saw vs what it decided. That's where most debugging falls apart.
Curious: when you trace a chain where agent B picks the wrong follow-up action, does ClawMetry also capture whether B had the authority to take that action in the first place? That's been our obsession — the gap between "the agent did X" and "the agent was authorized to do X."
Will definitely give it a try. Tracing + authority control feels like a natural combo.
Oh this is exactly what I needed. I run a few OpenClaw agents for different tasks and honestly the biggest pain point has been figuring out where my tokens are going. Especially with sub-agents spawning other sub-agents, costs spiral fast and you have zero visibility into it. The live flow visualization sounds great for debugging too - sometimes an agent gets stuck in a loop and I don't realize until the bill comes. Does it track per-agent cost breakdowns or is it more of an aggregate view? Also curious if there's any alerting for anomalous token spikes.
ClawMetry for OpenClaw
@mykola_kondratiuk Hey Mykola! Glad it resonates, that's exactly the pain I built it for.
To answer your questions:
Per-agent cost breakdowns: Yes. ClawMetry shows cost per session, per model, and per tool call. So if one sub-agent is burning 10x more tokens than another, you'll see it immediately. One user actually discovered a 3x token inefficiency in a single agent and cut costs by 40% with one prompt adjustment.
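As a sketch of what a per-agent breakdown involves under the hood (the data and names here are made up for illustration, not ClawMetry's API):

```python
# Illustrative only: aggregating cost per agent from raw call
# records -- the kind of breakdown a dashboard like ClawMetry
# surfaces so one token-hungry sub-agent stands out immediately.
from collections import defaultdict

calls = [
    {"agent": "researcher", "tokens": 12_000, "cost_usd": 0.18},
    {"agent": "summarizer", "tokens": 2_500,  "cost_usd": 0.04},
    {"agent": "researcher", "tokens": 9_000,  "cost_usd": 0.14},
]

totals = defaultdict(float)
for call in calls:
    totals[call["agent"]] += call["cost_usd"]

# Sort descending by spend so the most expensive agent is listed first.
for agent, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{agent}: ${cost:.2f}")
```

Grouping by session, model, or tool call is the same aggregation with a different key.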
Alerting for anomalous token spikes: Not yet in the open source CLI, but this is exactly what we're building into the native app (iOS/Mac, coming very soon). The vision is a rules engine where you can set things like "alert me if any session exceeds $5" or "auto-reject if a sub-agent spawns more than 3 levels deep." Eventually it learns your patterns and auto-approves safe actions.
The stuck-in-a-loop problem you mentioned is a perfect use case. With ClawMetry you'd see the token counter climbing in real-time and can kill the session before it drains your wallet.
pip install clawmetry and you'll have visibility in 30 seconds. Would love to hear how it works with your setup!
ResumeUp.AI
Congrats on the launch 🚀
Finally, real visibility into what agents are actually doing. Super useful for anyone building with OpenClaw.
ClawMetry for OpenClaw
@rohithreddy Thank you Rohith! That means a lot. Visibility into what the agent is doing was the #1 thing I was missing when I started building with OpenClaw, so I built it. Glad it resonates. Would love to hear how you use it with your setup!
Nice work @vivek_chand, the zero-config install is brilliant for developer adoption. The real-time flow visualization looks clean. How are you planning to drive initial awareness beyond the OpenClaw community?
Congrats on the launch @vivek_chand! Love the real-time visibility approach, solves a real pain point. How are you planning to reach developers beyond Product Hunt?
ClawMetry for OpenClaw
@austinelvis Thanks Austin! Appreciate the kind words on the zero-config install and real-time flow viz. For awareness beyond the OpenClaw community, I'm taking a few angles: content (writing about the agent observability problem on dev blogs and Reddit), integrations (making ClawMetry work with other agent frameworks, not just OpenClaw), and word of mouth from builders who actually use it. The insight is that anyone running AI agents hits the "what is it actually doing?" wall eventually, so we're meeting them where that pain shows up. Open to ideas if you've seen what works!
Hi @vivek_chand ,
Came across ClawMetry on Product Hunt — really like the clarity of “Know what your agents are doing. Right now.” That line immediately communicates urgency and control.
One thought I had while exploring the page — are you intentionally positioning it strictly around OpenClaw users, or do you see it evolving into a broader “agent observability layer” narrative?
The core pain you’re solving (visibility into agent behavior, token burn, tool usage) feels bigger than a single ecosystem. It could potentially resonate with teams deploying AI agents in production more broadly.
Either way, very clean execution — especially the directness of the messaging.
ClawMetry for OpenClaw
@harsh_upadhyay10 Thanks Harsh! Really appreciate the kind words on the messaging.
You're reading my mind. The core problem, visibility into agent behavior, token burn, tool usage, is definitely framework-agnostic. I started with OpenClaw because that's where I live day-to-day, but the vision is absolutely a broader agent observability layer.
In fact, Nanobot and PicoClaw support are already in the works. The plan is to keep expanding to more agent frameworks from there. If you're running agents in production on a different stack, I'd love to hear what metrics and visibility you'd find most useful.
@harsh_upadhyay10 @vivek_chand Please let us know about the Nanobot version! Would be fantastic - I think building out crons in a calendar view would be particularly helpful.