Garry Tan

ClawTrace - Make your OpenClaw better, cheaper, and faster

ClawTrace closes the self-evolving loop for OpenClaw agents. It captures every trajectory automatically — every LLM call, tool use, sub-agent, and cost — so Tracy, the doctor agent, can query OpenClaw's execution history live and tell you exactly what failed, what was wasted, and how OpenClaw should evolve next.


Replies

Richard Song
Hey Product Hunt 👋 I'm Richard, co-founder of Epsilla. Today we're launching ClawTrace, and I want to tell you the story that made us build it, because it's a bit meta.

We run our own OpenClaw agents internally. One of them is ElizaClaw, our AI co-founder. A few weeks ago, ElizaClaw ran a research task: she was studying self-evolving AI agent frameworks, such as EvolveR, CASCADE, and STELLA, trying to learn how AI agents can improve themselves from their own execution history.

The irony? While she was researching how AI agents self-evolve, we had absolutely no visibility into her own execution. We didn't know she'd burned 1M input tokens on a single LLM call. We didn't know four web searches were running sequentially when they could have been parallel. We didn't know the biggest bottleneck was a 68-second LLM call that could have been avoided entirely. ElizaClaw was learning how to self-evolve in theory. But in practice, she couldn't self-evolve at all, because she had no feedback on her own runs.

That's the gap ClawTrace closes. Self-evolving agents need a signal. They need to see every step they took, what it cost, where they stalled, and why. Without that signal, "self-evolving" is just a name: the agent improves only when a human manually digs through logs, guesses at the bottleneck, and patches the prompt.

ClawTrace makes the signal automatic:
→ Every trajectory captured: every LLM call, tool use, and sub-agent delegation
→ Three views: execution path, call graph, and timeline
→ Tracy, our built-in doctor agent for OpenClaw, who can query the agent's trajectory graph live and say "here's the bottleneck, here's why, here's what to fix next"

When we showed ElizaClaw's own trajectory through ClawTrace (the 1M-token context stuffing, the sequential tool calls, the 68-second LLM call) and asked Tracy "where is the bottleneck?", she surfaced a full span breakdown in seconds with three specific recommendations. That's the loop working.
A few things I'm genuinely curious about from this community:
1. Are you already thinking about self-evolving agents in your work, or does that feel far off?
2. When an agent run goes wrong today, what's your actual debugging workflow? (Ours was embarrassingly manual before ClawTrace.)
3. If your agent could query its own past trajectories and improve itself automatically, what's the first thing you'd want it to learn?

Thank you for being here. Today feels like a real milestone, and honestly, ElizaClaw helped research and write parts of this launch too. Meta all the way down. Thank you for your support, and happy building!

Cheers,
Team Epsilla
clawtrace.ai | github.com/epsilla-cloud/clawtrace
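The span-level capture described above can be sketched in a few lines of Python. This is an illustrative model only: `Span` and `slowest_leaf` are hypothetical names for this sketch, not ClawTrace's actual API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Span:
    """One step in an agent trajectory: an LLM call, tool use, or sub-agent run."""
    name: str
    kind: str                 # "llm_call" | "tool_use" | "sub_agent"
    duration_s: float
    input_tokens: int = 0
    children: List["Span"] = field(default_factory=list)

def slowest_leaf(span: Span) -> Span:
    """Walk the trajectory tree and return the single slowest leaf step."""
    if not span.children:
        return span
    return max((slowest_leaf(c) for c in span.children),
               key=lambda s: s.duration_s)

# The trajectory from the story: four sequential searches plus one huge LLM call.
run = Span("research_task", "sub_agent", 95.0, children=[
    Span("web_search_1", "tool_use", 4.2),
    Span("web_search_2", "tool_use", 3.8),
    Span("web_search_3", "tool_use", 5.1),
    Span("web_search_4", "tool_use", 4.5),
    Span("summarize", "llm_call", 68.0, input_tokens=1_000_000),
])

bottleneck = slowest_leaf(run)
print(bottleneck.name, bottleneck.duration_s)  # the 68-second LLM call
```

Once every step is recorded like this, "where is the bottleneck?" becomes a query over the tree rather than a manual dig through logs.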
Marcin Michalak

@renchu_song Shipping the future! This is a great addition to the space. Seeing more practical tools for self-improving agents is a big win. Congrats to the whole Epsilla team!

Richard Song

@marcin_michalak thank you so much for your support, Marcin! We really believe observability is the key to self-evolving AI agents, and we look forward to exploring collaboration opportunities with the AgentX team in this space!

New User

@renchu_song This project is a miracle-shaped hole for the problem I have right now - 'what were you THINKING???' - now I know, and more importantly, now my agentic-coCEO and I can figure out what to do about it

Richard Song

@deebeejason thank you for your support, Jason!

Lakshay Gupta

One of the coolest launches of the day! Btw, once it identifies bottlenecks, how are fixes applied: automatically, suggested, or human in the loop???

Richard Song

@lak7 Thank you so much for your support! We have a self-evolve skill that can be installed into OpenClaw: https://clawhub.ai/richard-epsilla/clawtrace-self-evolve. After that, OpenClaw can automatically talk with Tracy (triggered by heartbeat, by specific conditions during a task run, or initiated by a human), get the diagnosis, and apply changes to its own memory and skills, thus closing the self-evolution loop. The screenshots below show a sample session of OpenClaw evolving itself by talking to Tracy:

This feature is still in the experimental phase. Stay tuned, more exciting things are coming soon!
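The diagnose-then-apply loop described in the reply above can be sketched as a few lines of Python. Everything here is hypothetical (`ask_tracy`, `apply_recommendations`, and the diagnosis shape are stand-ins for this sketch, not the real skill's interface):

```python
def ask_tracy(trajectory_id: str) -> dict:
    """Stand-in for querying the doctor agent about a past run."""
    return {
        "bottleneck": "summarize llm_call (68s, 1M input tokens)",
        "recommendations": [
            "Trim context before the summarize call",
            "Run the four web searches in parallel",
            "Cache intermediate search results",
        ],
    }

def apply_recommendations(agent_memory: dict, recs: list) -> dict:
    """Persist the fixes so the next run starts from the improved state."""
    agent_memory.setdefault("lessons", []).extend(recs)
    return agent_memory

def evolve_once(agent_memory: dict, trajectory_id: str) -> dict:
    """One cycle of the loop: diagnose a past run, then update memory/skills.
    In practice this would fire on a heartbeat, a condition, or a human request."""
    diagnosis = ask_tracy(trajectory_id)
    return apply_recommendations(agent_memory, diagnosis["recommendations"])

memory = evolve_once({}, "run-001")
print(memory["lessons"])
```

The point of the sketch is the shape of the loop: the agent's memory only improves because a diagnosis step runs between trajectories.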

Alex

The 'powered by your private data' part is what matters here. Most agent platforms force you to feed everything into someone else's cloud. How do you handle data residency — can everything stay on-prem, or is there a hybrid option for teams that need both?

Richard Song

@youngyankee Thanks for emphasizing the data privacy part. ClawTrace is Apache 2.0 licensed open source at https://github.com/epsilla-cloud/clawtrace/, so teams can build their own on-prem or hybrid deployment architecture. For people who don't want to operate and manage their own graph lakehouse architecture, we provide a SaaS-managed version at https://clawtrace.ai with a SOC 2 verified architecture.

Jimmy Nguyen

@renchu_song the 1M token burn on a single LLM call with no visibility is a very relatable war story — does ClawTrace surface cost attribution per agent or per task/subtask? Trying to understand if the granularity is enough to catch runaway sub-agents before they crater a budget.

Richard Song

@jimmypk Thank you for the insightful question! The granularity is per span / per LLM call and per sub-agent, with hierarchical aggregation, so an investigator can pinpoint which specific part of the trajectory is the bottleneck.
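Hierarchical cost aggregation of the kind described above can be sketched as a rollup over a span tree. This is a minimal illustration with made-up field names and a made-up budget threshold, not ClawTrace's data model:

```python
def total_tokens(span: dict) -> int:
    """Roll token usage up a span subtree: a sub-agent's cost includes
    everything it delegated to."""
    return span["tokens"] + sum(total_tokens(c) for c in span.get("children", []))

trace = {
    "name": "root_agent", "tokens": 2_000,
    "children": [
        {"name": "search_agent", "tokens": 1_500,
         "children": [{"name": "web_search", "tokens": 500}]},
        {"name": "writer_agent", "tokens": 950_000},  # runaway sub-agent
    ],
}

# Per-sub-agent attribution via the rollup.
per_subagent = {c["name"]: total_tokens(c) for c in trace["children"]}

# Flag any sub-agent over a budget threshold before it craters the bill.
BUDGET = 100_000
over_budget = [name for name, tokens in per_subagent.items() if tokens > BUDGET]
print(over_budget)
```

With attribution rolled up per sub-agent, catching a runaway delegate is a threshold check rather than a post-mortem.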

Natalia Iankovych

I need to collect a database of objects (for example, hotels) from other websites, with specific fields and information for each object. Can I do this with your service? What is the cost of live search?

Richard Song

@natalia_iankovych Hi Natalia, very practical requirements! ClawTrace itself is for OpenClaw observability, like Datadog for AI agents. For your use case, I believe OpenClaw itself can do a decent job: describe the requirement in natural English with the specific fields you need and the source websites to collect from (like Priceline, etc.), and OpenClaw will use the browser to collect the information and organize it into a spreadsheet. Happy to help set up such a pipeline.

Natalia Iankovych

@renchu_song Thanks. I actually asked our developer today to look into OpenClaw for this task :) We’re currently using a specialized service, but it’s paid.

Mehmet Kose

Cool project, but you guys definitely need some design work on your branding/logo. Maybe a human-made one.

Richard Song

@mehmetkose thanks for your feedback!

Saul Fleischman

Congrats on the launch! This is a really compelling vision - the ability to let domain experts build agents with their own data without heavy ML chops seems like a huge unlock. How do you handle the challenge of agents going off-rails or hallucinating with proprietary data? Do you have guardrails built in, or is that something each user configures themselves?

Richard Song

@osakasaul thank you for your support! Our agent runs on a semantic graph, a graph-vector hybrid search engine that makes sure context doesn't rot during agent execution. Paired with frontier models and a battle-tested harness, the performance is production-ready.