traceAI is OTel-native LLM tracing that actually works with your existing observability stack.
✓ Captures prompts, completions, tokens, retrievals, agent decisions
✓ Follows GenAI semantic conventions correctly
✓ Routes to any OTel backend—Datadog, Grafana, Jaeger, anywhere
✓ Python, TypeScript, Java, C# with full parity
✓ 35+ frameworks: OpenAI, Anthropic, LangChain, CrewAI, DSPy, and more
✓ Two lines of code to instrument your entire app
No new vendor. No new dashboard. Open source (MIT).
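To make the bullet list above concrete, here is a hypothetical sketch of the span attributes an OTel GenAI-semconv trace records for one chat completion. The attribute names follow the OpenTelemetry GenAI semantic conventions the post references; the `record_llm_call` helper is invented for illustration and is not traceAI's actual API.

```python
# Hypothetical sketch, not traceAI's real API: the flat attribute map a
# GenAI-semconv span carries for a single chat completion.
def record_llm_call(model, prompt, completion, input_tokens, output_tokens):
    """Return an attribute map like the one attached to an OTel span."""
    return {
        # Attribute keys from the OTel GenAI semantic conventions
        "gen_ai.operation.name": "chat",
        "gen_ai.request.model": model,
        "gen_ai.prompt": prompt,
        "gen_ai.completion": completion,
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
    }

span_attrs = record_llm_call(
    "gpt-4o", "What is OTel?", "OpenTelemetry is...", 12, 48
)
print(span_attrs["gen_ai.request.model"])  # gpt-4o
```

Because these are plain OTel attributes, any backend that ingests OTLP (Datadog, Grafana, Jaeger) can display them without a custom dashboard.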

traceAI: Open-source LLM tracing that speaks GenAI, not HTTP.
Jaya Surya Putti left a comment
Agent debugging shouldn’t be trial-and-error anymore. This turns intuition into measurable fixes you can actually trust.

Fix My Agent (FMA): See what breaks your AI agent and fix it automatically
Diagnose → Fix → Compare → Ship
When AI agents fail in production, debugging is manual and slow. Evals show what broke, but not why or how to fix it. Teams spend weeks chasing root causes, testing prompts sequentially and swapping models and configs one at a time.
Fix My Agent auto-detects why your AI agents fail (at the system and prompt level), suggests fixes, lets you apply them in one click, compares results in parallel, and ships the best version. Get the most optimized agent in minutes, not weeks.
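The Diagnose → Fix → Compare → Ship loop can be sketched as follows. This is purely illustrative (FMA's internals are not described in the post): `eval_score` is a stand-in for a real eval suite, and the candidate prompts are made up.

```python
# Illustrative sketch of the Diagnose -> Fix -> Compare -> Ship loop,
# not FMA's actual implementation. Candidate fixes are evaluated in
# parallel and the best-scoring variant is "shipped".
from concurrent.futures import ThreadPoolExecutor

def eval_score(prompt_variant):
    # Stand-in eval: here, longer (more specific) prompts score higher.
    return len(prompt_variant)

# Fix: candidate prompt-level fixes suggested after diagnosis
candidates = [
    "You are a helpful agent.",
    "You are a helpful agent. Always cite the retrieved document.",
    "You are a helpful agent. Refuse when retrieval returns nothing.",
]

# Compare: run evals on all candidates in parallel
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(eval_score, candidates))

# Ship: pick the top-scoring variant
best = candidates[scores.index(max(scores))]
print(best)
```

The point of the parallel compare step is exactly what the post contrasts with sequential testing: all candidate fixes are evaluated at once rather than one at a time.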


