
Torrix
Self-hosted LLM observability. Every token. Every dollar.
Most LLM observability tools send your prompts to their cloud. Torrix runs on your server. Add two lines of Python, or route any HTTP client through the proxy with no code changes. Every AI call is logged instantly: tokens, cost, latency, and the full prompt trace. Works with OpenAI, Anthropic, Gemini, Groq, Azure, Mistral, SAP AI Core, n8n, and any HTTP API. The Community edition is free forever. Your data never leaves your infrastructure.
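
For the proxy route, the usual pattern with OpenAI-compatible tooling is to point the client's base URL at the local proxy. A minimal sketch, assuming Torrix listens on localhost:8080 and exposes an OpenAI-compatible /v1 path (both are assumptions here, not the confirmed address; check the docs):

    from openai import OpenAI

    # Point the stock OpenAI client at the local Torrix proxy instead of
    # api.openai.com. The proxy forwards the request to the provider and
    # records tokens, cost, and latency on the way through.
    client = OpenAI(
        base_url="http://localhost:8080/v1",  # assumed proxy address
        api_key="sk-...",                     # your real provider key
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

The same trick works with any SDK that lets you override its base URL, which is why no Torrix-specific code is needed on this path.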

Hey Product Hunt! 👋
I'm Adarsh, a developer and integration consultant at SAP, building Torrix as a side project.
The frustration: every LLM observability tool I tried wanted to store my prompts on its servers. For enterprise work, that's a non-starter.
So I built Torrix, a self-hosted proxy that logs every AI call locally in SQLite. Two lines of Python, or zero code changes via the HTTP proxy. Supports 300+ LLM models. Deploys in 60 seconds with Docker. Free forever.
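Roughly, the two-line Python path is an import plus a setup call. The names below are illustrative placeholders, not the confirmed API; the docs have the real ones:

    # Placeholder names: "torrix" and init() stand in for the actual
    # import and setup call; check the docs at torrix.ai before copying.
    import torrix

    torrix.init()  # assumed behavior: instruments supported LLM clients so
                   # every call is written to the local SQLite log

After that, existing OpenAI/Anthropic/etc. code runs unchanged, and each request shows up with its tokens, cost, and latency.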
Would love your honest feedback, especially from anyone running LLMs in production. What's missing? What would make this a daily driver for you?
Try it: https://torrix.ai