Launching today
Nirixa AI
AI observability & cost intelligence for LLM apps
Nirixa gives AI teams full visibility into every LLM call across OpenAI, Anthropic, Gemini, Groq, and more. Track token cost by feature, detect prompt drift, score hallucination risk, and monitor latency in real time. One SDK. One dashboard. Under 5 minutes to set up.




I run multiple LLM providers in production (Gemini Flash, GPT-4o, GPT-4o-mini) across different parts of my app, and tracking cost per feature has been a nightmare. Right now I'm doing it with spreadsheets and napkin math. The token cost breakdown by feature is exactly what I need. Quick question: does it work with OpenRouter, or just direct provider APIs?
@jarjarmadeit Yes! OpenRouter works out of the box. It uses the OpenAI-compatible format, so Nirixa picks it up automatically with no extra config. Gemini Flash, GPT-4o, and GPT-4o-mini all show up separately in one dashboard. Still under 5 minutes to set up.
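For context on why OpenRouter "just works": it exposes the same chat-completions wire format as OpenAI, so any tool that understands one can parse the other. Here's a minimal stdlib sketch of such a request; the API key and model name are placeholders, and none of this is Nirixa's actual SDK code.

```python
import json

# OpenRouter's OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, user_message: str, api_key: str) -> dict:
    """Build an OpenAI-style chat request targeting OpenRouter.

    The JSON body is identical in shape to one sent to OpenAI directly,
    which is what lets an observability layer treat both the same way.
    """
    return {
        "url": OPENROUTER_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",  # placeholder key below
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

req = build_chat_request("openai/gpt-4o-mini", "Hello!", "sk-or-placeholder")
```

The request is only built here, not sent; the point is that the body's shape (`model` plus a `messages` list) is the shared format both providers speak.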
Hey Product Hunt!
We're Aravind & Sai, builders of Nirixa (Sanskrit for "to observe").
I built this after watching a founder friend get a $4,200 OpenAI bill with zero idea which feature caused it. They had no way to know. That's the problem Nirixa solves.
What we've built:
✓ Token cost breakdown by feature, user & endpoint
✓ Prompt drift detection (alerts when quality shifts)
✓ Hallucination risk scoring per request
✓ Works across OpenAI, Anthropic, Gemini, Groq & more
✓ One SDK. Under 5 minutes to full visibility.
Launching today with a free tier (100K tokens/month).
Two things I'd love from this community:
1. Try it: nirixa.in (free tier)
2. Tell me what you'd want to see next
Happy to answer any questions below!
Hey PH! Sai here. I built Nirixa after getting burned by invisible AI costs one too many times.
The core insight: every AI observability tool today is either provider-specific (so it can't show you cross-provider comparisons) or infra-general (so it doesn't understand LLM-specific concepts like prompt drift or hallucination risk).
Nirixa fills that gap. It's a thin SDK layer that intercepts your LLM calls and tracks:
• Token cost per feature/endpoint/user
• Prompt stability over time (semantic diff engine)
• Hallucination risk score per request
• Cross-provider latency benchmarks
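The "thin SDK layer that intercepts your LLM calls" idea can be sketched as a decorator that wraps any provider call, timing it and recording the usage block it returns. Everything here (`traced`, `TRACE_LOG`, the fake call) is a hypothetical illustration of the interception pattern, not Nirixa's real API.

```python
import time

TRACE_LOG = []  # in a real tool this would ship to a backend, not a list

def traced(feature):
    """Decorator: record latency and token usage for each call under a tag."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            TRACE_LOG.append({
                "feature": feature,
                "latency_s": time.perf_counter() - start,
                # Assumes the response carries an OpenAI-style "usage" dict.
                "usage": result.get("usage", {}),
            })
            return result
        return inner
    return wrap

@traced("summarize")
def fake_llm_call(prompt):
    # Stand-in for a real provider call, returning an OpenAI-style response.
    return {"text": "...", "usage": {"prompt_tokens": 12, "completion_tokens": 34}}

fake_llm_call("hello")
```

Because the wrapper only looks at timing and the response's usage block, the same pattern works across providers that return OpenAI-compatible responses, which is what makes cross-provider latency and cost comparisons possible from one interception point.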
We're live now. Drop your questions below, especially if you're skeptical. Those are the conversations I learn the most from.