Launched this week

ZenMux
An enterprise-grade LLM gateway with automatic compensation
1K followers
ZenMux is an enterprise-grade LLM gateway that makes AI simple and assured for developers through a unified API, smart routing, and an industry-first automatic compensation mechanism.
Does ZenMux's credit compensation trigger on latency spikes the same way it does on hallucinations? That threshold is where the value gets real. Feeding compensated cases back so teams can fine-tune against their own failure modes is what makes the insurance self-improving.
ZenMux
@piroune_balachandran You hit the nail on the head! 🎯
You completely understand our long-term vision. Ultimately, we see ZenMux as a data flywheel company designed to help developers build their own data flywheels.
The core mechanism driving this is the continuous refinement of our compensation algorithms. We're planning to introduce both active and passive compensation mechanisms in the future. We view every compensated event as a high-value corner case — by feeding this data back, teams can fine-tune against their specific failure modes and steadily improve product quality.
Thanks for seeing the bigger picture!
Lancepilot
Congrats on the launch, ZenMux.
While everyone is building on LLMs, you’re building the backbone. Unified, intelligent, and enterprise-ready: that’s how real AI infrastructure scales.
Wishing you powerful integrations and unstoppable momentum ahead.
ZenMux
@priyankamandal
Thanks so much! That means a lot — we really believe infrastructure should be boring (in a good way), so developers can focus on the fun part. Appreciate the support! 🙏
Elisi: AI-powered Goal Management App
Multiple suppliers for the same model + auto failover = no more "our model provider is down" incidents.
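The failover idea above can be sketched in a few lines. This is a minimal illustration of trying multiple suppliers of the same model in order, not ZenMux's actual routing logic; the supplier names and error handling are invented for the example:

```python
class SupplierDown(Exception):
    """Raised when a model supplier is unavailable."""
    pass

def call_supplier(name: str, prompt: str, down: set[str]) -> str:
    """Simulated provider call; raises if the supplier is marked down."""
    if name in down:
        raise SupplierDown(name)
    return f"{name}: response to {prompt!r}"

def complete_with_failover(prompt: str, suppliers: list[str], down: set[str]) -> str:
    """Try each supplier hosting the same model until one succeeds."""
    last_error = None
    for name in suppliers:
        try:
            return call_supplier(name, prompt, down)
        except SupplierDown as err:
            last_error = err  # record the failure, fall through to next supplier
    raise RuntimeError(f"all suppliers failed, last error: {last_error}")

# The primary supplier is down, so the request transparently lands on the fallback.
print(complete_with_failover("hello", ["primary", "fallback"], down={"primary"}))
```

The point of the design is that the caller only sees one model endpoint; which supplier actually served the request is the gateway's problem.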
FastMoss
Congrats on the launch, ZenMux team! A unified LLM gateway with smart routing is already valuable, but the “automatic compensation” angle is especially interesting — it’s rare to see reliability/quality guarantees treated as a first-class product feature. Curious how you define and measure “subpar results” (latency, hallucination rate, eval score, user feedback?) and what the compensation workflow looks like in practice.
ZenMux
@31xira Great question — and yes, we have algorithms to detect both latency spikes and content quality issues. When the conditions are met, compensation is triggered automatically.
That said, we're still iterating on the thresholds — this is our first version, and we're learning fast. Would love for you to stay tuned and share any feedback if you give it a try! 😊
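A trigger that covers both dimensions mentioned above (latency spikes and content quality) might look roughly like this. The thresholds and the quality-scoring field are placeholders, since ZenMux's actual detection algorithms and limits aren't public:

```python
from dataclasses import dataclass

# Illustrative thresholds only, not ZenMux's real limits.
LATENCY_LIMIT_MS = 2000    # flag responses far slower than the model's norm
MIN_QUALITY_SCORE = 0.6    # flag outputs scored low by a quality evaluator

@dataclass
class RequestRecord:
    latency_ms: float
    quality_score: float   # e.g. an automated eval score in [0, 1]

def compensation_due(rec: RequestRecord) -> list[str]:
    """Return the reasons (if any) this request qualifies for credits."""
    reasons = []
    if rec.latency_ms > LATENCY_LIMIT_MS:
        reasons.append("latency_spike")
    if rec.quality_score < MIN_QUALITY_SCORE:
        reasons.append("low_quality")
    return reasons

print(compensation_due(RequestRecord(latency_ms=3500, quality_score=0.9)))
# ['latency_spike']
```

In a real system the thresholds would likely be per-model and percentile-based rather than fixed constants.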
@31xira Thanks, Angie! You asked the million-dollar question regarding our workflow.
We actually designed two core mechanisms: Passive and Active compensation.
Passive Compensation (Live now): This happens in the background without user intervention. We use our internal 'insurance mining' algorithms to detect qualifying failure patterns and issue credits automatically. However, purely automated detection naturally has a technical ceiling.
Active Compensation (Coming H1 this year): This is where we see the most potential. Developers can use the ZenMux SDK to embed feedback controls directly into their AI products. When an end-user flags a poor result (via a standard interaction flow), we evaluate it and route the compensation through you (the developer) directly to that end-user.
We believe this 'Active' approach will be the real game-changer for building trust!
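The Active Compensation flow described above (end-user flags a result, the gateway evaluates it, and the credit is routed through the developer to that end-user) can be sketched as follows. Every function and field name here is invented for illustration; the actual ZenMux SDK interface is not yet published:

```python
def evaluate_flag(flagged_output: str) -> bool:
    """Stand-in evaluator; a real system would run eval models or heuristics."""
    return "error" in flagged_output.lower()

def handle_user_flag(developer_id: str, end_user_id: str, flagged_output: str) -> dict:
    """Decide whether compensation is due and record who the credit flows through."""
    approved = evaluate_flag(flagged_output)
    return {
        "developer": developer_id,
        "end_user": end_user_id,
        "approved": approved,
        # Credit flows gateway -> developer -> end user when approved.
        "credit_path": ["gateway", developer_id, end_user_id] if approved else [],
    }

result = handle_user_flag("dev_42", "user_7", "Error: empty completion")
print(result["approved"], result["credit_path"])
```

The interesting design choice is that the developer stays in the loop: the gateway never credits an end-user it has no relationship with directly.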
mymap.ai
Excited to follow your journey. Great launch!
ZenMux
@victorzh Thanks! Appreciate it. Stoked to have you along for the ride — more coming soon!
@victorzh Thank you so much! Really appreciate the support 🙌
KnowU
The most stressful part of using LLMs is wondering if the model secretly got worse. This fixes that.
ZenMux
@carlvert Totally. 🙏 Nothing worse than wondering if it's your prompt or the model just got dumber. We put the HLE tests and leaderboard out there so you can actually know. No more guessing games.
Appreciate you!
@carlvert Yes. The worst failures aren’t crashes — they’re subtle intelligence regressions.
That’s why we run ongoing HLE benchmarks and monitor routing drift continuously.
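Detecting the "model secretly got worse" case comes down to comparing recurring benchmark runs against a historical baseline. A minimal sketch, with made-up scores and a made-up tolerance (ZenMux's actual HLE pipeline and alerting thresholds aren't described here):

```python
def detect_regression(baseline: float, recent_scores: list[float],
                      tolerance: float = 0.05) -> bool:
    """Flag a model whose recent benchmark average fell below its baseline."""
    recent_avg = sum(recent_scores) / len(recent_scores)
    return (baseline - recent_avg) > tolerance

# Baseline 0.72; recent runs dipped enough to exceed the tolerance.
print(detect_regression(0.72, [0.65, 0.64, 0.66]))  # True
print(detect_regression(0.72, [0.71, 0.70, 0.72]))  # False
```

A production version would also track per-supplier scores so that routing drift (the same model name silently served by a weaker backend) shows up as a regression for one supplier but not another.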
Elser AI
What exactly is "model insurance"? Never heard of this before.
ZenMux
@elser_ai Appreciate it! 🙏 You hit it — the model insurance is new. Currently we cover two dimensions: 1) output quality (hallucinations, unexpected content), and 2) high latency. More dimensions coming soon.
But honestly the best part is what comes with the payout: real edge cases from your own usage. Long term, these insights help you iterate and improve your own product's user experience.
Curious to hear what you think once you try it! 😊