
ZenMux
An enterprise-grade LLM gateway with automatic compensation
799 followers
ZenMux is an enterprise-grade LLM gateway that makes AI simple and assured for developers through a unified API, smart routing, and an industry-first automatic compensation mechanism.
Elisi: AI-powered Goal Management App
Multiple suppliers for the same model + auto failover = no more "our model provider is down" incidents.
NewOaks AI
This product is in high demand. The only questions are the pricing and whether the insurance actually works.
ZenMux
@ray_luan Exactly why we built it. Pricing is usage-based, and the insurance (auto-comp) is fully automated—no manual claims. DM me if you'd like to see how it works!
Elser AI
What exactly is "model insurance"? Never heard of this before.
ZenMux
@elser_ai Appreciate it! 🙏 You're right: model insurance is new. We currently cover two dimensions: 1) output quality (hallucinations, unexpected content), and 2) high latency. More dimensions are coming soon.
But honestly the best part is what comes with the payout: real edge cases from your own usage. Long term, these insights help you iterate and improve your own product's user experience.
Curious to hear what you think once you try it! 😊
Sublime Todo
The automatic compensation mechanism is really clever. Balancing costs across multiple model providers is a pain point we've dealt with. How does it handle routing decisions when multiple providers offer similar performance but vastly different pricing? Does it learn from request patterns to optimize long-term?
ZenMux
Hey Product Hunt! 👋
I'm Haize Yu, CEO of ZenMux. We’ve been heads-down building an enterprise-grade LLM gateway that actually puts its money where its mouth is. I’m thrilled to finally get your feedback on it today.
Why we built this
Scaling AI shouldn't feel like "fighting the infra." As builders, we grew tired of:
Juggling dozens of API keys and messy billing accounts.
Sudden "intelligence drops" or latency spikes in production.
Paying full price for hallucinations without any fallback. 😅
We thought: What if a gateway didn’t just route requests, but actually insured the outcome?
What ZenMux brings to your stack
Built-in Model Insurance: We’re the first to offer automatic credit compensation for poor outputs or high latency. We take the risk, so you don't have to.
Dual-Protocol Support: Full OpenAI & Anthropic compatibility. Works out-of-the-box with tools like Claude Code or Cline.
Transparent Quality (HLE): We run regular, open-source HLE (Humanity's Last Exam) benchmarks and publish the results. We invest in these benchmarks to keep model routing honest.
High Availability: Multi-vendor redundancy means you’ll never hit a rate-limit ceiling.
Global Edge Network: Powered by Cloudflare for rock-solid stability worldwide.
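Since ZenMux speaks the OpenAI protocol, switching an existing integration usually means changing only the base URL your SDK points at. A minimal sketch of the request shape (the base URL path and model name below are illustrative assumptions, not documented values):

```python
import json

# Hypothetical gateway endpoint -- check the ZenMux docs for the real path.
ZENMUX_BASE_URL = "https://zenmux.ai/api/v1"  # assumption


def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload.

    Because the gateway is protocol-compatible, the request body is the
    same one your OpenAI SDK already sends; only the base URL changes.
    """
    return {
        "model": model,  # model identifier format is an assumption
        "messages": [{"role": "user", "content": user_message}],
    }


payload = build_chat_request("openai/gpt-4o", "Hello!")
print(json.dumps(payload, indent=2))
```

With the official OpenAI SDK this would amount to passing `base_url=ZENMUX_BASE_URL` (plus your ZenMux API key) when constructing the client; tools like Claude Code or Cline expose the same base-URL override.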
Pricing that scales
Builder Plan: Predictable monthly subscriptions for steady development.
Pay-As-You-Go: No rate limits, no ceilings. Scales freely with your traffic, and you only pay for what you actually use.
Launch Special
Bump up your credits! For a limited time: Top up $100, get a $10 bonus (10% extra).
One last thing...
What’s the biggest "production nightmare" you've faced with LLMs? Drop a comment—I'm here all day to chat!
Stop worrying. Start building. 🚀
https://zenmux.ai
KnowU
The most stressful part of using LLMs is wondering if the model secretly got worse. This fixes that.
ZenMux
@carlvert Totally. 🙏 Nothing worse than wondering if it's your prompt or the model just got dumber. We put the HLE tests and leaderboard out there so you can actually know. No more guessing games.
Appreciate you!