Launched this week

ZenMux
An enterprise-grade LLM gateway with automatic compensation
ZenMux is an enterprise-grade LLM gateway that makes AI simple and assured for developers through a unified API, smart routing, and an industry-first automatic compensation mechanism.

howell4change
I wish every time a product didn't work this happened!
ZenMux
@howell4change Haha right? Wouldn't that be nice 😄 Appreciate you.
ZenMux
Hey Product Hunt! 👋
I'm Haize Yu, CEO of ZenMux. We’ve been heads-down building an enterprise-grade LLM gateway that actually puts its money where its mouth is. I’m thrilled to finally get your feedback on it today.
Why we built this
Scaling AI shouldn't feel like "fighting the infra." As builders, we grew tired of:
Juggling dozens of API keys and messy billing accounts.
Sudden "intelligence drops" or latency spikes in production.
Paying full price for hallucinations without any fallback. 😅
We thought: What if a gateway didn’t just route requests, but actually insured the outcome?
What ZenMux brings to your stack
Built-in Model Insurance: We’re the first to offer automatic credit compensation for poor outputs or high latency. We take the risk, so you don't have to.
Dual-Protocol Support: Full OpenAI & Anthropic compatibility. Works out of the box with tools like Claude Code or Cline (a quick sketch follows this list).
Transparent Quality (HLE): We run regular, open-source HLE (Humanity's Last Exam) tests. We invest in these benchmarks to keep model routing honest.
High Availability: Multi-vendor redundancy means you’ll never hit a rate-limit ceiling.
Global Edge Network: Powered by Cloudflare for rock-solid stability worldwide.
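If you're already on the OpenAI SDK, switching to a gateway like this should only mean swapping the base URL and API key. A minimal sketch; the endpoint, environment variable name, and model slug below are placeholder assumptions, so check the ZenMux docs for the real values:

```python
# Hypothetical sketch of calling ZenMux through the OpenAI Python SDK.
# The base URL, env var name, and model slug are assumptions for illustration.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://zenmux.ai/api/v1",      # assumed gateway endpoint
    api_key=os.environ["ZENMUX_API_KEY"],     # assumed env var name
)

response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4",        # assumed vendor/model slug format
    messages=[{"role": "user", "content": "Summarize our outage postmortem in 3 bullets."}],
)
print(response.choices[0].message.content)
```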
Pricing that scales
Builder Plan: Predictable monthly subscriptions for steady development.
Pay-As-You-Go: No rate limits, no ceilings. Pure stability that scales freely with your traffic. Only pay for what you actually use.
Launch Special
Bump up your credits! For a limited time: Top up $100, get a $10 bonus (10% extra).
One last thing...
What’s the biggest "production nightmare" you've faced with LLMs? Drop a comment—I'm here all day to chat!
Stop worrying. Start building. 🚀
https://zenmux.ai
BiRead
Model insurance for AI infra? That’s new. Curious to try it.
ZenMux
@luke_pioneero Appreciate it! 🙏 You hit it — the model insurance is new, but honestly the best part is what comes with the payout: real edge cases from your own usage, ready to plug back in and make your product smarter.
Curious to hear what you think once you try it! 🚀
@luke_pioneero Thank you! We built it because we felt infra shouldn’t shift all risk to builders.
Toki: AI Reminder & Calendar
ZenMux
@sophialgrowth Seeing "used ZenMux for a while" honestly made our day. Thanks so much! 🥹
vik_sh
Congrats on the launch! The model insurance angle is interesting, especially for production use cases where reliability matters more than raw capability. How do you objectively determine when an output qualifies as poor versus just subjective dissatisfaction?
ZenMux
@vik_sh Thanks for the great question! 🙏 We've built our own detection algorithm, and right now we have two dimensions live:
Unexpected content generation
High latency
How do we detect "unexpected content generation"? One example: if a user asks two consecutive questions with the same intent, we treat that as a signal that they weren't satisfied with the first response. That's one of the ways we identify bad cases.
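A toy sketch of that consecutive-question signal, just to make it concrete. The embedding model and the 0.9 similarity threshold are illustrative assumptions, not the production detector:

```python
# Toy illustration of the signal described above: flag a turn when two
# consecutive user questions carry near-identical intent, suggesting the
# first answer missed the mark. Not ZenMux's actual algorithm.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    out = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(out.data[0].embedding)

def looks_like_repeated_intent(prev_q: str, next_q: str, threshold: float = 0.9) -> bool:
    a, b = embed(prev_q), embed(next_q)
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cosine >= threshold  # high similarity -> user likely re-asking

# A re-ask right after the first answer is a "bad case" candidate.
print(looks_like_repeated_intent(
    "How do I rotate my API key?",
    "Seriously, how do I rotate the API key?",
))
```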
The payout is just the outcome. The real value is: every flagged bad case is an edge case from your own business, ready to be used as context to improve your product experience.
That's where the data flywheel starts turning.
sandy_liusy
Automatic compensation is a bold promise. Love this angle. Congrats on launch 👏
ZenMux
@sandy_liusy Thanks! 🙏 The auto-compensation is the hook, but the real gold is that every payout is an edge case from your own business. Feed those insights back as context to improve your product experience. That's the data flywheel.
@sandy_liusy Thanks! We felt that infra providers shouldn’t only optimize throughput — they should stand behind output quality and reliability. That’s the bet we’re making.