Reviews praise TensorBlock Forge for unifying access to multiple AI models via a single, OpenAI-compatible endpoint, with several noting setup is quick and switching providers takes just a few lines. Developers like its cost-free positioning versus alternatives and say it reduces key, config, and routing headaches. Users highlight smooth automatic failover, reliability, and strong privacy posture. It’s viewed as especially helpful for projects juggling GPT-4, Claude, Gemini, and more, saving time and credits while supporting research experiments and production workflows.
TensorBlock Forge
Hey ProductHunt!
We're so excited to announce our newest product.
🚀 Introducing TensorBlock Forge – the unified AI API layer for the AI agent era.
At TensorBlock, we’re rebuilding AI infrastructure from the ground up. Today’s developers juggle dozens of model APIs, rate limits, fragile toolchains, and vendor lock-in — just to get something working. We believe AI should be programmable, composable, and open — not gated behind proprietary walls.
Forge is our answer to that.
🔗 One API, all providers – Connect to OpenAI, Anthropic, Google, Mistral, Cohere, and more.
🛡️ Security built in – All API keys are encrypted at rest, isolated per user, and never shared across requests.
⚙️ Infra for the agent-native stack – Whether you're building LLM agents, copilots, or multi-model chains, Forge gives you full-stack orchestration without the glue code.
💻 And yes — we’re open source.
We believe critical AI infrastructure should be transparent, extensible, and owned by the community. Fork us, build with us, or self-host if you want full control.
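The "one API, all providers" idea can be sketched roughly as follows. This is a hypothetical illustration, not Forge's actual API: the model identifiers are placeholders, and the payload simply shows the OpenAI-compatible request shape that stays identical across providers.

```python
# Hypothetical sketch: with an OpenAI-compatible gateway, switching
# providers is just a model-name change. Model IDs are placeholders,
# not confirmed Forge identifiers.
MODELS = {
    "openai": "gpt-4o",
    "anthropic": "claude-3-5-sonnet",
    "google": "gemini-1.5-pro",
}

def chat_payload(provider: str, prompt: str) -> dict:
    """Build the same OpenAI-format request body for every provider."""
    return {
        "model": MODELS[provider],
        "messages": [{"role": "user", "content": prompt}],
    }

# The call site never changes; only the provider key does.
for provider in MODELS:
    payload = chat_payload(provider, "Summarize this ticket.")
    # POST `payload` to the gateway's /v1/chat/completions here
```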
We’re just getting started. Come help us shape the future of AI agent infra.
Check out our product at https://tensorblock.co/forge
Star us on GitHub: https://github.com/TensorBlock
Join our socials: https://linktr.ee/tensorblock
Follow us on X: https://x.com/tensorblock_aoi
Let us know how you would use Forge to simplify your AI agent or workflow!
@tensorblock It's a great product; we actively include products like these in our newsletter https://hw.glich.co/ . Do let me know if you're open to a quick collab.
HelloTalk
I can't quite figure out what this does. Is it an OpenRouter competitor, or something else?
I'm currently using Bedrock for a chatbot; would I be able to swap it out for this to use Gemini, for example?
TensorBlock Forge
@lewislebentz Thanks for the great question!
Forge is a bit different from OpenRouter. While OpenRouter acts as a direct AI service provider, Forge is an open-source middleware that helps you manage, route, and unify access to multiple AI providers through a single, OpenAI-compatible API.
It doesn’t host the models directly; instead, you bring your own API keys, and Forge handles smart routing, formatting, and compatibility across providers. You can also integrate custom providers if needed.
In your case, Forge can absolutely help. You could swap in the Forge endpoint and configure it with your Gemini API key, or any other providers you use. Just make sure to reference the correct model name in your API calls, and Forge will route the request accordingly.
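A rough sketch of what that swap could look like, using only the standard library. The endpoint URL, key, and model name below are placeholders for illustration, not confirmed Forge values; the point is that the Bedrock SDK is replaced by a plain OpenAI-format request where routing to Gemini is just a model-string change.

```python
# Hypothetical sketch of an OpenAI-compatible chat request, the wire
# format a Forge-style endpoint would accept. The URL, key, and model
# name are placeholders, not confirmed Forge values.
import json
import urllib.request

FORGE_URL = "https://forge.example.com/v1/chat/completions"  # placeholder

def build_chat_request(model: str, prompt: str) -> dict:
    """Return an OpenAI-format chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Switching from Bedrock to Gemini becomes a one-string change:
payload = build_chat_request("gemini-1.5-pro", "Hello from my chatbot!")
req = urllib.request.Request(
    FORGE_URL,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_FORGE_KEY",  # placeholder key
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # left commented: placeholder endpoint
```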
And since Forge is fully open source, you're also free to self-host and customize it however you like :D
@lewislebentz @morrischeung Nice work and neat UI. Would this be something similar to LiteLLM?
TensorBlock Forge
@lewislebentz FYI, here is the open-source repo: https://github.com/TensorBlock/f..., and you can self-host it with a single command:
I have never seen anything like this but love the concept. It has a looot of use cases. Congrats on the launch @tensorblock @morrischeung @wilson_chen7
TensorBlock Forge
@tensorblock @morrischeung @lakshya_singh Thanks, bro! All the best with your launch as well.
Congrats, Dennis and team! Forge looks like a powerful step forward for developers building in the AI agent era. From the Best of Web team.
TensorBlock Forge
@nimaaksoy Thanks so much! Really appreciate the support from the Best of Web team :)!
We’re excited to help developers and communities, Forge is just the beginning. Looking forward to staying connected and exploring ways to collaborate!
@wilson_chen7 Check out bestofweb.site; I am sure we could collaborate.
TensorBlock Forge
@nimaaksoy Absolutely! Once I catch up on a much-needed 5 hours of sleep after the launch day chaos, I’ll sit down at my computer and take some time to dive in. I’ll definitely reach out afterward.
Does Forge support fine-tuned private models (e.g. custom LLMs trained on internal data), or is it mainly optimized for major public models like OpenAI, Anthropic, etc.?
Also curious: how do you handle latency and provider fallback if one model endpoint fails?
TensorBlock Forge
@iam_sofia Thanks for the question! Currently we support only major public models; private model support is on our roadmap.
Congratulations, Dennis and team. Looks really interesting. One quick question: how should I differentiate between Forge and, say, LiteLLM?
TensorBlock Forge
@abid_mohammed Thanks so much for the support and kind words! From what I understand, LiteLLM functions primarily as a locally hosted proxy (whether for personal or enterprise use), so it doesn’t need to address the same concerns around performance, security, privacy, and key management.
In contrast, Forge is a fully managed cloud service that works out of the box. It includes TensorBlock as the default model provider, offering access to free models and long-tail open-source models hosted on our own infrastructure. As a SaaS platform, the Forge key can be used seamlessly across devices and environments, providing users with flexibility without requiring local setup.