Warehouse every AI request to your database. Query logs to analyze usage and costs, evaluate models, and generate datasets. Install the proxy with 2 lines of code. It’s free to get started, and you own your data.
Hey Product Hunt 👋 Emma and Chris here from Velvet. We built an AI gateway to warehouse OpenAI and Anthropic requests to your database. Engineers use Velvet logs to analyze usage and costs, evaluate models, and generate datasets.
Our first product was an AI SQL editor. It worked really well, but we had limited insights into what happened between our app and OpenAI. We started storing requests to our database, giving us the ability to query data directly. It gave us so much more control over developing our AI features.
We didn’t think of it as a product until one of our customers (Find AI) asked to use it. We warehoused over 3 million requests for them in the first week, logging up to 1,500 requests per second during their launch. Now we have many startups using Velvet daily for caching, evaluations, and analysis of opaque endpoints like the Batch API.
It's easy to get started! Once set up, we warehouse every request to your database.
Requests are formatted as JSON. You can include custom metadata in the headers, like user ID, org ID, model ID, and version ID. This means you can run complex SQL queries unique to your app: granularly calculate costs, evaluate new models, or identify datasets for fine-tuning.
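As a rough illustration of what this enables, here's a self-contained sketch of querying warehoused requests with SQL. The table schema, column names, and per-request costs below are illustrative assumptions (not Velvet's actual schema), and SQLite stands in for your PostgreSQL instance just to keep the example runnable:

```python
# Sketch of SQL analysis over warehoused AI requests. The schema and cost
# figures are illustrative assumptions; a real setup would query the JSON
# request logs in your own PostgreSQL database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ai_requests (
        id INTEGER PRIMARY KEY,
        user_id TEXT,              -- passed as custom header metadata
        model TEXT,
        prompt_tokens INTEGER,
        completion_tokens INTEGER,
        cost_usd REAL              -- illustrative per-request cost
    )
""")
rows = [
    ("user_a", "gpt-4o",      1200, 300, 0.0105),
    ("user_a", "gpt-4o-mini",  800, 200, 0.0002),
    ("user_b", "gpt-4o",      5000, 900, 0.0340),
]
conn.executemany(
    "INSERT INTO ai_requests (user_id, model, prompt_tokens, completion_tokens, cost_usd) "
    "VALUES (?, ?, ?, ?, ?)",
    rows,
)

# Granular cost analysis: total spend per user, highest first.
spend = conn.execute("""
    SELECT user_id, ROUND(SUM(cost_usd), 4) AS total_usd
    FROM ai_requests
    GROUP BY user_id
    ORDER BY total_usd DESC
""").fetchall()
print(spend)  # user_b first (highest spend)
```

The same pattern extends to the other use cases mentioned: group by model to compare costs before a model switch, or filter by a version ID header to isolate a single prompt variant.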
We'd love to hear from you! Email team@usevelvet.com with any feedback or questions.
@emmalawler24 Congratulations on the launch! 🎉 I’m curious, how does Velvet handle data security and compliance with such high request volumes? Excited to see how it evolves!
Hey @devindsgbyq - thanks! For teams with large volumes of data in production, we warehouse requests to their own PostgreSQL instance. This resolves most security and compliance concerns since the company maintains full control of its data. Our app and infrastructure are built on Cloudflare, Supabase, Neon, Vercel, and OpenAI. All are SOC 2 Type II compliant.
@emmalawler24 Congrats on the launch!! You've come quite a way since I last saw a demo, and it's super exciting to see.
@elijas - Thanks, Paul! Glad you've been following our journey.
🔌 Plugged in
Congratulations on launch! Our company Find AI has been an early customer of the AI gateway, and it's been a game-changer for our engineering team to interact with OpenAI.
Here are some of the things we do with the millions of requests we warehouse weekly with Velvet:
- Cost analysis: What's AI spend per user, per search, for logged out users, and in prod vs. development?
- Debugging: When a query does something weird, trace it back to the OpenAI call. We traced one query that occasionally returned gibberish, and realized that the temperature was just too high.
- Caching: we use this particularly on smoke tests, so we still query OpenAI when our prompt/model/etc. changes to verify expected behavior, but skip paying when nothing has changed.
- Testing model upgrades: This week we've been exporting queries we ran against gpt-4o, then re-running the exact queries against gpt-4o-mini. To our surprise, a couple of our classification queries had identical accuracy on the mini model, so we could immediately switch and save money.
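The caching idea in the list above can be sketched in a few lines: key the cache on a hash of everything that affects the response (model, prompt, parameters), return the stored answer when the key repeats, and only call the provider when something changed. This is a generic sketch under those assumptions, not Velvet's implementation, and `call_model` is a stub standing in for a real OpenAI request:

```python
# Generic response-caching sketch: skip a (paid) model call when the
# model + prompt + params are unchanged. Not Velvet's implementation;
# call_model is a stub standing in for a real OpenAI request.
import hashlib
import json

cache = {}
calls_made = 0  # counts how often we actually "pay" for a model call

def call_model(model: str, prompt: str, **params) -> str:
    global calls_made
    calls_made += 1
    return f"response from {model}"  # stub for a real API response

def cached_completion(model: str, prompt: str, **params) -> str:
    # Hash everything that affects the output, so any change busts the cache.
    key = hashlib.sha256(
        json.dumps({"model": model, "prompt": prompt, "params": params},
                   sort_keys=True).encode()
    ).hexdigest()
    if key not in cache:
        cache[key] = call_model(model, prompt, **params)
    return cache[key]

a = cached_completion("gpt-4o", "Classify this ticket", temperature=0)
b = cached_completion("gpt-4o", "Classify this ticket", temperature=0)  # cache hit, no new call
c = cached_completion("gpt-4o-mini", "Classify this ticket", temperature=0)  # new model, new call
```

This is exactly why it works well for smoke tests: an unchanged prompt/model pair hits the cache for free, while any edit produces a new key and a fresh, verifiable API call.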
@philipithomas - thanks! Find AI has been a pivotal early design partner. Their engineering team uses Velvet daily, providing feedback on exactly what they need implemented to run and optimize a high-usage AI application. It's inspiring to see the Find AI product get better every day!
Congrats on the launch, and thank you! We're happy users of Velvet: it's not only very simple to set up and use, but also highly reliable and very well designed. Kudos team 🙌
@mehdidjabri - it's been great to see the revo.pm product evolve and scale. Excited to be part of your journey as you automate more product management workflows.
I see three killer features:
1. Integration in 2 lines means I can try it w/o much work
2. Hosting my own DB (obviously important)
3. Caching out of the box
The third feature is crazy important for nearly everyone building with LLMs. I can't think of a reason not to try Velvet - congrats on the launch, really excited to see where this goes
@hunter_brooks - thanks! Yep, exactly. Caching is a surprise killer feature for anyone building AI features. Excited to onboard Ellipsis to Velvet!
The Velvet team is stellar--I've been watching them constantly improve the product for the better part of a year by eliciting and responding (swiftly) to feedback.
@thedatadavis - Thanks, Chris! Happy to have you following our journey.
I'm usually working with AI companies that use LLMs to improve their models; I'll tell them to check out Velvet. Seems like a useful tool for bringing transparency to engineers' workflows. Congrats on the launch @emmalawler24 🤠
Hey Emma and Chris,
Does using Velvet as a proxy introduce any noticeable latency?
For companies concerned about data security, what measures are in place to ensure the safety of the stored requests?
Congrats on the launch!
@kyrylosilin - thanks! Velvet’s proxy latency is nominal. And with our caching feature enabled, we can improve response times by more than 50%. You can read an article about our latency benchmarks here - www.usevelvet.com/articles/velve...
For products in production at scale, we warehouse requests directly to the team's PostgreSQL instance. This resolves most data security concerns since the company maintains full control of their data.
@chirag_mahapatra - Thanks for being an early adopter of the Velvet gateway! Blaze AI's workflows are the perfect application for our tooling.
Congrats on the launch! Making our LLM requests actually queryable has been on our wishlist and isn't quite satisfied by logging/observability providers we know about. Excited to test it out 🔥
Velvet
Try a sandbox demo → usevelvet.com/sandbox (no signup needed)
Read the docs → docs.usevelvet.com
Get started with two lines of code 🧑🏻‍💻
Watch a demo video to learn more.