Portkey's prompt engineering platform lets teams build, test, and deploy AI prompts at scale. It includes a prompt playground, version control, collaborative editing, and AI gateway integration supporting 1600+ models and AI agent frameworks.
Portkey is a fantastic tool for any product that relies heavily on LLMs. It's great for creating automatic failovers and custom LLM routing based on variables we send from the app.
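The failover-and-routing idea this review describes can be sketched generically. The provider names and callables below are hypothetical stand-ins for illustration, not Portkey's actual API:

```python
# Minimal sketch of failover routing across LLM providers.
# The providers here are fake stand-ins used only for illustration.

def route_with_failover(prompt, providers):
    """Try each (name, call) provider in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real gateway would match specific error types
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

def flaky_primary(prompt):
    raise TimeoutError("primary model timed out")

def stable_backup(prompt):
    return f"backup answer to: {prompt}"

# The first provider fails, so the request falls through to the backup.
used, answer = route_with_failover("Summarize this doc", [
    ("primary", flaky_primary),
    ("backup", stable_backup),
])
```

A production gateway layers retries, timeouts, and conditional routing rules on top of this basic pattern.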
👋 Hey Product Hunt! I’m Ayush, Co-Founder and CTO of Portkey AI
We're very excited to introduce Portkey's Prompt Engineering Studio on Product Hunt
After seeing teams struggle with inconsistent prompt performance and scattered tooling, we built the solution prompt engineers have been waiting for.
Today we're launching the world's most advanced prompt engineering toolkit at prompts.new 🚀
Why we built this: Working with AI models requires precision in prompting, but existing solutions leave teams guessing with slow, manual processes and no version control.
What makes Portkey’s Prompt Studio special:
- Powerful playground with side-by-side model comparison
- Test across 1600+ AI models instantly
- Version control with labeled deployments
- Collaborative prompt libraries
- Mustache templating and reusable partials
- AI-powered prompt optimization
- Native support for LangGraph, CrewAI, and other agent frameworks
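The Mustache templating and reusable partials in the list above can be illustrated with a toy stdlib-only renderer. This is a sketch of the `{{variable}}` / `{{> partial}}` idea, not Portkey's implementation:

```python
import re

def render(template, variables, partials=None):
    """Toy Mustache-style renderer: expands {{> partial}} includes,
    then substitutes {{variable}} placeholders."""
    partials = partials or {}
    # Expand partials first so their own {{variables}} get filled too.
    template = re.sub(
        r"\{\{>\s*(\w+)\s*\}\}",
        lambda m: partials.get(m.group(1), ""),
        template,
    )
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), "")),
        template,
    )

# A shared "tone" partial reused across prompts:
partials = {"tone": "Answer in a {{style}} tone."}
prompt = render(
    "You are a {{role}}. {{> tone}}",
    {"role": "support agent", "style": "friendly"},
    partials,
)
# prompt == "You are a support agent. Answer in a friendly tone."
```

Partials let a shared fragment (like a tone instruction) be reused across many prompts while still receiving variables at render time.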
Our platform helps teams craft, optimize, and deploy AI prompts 75% faster, from experimentation to production.
We've put a lot of care into building Portkey, and we really hope you like the product!
Check it out at prompts.new and let us know what you think!
P.S. We call it the IDE for Prompt Engineers.
@ayush_garg_xyz Portkey’s Prompt Studio solves so many pain points like side-by-side model comparison, version control, and AI-powered optimization. Congrats on the launch
@ayush_garg_xyz Do you support APIs for data import as part of the prompt eval process?
@ayush_garg_xyz Finally, a proper toolkit for prompt engineers! Working with AI models has always felt a bit like trial and error, so having a dedicated studio with version control and real testing tools is a game-changer. Love the focus on collaboration too
@ayush_garg_xyz @jontronic Hey Jonathan, no, that is not supported at the moment. We are building a better Evals feature on top of this now.
This is an amazing idea. I've been looking for a platform to help me create prompts, and I hope this is it. Giving it a go today! Congratulations on the launch, and keep it up!
Hi @ramywafaa, thanks for the excitement! 🎉 We’re thrilled you’re giving it a go. Hope it makes your prompt creation process easier and faster; looking forward to hearing how it works for you!
This looks amazing, @vrv18 and @ayush_garg_xyz! Love how Portkey streamlines prompt engineering with powerful comparisons and collaboration.
How does your version control system handle prompt iterations? Can teams see performance changes over time and easily roll back to the best-performing versions?
@ayush_garg_xyz @harkirat_singh3777 Thank you Harkirat! We don't have a polished evals product right now; you can run tests on prompts with our API, but not in the UI. All prompt iterations are versioned, though, and you can easily roll back to or reuse any version.
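The versioning-and-rollback behavior described in this reply can be sketched as a minimal in-memory store (hypothetical names; not Portkey's actual data model):

```python
class PromptVersions:
    """Append-only prompt history with labeled deployments and rollback."""

    def __init__(self):
        self.versions = []          # version n is self.versions[n - 1]
        self.labels = {}            # e.g. {"production": 2}

    def save(self, text):
        self.versions.append(text)
        return len(self.versions)   # 1-based version number

    def deploy(self, label, version):
        self.labels[label] = version

    def get(self, label):
        return self.versions[self.labels[label] - 1]

store = PromptVersions()
v1 = store.save("Summarize: {{doc}}")
v2 = store.save("Summarize in 3 bullets: {{doc}}")
store.deploy("production", v2)
store.deploy("production", v1)      # roll back to the earlier version
```

Rolling back is just repointing the label at an earlier version; the full history stays intact.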
Portkey is an essential technology in the LLM/GenAI stack. We struggled to find a stable LLM proxy system to run on a serverless architecture since lamatic.ai is fully serverless. We were so desperate that we even considered building a solution ourselves. Then @ayush_garg_xyz introduced us to Portkey, and it completely transformed our situation.
Describing it as merely useful would be an understatement; it's a must-have. We definitely need more open-source and serverless technologies for GenAI.
I do have some suggestions for improvement:
1. Provide an API or documentation portal for available model support; currently, we have to manually test each model.
2. Include the option to add custom providers or methods.
3. Implement usage monitoring via the API.
4. Support multi-tenancy.
Congratulations to the team on the launch; looking forward to growing together.
Been using Portkey for a couple of years now, and the new Prompt Engineering Studio has been a perfect match for writing and iterating on prompts at scale. Excited for the launch, lightspeed!
@prrranavv Thank you so much Pranav for your feedback and suggestions so far! They have been instrumental in building the Prompt Studio.
Amazing product! This addresses a critical pain point in the AI workflow! The version control with labeled deployments is what's missing in most prompt engineering processes. As someone who constantly tweaks prompts for different use cases across product, business, and content, the ability to compare models side-by-side would save countless hours of guesswork.
Have you considered adding a prompt performance analytics feature that tracks which versions perform best across different contexts or user segments? That kind of data-driven insight could help teams standardize on proven patterns rather than relying on individual intuition.
@ayush_garg_xyz @anas_turki Thank you Anas!
@ramywafaa Glad you like this Ramy, thank you!
Is this more for system prompts, or can it be used by general users so they know how to ask better questions?
Hey @lylia_djaitpaulien, great question! You can play with both system prompts and user prompts in the Portkey playground. You can also use prompt variables inside your prompt to bring in your user's context.
@ayush_garg_xyz @amanintech Thanks so much Aman for your comment. We are also a fan of what you're building at Lamatic!
Onto your suggestions:
1 - This is a must-have, and is coming soon to Portkey
2 - This is already supported! Check out https://portkey.ai/docs/integrations/llms/byollm and https://portkey.ai/docs/api-reference/inference-api/gateway-for-other-apis
3 - This is supported on app.portkey.ai as well
4 - We have released an update for this as well, and you can create multiple Portkey workspaces inside a single org.