Repo Prompt helps AI models understand your codebase without wasting tokens on irrelevant code. Context Builder analyzes your project and selects the files and functions needed for your task, building dense context that fits in model limits. It works with your existing AI subscriptions (Claude MAX, ChatGPT Plus, Gemini) — no extra API costs. The MCP server turns Repo Prompt into a backend for Claude Code, Cursor, and Codex, giving them context analysis and discovery they can't do on their own.

Product Hunt
Repo Prompt
@curiouskitty Hey! The big difference from plan mode in existing tools is that Repo Prompt's Context Builder operates earlier in the flow. It splits planning and research into separate tasks.
Context Builder's job is to navigate your repo directly using an agent like Claude Code or Codex (the app handles orchestration via their headless CLI modes), isolate the relevant files, and carve out the most relevant sections of each. The discovery agent also writes you an optimized prompt that includes your task, plus information about the codebase architecture and class relationships.
The result is a dense context prompt that can be used for planning. If you've ever tried to use reasoning models like GPT 5.2, one of the challenges is getting them to spend their reasoning tokens on analysis instead of navigation. This gives you the best possible way to prompt those models and really pull the frontier of intelligent architectural planning forward.
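The tradeoff described above — packing the most relevant code into a fixed budget so the model reasons instead of navigating — can be illustrated with a toy sketch. This is not Repo Prompt's actual algorithm; the scoring, file names, and token counts below are invented purely for illustration. The idea is simply to greedily fit the highest-relevance sections into a token limit:

```python
# Toy illustration of dense context packing (NOT Repo Prompt's real
# implementation): greedily fit the highest-relevance code sections
# into a fixed token budget, so the model spends its reasoning tokens
# on analysis rather than navigation.

def build_context(sections, budget_tokens):
    """sections: list of (relevance, token_count, text) tuples."""
    chosen = []
    used = 0
    # Highest relevance first; skip anything that would bust the budget.
    for relevance, tokens, text in sorted(sections, reverse=True):
        if used + tokens <= budget_tokens:
            chosen.append(text)
            used += tokens
    return "\n\n".join(chosen), used

# Hypothetical scored sections discovered by an agent pass:
sections = [
    (0.9, 1200, "# auth/session.py: Session.refresh()"),
    (0.7, 3000, "# auth/tokens.py: full module"),
    (0.4, 5000, "# vendored/oauth_lib.py"),
]
context, used = build_context(sections, budget_tokens=4500)
```

In this sketch the two highest-scoring sections fit (4,200 tokens) and the low-relevance vendored file is dropped — the same kind of pruning that keeps irrelevant code out of the final prompt.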
I invite you to read how some of the other commenters here actually use the app. It's highly automated, and with the CLI or MCP it can fit into your existing Cursor or Claude Code workflows and enhance their built-in planning and navigation.
The process I described above is now automated with a convenient slash command. You can simply type /rp-build, and Repo Prompt will return with codebase analysis and a plan built from that dense context prompt, ready for your agent to get to work with.
Puppr
I've been using Repo Prompt for about a year now and it's become my go-to tool for AI-assisted development.
My setup: I use Claude Code with Opus 4.5 as my main coding agent, but I also bring in GPT-5.2, especially during architecture and planning. Repo Prompt is the context layer that lets these models collaborate - I can curate exactly what each one sees and keep everything in sync. Instead of being stuck with one model's strengths, I get the best of both.
Just shipped my latest iOS app using this workflow exclusively.
Beyond the tool itself, the community is incredibly valuable. The Discord is active, Eric is constantly listening to feedback, and he ships high-quality updates at a speed I haven't seen from other dev tools. Can't recommend this enough.
Been using Repo Prompt daily for months. My workflow: GPT 5.2 High or Pro for planning, Opus 4.5 for implementation, then GPT 5.2 High for review. Repo Prompt keeps the same codebase context across all three, so each model builds on what the last one did.
The MCP integration with Claude Code is where it really clicks. Context building, implementation, review - all orchestrated end-to-end without manual handoffs.
Also want to highlight the Discord community. I'm active there, helping other users get set up, but I'm also learning a ton from how others use it. It's one of the most responsive dev communities I've been part of. @eric_provencher ships features faster than we can request them.
Context assembly is quietly becoming infrastructure for AI coding. Tools that formalize it instead of hand-waving over repo search feel inevitable.
Minara
Congrats on the launch! @eric_provencher @chrismessina
As a PM managing an AI product, context management has become one of the most critical (and most underestimated) challenges in our workflow. Last week I was commenting on Conversation API about multi-model routing, and today Repo Prompt reminds me that "context" is the universal problem across all AI applications — whether it's chatbots, coding agents, or data analysis tools.
What I love about Repo Prompt:
The "No Extra API Costs" Model: This is huge. So many tools try to become yet another AI subscription, but you're positioning this as infrastructure that enhances existing subscriptions (Claude MAX, ChatGPT Plus, Gemini). That's the right abstraction.
The MCP Server Approach: Turning Repo Prompt into a backend for Claude Code, Cursor, and Codex is brilliant. These tools are powerful but lack intelligent context analysis. You're filling a real gap in the ecosystem.
Token Efficiency: In our product, we're constantly battling token costs. When you're running hundreds of AI operations per day across different models, wasting tokens on irrelevant code adds up fast. A tool that automatically builds dense, relevant context is a game-changer.
Questions I have:
Context quality metrics: How do you measure whether the selected context is actually relevant? Do you have any feedback loops or success metrics?
Team collaboration: Can multiple team members share context configurations? For example, if I define a good context for a specific task, can my teammate reuse it?
Sensitive code handling: How does Repo Prompt handle sensitive code (API keys, credentials, proprietary algorithms)? Can I exclude certain files or patterns?
Cross-repo context: Can it analyze context across multiple repos? In our product, we often need to understand how changes in one service affect another.
This feels like the missing piece in the AI coding workflow. Excited to try it out!
One of the best additions to my dev workflow. It's a great tool for rapidly improving the precision of your development with coding agents - saving you time and money. One of the best things about this app is the rapid development lifecycle and how close Eric is to the community of users; he's both opinionated and open-minded. Join the Discord and help guide the tool's development.
Congrats on the launch! I like how Repo Prompt focuses on context quality instead of just prompt wording - that's usually where things break with coding agents.