RepoPrompt is the ultimate AI dev productivity enhancer for devs and teams that deliver. Two features make it stand out. One is the Context Builder: the ability to precisely carve out the context an LLM needs to execute a task optimally. The other, via MCP, is multi-model collaboration within a single context window: one model architects a solution, another challenges it, and yet another offers a lateral take, all sharing the same codebase context and conversation history. The context selection system is surgically precise: codemaps give structural awareness without burning tokens, while you curate exactly which files get full attention. Running proposals through multiple models before touching code has caught architectural flaws that would've cost hours to unwind.
Raycast
Repo Prompt is like a super skeleton for creating better prompts within specific repos or projects.
It'll help you produce refined prompts so that when you hand off to your coding agent of choice, it'll get the job done more efficiently and intentionally.
Enjoy!
As a solo dev, context switching is the biggest pain. The mental overhead of manually curating files for Claude/Cursor is exhausting. 💀
The MCP integration specifically catches my eye (I'm currently experimenting with Claude Code workflows).
Quick question: Does the Context Builder handle large, unstructured "spaghetti" codebases well (to help refactor), or does it rely on existing clean architecture to find the right files?
@soyaoda I'm just a user, but RepoPrompt can handle anything, whether it's spaghetti or a large open-source codebase. The MCP tool does a fast code scan (among other things) to ensure the context is relevant and fits within the model's limits, slicing out the irrelevant parts. What's even more amazing is that with MCP you only type a slash command, /rp-build or /rp-investigate (for more complex analysis), followed by a normal prompt like "investigate how to fix X" along with any context you have, and off it goes. In practice you rarely have to do anything manually. What's more, if you have access to ChatGPT Pro, you can produce a 50-60K token prompt with RP, paste it into the app, and bring the result back into your AI.
I can tell you from experience you will never have to curate manually for most tasks and when you do need to refine it you can focus on the really high value part of curation.
@barons Thanks for the detailed breakdown! The /rp-build command sounds exactly like what I need to stop the manual copy-paste madness.
Good tip on the token limits for Pro users too—definitely going to test that out. Appreciate the insight!
Repo Prompt
@soyaoda It works well in fairly large codebases, even messy ones, no problem. One caveat, though: how good it is depends on the model/agent used to run it. I recommend GPT 5.2 Codex Medium, but it works with both Gemini and Claude as well. You can read about model recs here.
The app has a free trial if you want to see how it handles your repo before committing anything.
@eric_provencher Appreciate the confirmation (and the specific model rec)!
Handling "messy" codebases is the real stress test for me, so I'll give the free trial a spin this weekend to see how it cleans up my mess. Congrats again on the launch!
The Context Builder approach sounds practical for larger codebases—isolating relevant code before passing to reasoning models makes a lot of sense. I'm curious how it handles multilingual codebases where comments and variable names mix languages. Does the discovery agent factor in those language patterns when building context?
Repo Prompt
@yamamoto7 The context builder is based on plain-text processing, so it doesn't matter if the codebase mixes many languages. The agent will intelligently find what's most relevant to the task, and this even works across several repos, which you can set up in a Repo Prompt workspace. One note: by default it ignores git-ignored files, and you can isolate where it's allowed to search with further filtering as needed.
There's a list of supported codemap languages as well that help the discovery agent navigate the codebase more efficiently. You can read about that feature here. Codemaps aren't required for the discovery agent to work effectively, but they also help the model reduce context for types that are referenced without their full implementation being needed.
Repo Prompt looks awesome! Super excited about the potential for boosting developer productivity. How does it handle repos with a lot of historical branches?
Repo Prompt
@jaydev13 Hey, historical branches shouldn't matter much for Repo Prompt! I've been working hard to optimize the app for large codebases. Be sure to give the free trial a go to see how well it works for your code.
This tackles a real problem in AI-assisted development by building focused, task-relevant context instead of dumping entire repos into the model. The MCP server integration is a smart move. How do you ensure important dependencies aren't missed, and does it support large monorepos or legacy codebases?
Repo Prompt
Repo Prompt lets you build precise context from your codebase so reasoning models actually understand what you’re working on.
I’ve been building it for over a year now. With MCP and CLI support, it fits into any agent workflow — Claude Code, Codex, Cursor, whatever you use.
The /rp-build command automates the whole loop: it researches your codebase, builds a plan, and hands off to your agent with the right context already loaded.
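For readers new to MCP workflows, the loop described in this thread can be sketched as a shell-comment walkthrough. Note the server registration step is an assumption for illustration: the thread doesn't document Repo Prompt's actual registration command or server name, only the /rp-build and /rp-investigate slash commands, so the placeholder names below are hypothetical.

```shell
# Hypothetical setup sketch (registration syntax and server/command names
# are placeholders, not from this thread):
#
#   claude mcp add repo-prompt <repo-prompt-mcp-server-command>
#
# Once the MCP server is connected, the workflow from the comments above is,
# inside an agent session (Claude Code, Codex, Cursor, etc.):
#
#   /rp-build fix the flaky login test
#       -> researches the codebase, builds a plan, and hands off to the
#          agent with the curated context already loaded
#
#   /rp-investigate investigate how to fix X
#       -> deeper analysis pass for more complex tasks
#
# Per the thread, context is sliced to fit the model's limits automatically,
# so manual file curation is rarely needed.
```

This is a workflow sketch only; consult the Repo Prompt docs for the real setup steps.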