How many tokens does your AI agent burn just finding the right file?
I've been tracking this and it's kind of wild.
Every time I ask an agent to change something on the frontend ("fix the padding on that card," "make the CTA button blue"), the actual edit takes maybe 200 tokens. But before that happens, the agent:
Greps the codebase for matching components
Reads 5-10 candidate files to build context
Asks me to confirm which one it should edit
Sometimes still picks the wrong one and has to backtrack
That search loop can easily eat 5,000-10,000 tokens before a single line of code changes. On a big codebase with hundreds of components, it's even worse. The agent is doing the equivalent of opening every drawer in the kitchen to find a fork.
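If you want to sanity-check that ratio yourself, here's a rough sketch of how I've been eyeballing it. It assumes a hypothetical transcript format (a list of tool calls with their payloads — your agent's logs will differ) and uses the common ~4 characters/token heuristic instead of a real tokenizer:

```python
# Rough split of an agent transcript into "search" vs. "edit" tokens.
# Assumes a hypothetical log format: [{"tool": ..., "content": ...}, ...].
# Token counts use the ~4 chars/token heuristic, not a real tokenizer.

SEARCH_TOOLS = {"grep", "read_file", "list_dir"}  # adjust to your agent's tool names

def approx_tokens(text):
    # Crude estimate: ~4 characters per token for English/code.
    return max(1, len(text) // 4)

def token_split(events):
    totals = {"search": 0, "edit": 0}
    for e in events:
        bucket = "search" if e["tool"] in SEARCH_TOOLS else "edit"
        totals[bucket] += approx_tokens(e["content"])
    return totals

# Toy transcript: one grep, one big file read, one small edit.
transcript = [
    {"tool": "grep",      "content": "x" * 2000},
    {"tool": "read_file", "content": "y" * 20000},
    {"tool": "edit_file", "content": "z" * 800},
]
print(token_split(transcript))  # search dwarfs edit
```

Even with the crude heuristic, the shape of the result is the point: the search bucket is an order of magnitude bigger than the edit bucket.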
It got bad enough that I built a tool to fix it (launching tomorrow 👀), but I'm curious: has anyone else measured this? How much of your agent's token budget goes to search vs. actual coding?
And what's your worst "the agent edited the wrong file" story? I have a few that still haunt me.