I launched my 9-Agent Resume Swarm. I immediately had to stop it from lying (and fix the UX).
Hey makers! 👋
Yesterday I launched Resume Squad.
What is it? After months of obsessing over the "ATS Black Box," I realized that standard one-shot ChatGPT prompts fail at writing resumes. They lose context, write generic fluff, and fail human recruiter checks. To beat a corporate Applicant Tracking System, a resume isn't an essay anymore; it's a data payload. It needs to compile like code.
How it works: Instead of a single prompt, I built a 9-Agent AI Swarm. The agents work in a sequential, automated pipeline:
🕵️ The Extractor: Scrapes the Job Description and maps your raw skills to a standardized taxonomy.
✍️ The Critic-Refiner Loop: A Writer Agent drafts the content using the strict STAR method. Then, a Reviewer Agent acts as a ruthless hiring manager, rejecting weak verbs and passing strict JSON feedback to an Optimizer Agent for surgical rewrites.
🎯 The Compiler: Finally, an ATS Scorer Agent runs a validation check simulating legacy parsers to ensure keyword density and formatting compliance.
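Roughly, the Critic-Refiner loop is a bounded retry: draft, review, and only rewrite while the reviewer keeps rejecting. Here's a minimal TypeScript sketch; `review` and `optimize` stand in for the actual LLM agent calls, and the `Review` shape is a simplified illustration, not my real JSON schema:

```typescript
// Simplified stand-in for the Reviewer Agent's strict JSON feedback.
interface Review {
  approved: boolean;
  weakVerbs: string[]; // verbs the reviewer rejected
  feedback: string;    // structured notes for the Optimizer Agent
}

// Run the critic-refiner loop: review the draft, and while the reviewer
// rejects it, hand the feedback to the optimizer for a surgical rewrite.
// maxRounds bounds token cost if the reviewer never approves.
function refineLoop(
  draft: string,
  review: (text: string) => Review,
  optimize: (text: string, verdict: Review) => string,
  maxRounds = 3
): string {
  let current = draft;
  for (let round = 0; round < maxRounds; round++) {
    const verdict = review(current);
    if (verdict.approved) return current;
    current = optimize(current, verdict);
  }
  return current; // best effort after maxRounds
}
```

Capping the rounds matters: without it, a reviewer prompt that's stricter than the optimizer can satisfy will loop (and bill) forever.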
The Launch-Day Bug (AI Safety vs. UX Friction): Building this strict pipeline surfaced a massive paradox just a few hours after going live today.
Because my Writer Agent is strictly instructed to use the STAR method (Situation, Task, Action, Result), if a user's raw data lacks specific metrics (like revenue generated), the LLM tries to be "helpful" by inventing fake numbers. I absolutely cannot let my users submit resumes with fabricated data.
The Backend Fix (Zero Hallucinations): I pushed a strict update to my prompts: If a metric is missing, the AI is forbidden from guessing. It must insert a placeholder like [Insert %] or [Insert X]. Safety achieved!
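A prompt rule like this can also be backed by a deterministic post-check: scan the draft for any number that never appears in the user's raw input and swap it for the placeholder. This is a sketch of that idea, not my exact production check; the regex and `PLACEHOLDER` constant are illustrative:

```typescript
const PLACEHOLDER = "[Insert X]";

// Replace any number in the generated draft that isn't grounded in the
// user's raw input. Collects every numeric token from the source, then
// swaps ungrounded figures (including percentages) for the placeholder.
function stripUngroundedMetrics(draft: string, rawInput: string): string {
  const sourceNumbers = new Set(rawInput.match(/\d+(?:\.\d+)?/g) ?? []);
  return draft.replace(/\d+(?:\.\d+)?%?/g, (match) => {
    const bare = match.replace("%", "");
    return sourceNumbers.has(bare) ? match : PLACEHOLDER;
  });
}
```

Even with a well-behaved prompt, a belt-and-suspenders check like this means a regression in the model's instruction-following can't silently reintroduce invented numbers.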
The UX Nightmare & Frontend Fix: But suddenly, I created a UX nightmare. A user might skim the resume, miss the placeholders, and export a PDF that says [Insert %]. I solved the backend hallucination but created a frontend risk.
So, I just shipped a live React update to my frontend:
🖍️ Interactive Highlighter: I built a custom markdown parser that intercepts the [Insert] tags on the fly and wraps them in a bright yellow, clickable highlight.
✏️ One-Click Edit: Clicking any highlighted metric instantly pops open the markdown modal so users can type their actual numbers.
🚨 Conditional Safety Banner: If the document contains an unfilled placeholder, a warning banner makes it impossible to mistake the draft for a finished resume.
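Under the hood, the highlighter boils down to a segmentation pass: split the markdown into plain-text and placeholder segments, so the React layer (not shown) can render each placeholder as a clickable highlight, and the banner condition falls out of the same pass. A simplified sketch with illustrative names:

```typescript
interface Segment {
  text: string;
  isPlaceholder: boolean;
}

// Matches [Insert %], [Insert X], etc., as emitted by the backend.
const PLACEHOLDER_RE = /\[Insert [^\]]*\]/g;

// Split markdown into alternating plain-text and placeholder segments,
// preserving the original text exactly (concatenating segments round-trips).
function segmentPlaceholders(markdown: string): Segment[] {
  const segments: Segment[] = [];
  let last = 0;
  for (const m of markdown.matchAll(PLACEHOLDER_RE)) {
    if (m.index! > last) {
      segments.push({ text: markdown.slice(last, m.index), isPlaceholder: false });
    }
    segments.push({ text: m[0], isPlaceholder: true });
    last = m.index! + m[0].length;
  }
  if (last < markdown.length) {
    segments.push({ text: markdown.slice(last), isPlaceholder: false });
  }
  return segments;
}

// Show the safety banner whenever any placeholder is still unfilled.
const needsBanner = (md: string) =>
  segmentPlaceholders(md).some((s) => s.isPlaceholder);
```

Deriving the banner from the same segmentation that drives the highlights keeps the two in sync: there's no way for the UI to highlight a placeholder the banner doesn't know about.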
The Ask for the Community: Finding the balance between "protecting the user from AI hallucinations" and "making the UI frictionless" is incredibly hard as a solo dev.
For the founders and devs here: How do you handle missing data in your generative AI workflows? Do you let the AI guess, or do you force the user to fill in the blanks?
I would love for you to test out the 9-Agent Swarm (and my new interactive highlighter) and give me your honest feedback!