How I Used Claude Code's Multi-Agent Orchestration and Laravel to Rebuild a Backend Overnight

by Tessa Kriesel

Last night at 8:50 PM, I sat down to rebuild a product's backend from scratch. By 11 PM, I had a complete Laravel API with auth, CRUD, real-time WebSockets, and chat. By 1:50 AM, the existing React Native frontend was wired up and running against it locally. Deployed to the cloud the next morning.

That's about two hours for the entire backend. Five hours total including frontend integration.

Here's how multi-agent orchestration, Laravel, and solid documentation made that possible.

What I Was Building

The product is a real-time sports analytics platform. Live play-by-play, game scores, chat, coach dashboards, player profiles, the whole deal. The original product ran on a single monolithic Python script with a FastAPI backend. It worked, but it wasn't built to scale.

I decided to tear down the backend and rebuild it on Laravel. The existing React Native frontend just needed to be rewired to the new API. And I needed it done fast, because we're shipping in August.

The Three-Tool Workflow

I ended up using different Claude tools for different parts of the process. I'd done some research beforehand on how to work with Claude effectively, and the workflow that emerged worked well enough that I want to break it down.

I used three:

Claude in the browser for strategic planning. I fed it the CEO's product documentation, the existing codebase context, and our technical requirements. It generated a phased build plan: six phases covering everything from auth to deployment prep. The plan wasn't vague; it was specific enough to hand directly to Claude Code as prompts.

Claude Code for execution. Each phase of the build plan became a prompt. Claude Code scaffolded the Laravel backend phase by phase: migrations, models, controllers, API resources, broadcasting events, chat with profanity filtering, security hardening. Six phases in about two hours.

Multi-agent orchestration for the messy middle. This is where it got interesting.

18 Agents Fixing Errors Simultaneously

After the backend was built and the frontend was wired up, I had 175 TypeScript errors across 33 files. The old Python API returned flat objects with string IDs. The new Laravel API returned nested resources with integer IDs. Every screen that touched game data, team data, or player data was broken.
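To make that mismatch concrete, here's a minimal sketch of the kind of shape change involved. The field names (game_id, home_team, and so on) are my illustration, not the product's actual schema:

```typescript
// Hypothetical old FastAPI shape: flat fields, string IDs.
interface LegacyGame {
  game_id: string;
  home_team_name: string;
  home_score: number;
}

// Hypothetical new Laravel API Resource shape: nested objects, integer IDs.
interface Team {
  id: number;
  name: string;
}

interface Game {
  id: number;
  home_team: Team;
  home_score: number;
}

// A sample of the new shape. Any screen code still written against
// LegacyGame's flat fields fails to type-check against this.
const sample: Game = {
  id: 7,
  home_team: { id: 1, name: "Hawks" },
  home_score: 21,
};
```

Every one of those 175 errors was some variant of this: a component reaching for a flat string field that had become a nested object with an integer ID.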

Instead of fixing them one file at a time, I spun up multiple Claude Code agents in parallel. At one point, 18 agents were running simultaneously, each tackling a different batch of errors. One agent handled coach dashboard screens. Another fixed player stat field names. Another rewired the scores screens from flat fields to nested team objects.

They all worked against the same codebase without stepping on each other, because each agent had a scoped task. The errors went from 175 to 0 across eight batches.
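A typical batch fix can be sketched as a small adapter: map the nested resource into the flat props a screen already expects, so the screen component itself stays untouched. All names here are hypothetical stand-ins, not the real schema:

```typescript
// Hypothetical nested shape from the new Laravel API.
type Team = { id: number; name: string };
type Game = {
  id: number;
  home_team: Team;
  away_team: Team;
  home_score: number;
  away_score: number;
};

// Flat props the old scores screen was written against.
type ScoreRowProps = {
  gameId: string; // the old API used string IDs
  homeTeamName: string;
  awayTeamName: string;
  homeScore: number;
  awayScore: number;
};

// Adapter: nested resource in, flat props out.
function toScoreRowProps(game: Game): ScoreRowProps {
  return {
    gameId: String(game.id),
    homeTeamName: game.home_team.name,
    awayTeamName: game.away_team.name,
    homeScore: game.home_score,
    awayScore: game.away_score,
  };
}
```

Because each adapter only touches one screen's data path, fixes like this parallelize cleanly across agents.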

That kind of parallelism just isn't possible with a single-threaded workflow, human or AI.

Why Laravel Was the Cheat Code

The original backend was a monolithic Python script. One file doing everything. Translating that into Laravel was almost unfairly easy.

Laravel gives you so much out of the box. Sanctum for API auth with token management. Eloquent for database models with relationships. API Resources for consistent JSON response shaping. Broadcasting with Reverb for WebSockets. Built-in rate limiting, validation, and middleware. Even the profanity filter for chat was straightforward with Laravel's pipeline pattern.
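From the React Native side, a Sanctum-protected API is just bearer-token HTTP. Here's a minimal client sketch; the endpoint path and base URL are assumptions for illustration, since Sanctum only standardizes the Authorization: Bearer scheme, not your routes:

```typescript
// Hypothetical base URL for illustration.
const API_BASE = "https://api.example.com";

// Build the headers a Sanctum token-auth API expects.
function authHeaders(token: string): Record<string, string> {
  return {
    Authorization: `Bearer ${token}`,
    Accept: "application/json",
  };
}

// Fetch a protected resource with the token attached.
async function fetchGames(token: string): Promise<unknown> {
  const res = await fetch(`${API_BASE}/api/games`, {
    headers: authHeaders(token),
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.json();
}
```

The frontend rewiring was mostly this: swap the old auth handshake for a bearer token, then point each screen at the new resource endpoints.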

What would've been weeks of custom plumbing in the old stack became php artisan make:model, define relationships, write a resource class, done. The CRUD patterns practically wrote themselves, and Claude Code was incredibly effective at generating Laravel code because the framework's conventions are so well-defined.

Six phases. About two hours. A complete API with auth, CRUD, game stats, play-by-play engine, real-time broadcasting, chat, and deployment prep.

Documentation Made the AI Work

The reason Claude's build plan was so accurate on the first pass? The founder's documentation was thorough.

The founder had detailed specs for every feature. Field definitions for game data. User role descriptions. Chat behavior rules. Stat categories and how they should be grouped. When I fed that into Claude in the browser alongside the existing codebase, the AI had enough context to generate a build plan that actually mapped to reality.

Garbage in, garbage out applies to AI workflows too. If the product documentation had been vague, I'd have spent half my time clarifying requirements instead of building.

The Full Timeline

Here's how the night actually went:

8:50 PM: Created the Laravel project, started Phase 1

9:32 PM: Auth system complete (Sanctum, login, register, roles)

10:00 PM: Core CRUD, search, dashboard, coach notes

10:15 PM: Game stats engine, play-by-play, data export

10:31 PM: Real-time broadcasting, chat, profanity filter

10:54 PM: Security hardening, error tracking, deployment prep

11:00 PM: Backend done. Six phases in ~2 hours.

11:42 PM: Brought in Claude Code for frontend integration

1:50 AM: Frontend wired to backend, full stack running locally

Next morning: Deployed backend to Laravel Cloud, frontend to Vercel

Push-to-deploy on both platforms. The team could pull up the app on their phones by the time church was over.

What I'd Tell Other Developers

Use AI at every layer, but different AI for each layer. Browser-based Claude is great for planning and strategy. Claude Code is great for execution. Multi-agent orchestration is great for parallel problem-solving. Don't try to do everything in one tool.

Pick a framework with strong conventions. Laravel's opinionated structure made it trivially easy for Claude Code to generate correct, idiomatic code. The more predictable your framework, the more effective AI-assisted development becomes.

Invest in documentation before you start building. The single biggest accelerator wasn't the AI itself, it was the quality of the input. Thorough product docs meant the build plan was right on the first try. No back-and-forth. No "actually, I meant this."

Let the agents scope themselves. When I ran 18 agents simultaneously, Claude Code broke the work into batches on its own, each agent handling a specific set of files and error types. They didn't overlap. I didn't have to manually assign work, I just pointed it at the problem and it parallelized the fix.

Don't be afraid of ambitious timelines. A complete backend rebuild in one night sounds absurd. But with the right tools, the right framework, and solid documentation, it's just a series of well-scoped tasks executed in parallel. The ceiling on what one developer can ship in a session has fundamentally changed.

I'm Tessa Kriesel, CEO of Built for Devs and part-time CTO at a startup. By day I help developer tools with product adoption and go-to-market. By night, apparently, I rebuild backends with 18 AI agents running at once. Find me online if you want to talk developer products or multi-agent orchestration.


Replies

Marius Pantea

Did you include filters and aggregations for your CRUD in this time?

Tessa Kriesel

@marius_ciclistu yep! Documentation around needs was the key win here honestly. It knew everything that needed to be done and it efficiently ran because of that documentation.

Marius Pantea

@tessak22 It hardcoded them for each resource or it did a general reusable solution?

fmerian

This is such an inspiring story, @tessak22. Thank you for posting it here - we need more.

Above all, I really like how you leverage @Claude by Anthropic's products. How about the AI model(s)? Did you rely on Opus, or did you experiment with multiple models?

Tessa Kriesel

@fmerian Opus 4.6. I did not experiment with different models. Gosh, that makes me want to re-do the whole thing in a different model and see where it lands in comparison.

fmerian

@tessak22 haha love it - please keep us posted

Ezra Matrix

5 hours for a sports analytics backend is a massive velocity win, but I’ve seen this movie before. Rebuilding with 18 parallel agents is a high-stakes gamble on 'Agentic Drift' where speed hides architectural hallucinations that only surface under real load.

In my clinical audit of 800+ pages of Python logic for the Ezra Matrix, I found that AI tends to 'vibe' the architecture once it hits the 100-page mark. If the data integrity hasn't been manually stress-tested, you aren't saving time; you're just deferring a total rebuild when the logic starts 'Quiet Cracking' at 10k users.

Did you actually audit the cross-file logic, or are you trusting the orchestration to have caught the subtle overlaps?

Tessa Kriesel

@ezra_matrix thanks for your comment. I hear you and all the skeptics. The Python backend was only part of the context given. An entire detailed build plan was the root of this success.