Alexander Tibbets

Mngr - Run 100s of Claude agents in parallel

Mngr is a CLI tool for programmatically spinning up coding agents at any scale. It lets you compose workflows—fix all my tests, open PRs for every issue, validate every use case—and run them repeatedly. Run 1. Run 100s. See all your agents at a glance, and whether any are blocked on you. Connect to any agent mid-task to ask a question or debug it. Agents start in under 2 seconds and shut down when idle. The same commands work with any agent harness, running locally, on Modal, or in Docker. Free and open-source.

Replies
Kanjun Qiu

Kanjun here, one of the founders of @Imbue (the team behind mngr).

Internally, we run 100s of parallel Claude Code sessions all doing useful work. It's been wild — we just say "for each flaky test in the past week, fix it" or "for each Linear ticket, create a PR".

mngr is the CLI tool that makes it possible, and we're open sourcing it today because we believe that open agents must win over closed platforms for humans to live freely in our AI future.

Hope you give it a spin, find it useful, and star it on GitHub if you like it!

Kiyoshi Nagahama

Just ran 6 parallel Claude agents today for a launch strategy analysis.

The bottleneck is always orchestration, not the model. Curious how you handle context sharing between agents.

Qi Xiao

@cyberseeds It's up to you how you want to solve it! Mngr is not "one orchestration framework for everything", but it gives you simple but powerful primitives that make this really easy:

  • There's an event stream mechanism, so you can let an agent put stuff on the event stream and let another agent monitor it.

  • You can transfer files with `mngr pull`, `mngr push` and `mngr file`.

  • You can message an agent with `mngr message`. You can even let one agent message another (it's a lot of fun watching them talk to each other).
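Putting those primitives together, here is a rough sketch of two agents sharing context through file transfer and messaging. The subcommand names come from this thread; the agent names, prompt text, and exact argument shapes are assumptions — check `mngr --help` for the real flags.

```shell
# Sketch only: subcommands are from the thread; exact arguments
# are assumptions, not the documented interface.

# Start two agents for two halves of a task.
mngr create researcher
mngr create writer

# Pull the first agent's output files to your machine.
mngr pull researcher

# Nudge the second agent to build on the first one's findings.
mngr message writer "The research notes are ready; draft the summary from them."
```

The point is that context sharing is composed from small pieces (pull, push, message, events) rather than baked into one orchestration framework.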

Kiyoshi Nagahama

@qi_imbue That event stream mechanism is clever — decoupled communication is the right pattern for this. Today I had agents analyzing SEO, ads, LP conversion, SNS strategy, product readiness, and revenue projections all in parallel. The pain point was exactly what you described: getting Agent B to build on Agent A's findings without re-running the whole context.

Will give Mngr a spin. Thanks for the detailed breakdown.

Ivo Tzanev

Running agents at scale is the easy part to imagine. The harder problem is state coherence across the swarm — what happens when agent 47 and agent 12 reach conflicting conclusions about the same codebase and neither is obviously wrong.

Does Mngr expose any shared state or consensus layer, or is resolution left entirely to the orchestrating workflow?

Qi Xiao

@ivaylotz I've actually run into exactly this problem, and it's easy to resolve with mngr primitives:

  • You can pull the Git branches of multiple agents into one place with `mngr pull`. (If you're running agents locally, you can skip this entirely because mngr uses Git worktrees by default for local agents.)

  • Then just `mngr create` another agent, asking it to resolve conflicts from these two branches

The interesting property of `mngr` is that it's agnostic about what you're doing with your agents - they don't have to be coding agents at all - but it gives you enough primitives to trivially build your multi-agent workflow. I believe having general primitives is better than having specialized workflows - the latter will be obsolete when the next model comes out, but the former will not!

Ivo Tzanev

@qi_imbue The "primitives, not pre-packaged workflows" framing is the right call long-term. Specialized orchestration frameworks get brittle when the next model drops — the abstraction layer becomes the maintenance burden.

One question: as teams accumulate mngr workflows over time, is there any mechanism for sharing patterns across the org, or does every team rebuild from primitives independently? Feels like the compounding value of a shared workflow library would be significant once you hit 50+ agents regularly.

Qi Xiao

@ivaylotz 

> Specialized orchestration frameworks get brittle when the next model drops — the abstraction layer becomes the maintenance burden.

That's exactly why we thought it'd be smart to build a library of primitives that work well together, rather than a specialized multi-agent orchestration framework! Frameworks get outdated quickly but libraries can adapt.

> One question: as teams accumulate mngr workflows over time, is there any mechanism for sharing patterns across the org, or does every team rebuild from primitives independently? Feels like the compounding value of a shared workflow library would be significant once you hit 50+ agents regularly.

Of course! Again, just think of mngr as a library of primitives - you can build higher-level libraries based on it, either using the Python API or the CLI.

We've actually been doing this ourselves as we develop mngr. If you'd like a case study, we've published one on how we built a multi-agent testing workflow using mngr's primitives: https://imbue.com/product/mngr_part_2/. There will be more!
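One way a shared workflow library can start is as plain wrapper scripts over the CLI primitives, which teams can check into a repo and reuse. A hedged sketch of the "fix every flaky test" workflow mentioned in the launch post — the file name, agent names, and prompt text are all invented for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical team-level wrapper built on mngr's CLI primitives.
# flaky_tests.txt (one test name per line) and the prompt wording
# are assumptions, not part of mngr.

while read -r test_name; do
  # One agent per flaky test, named after the test it owns.
  mngr create "fix-${test_name}"
  mngr message "fix-${test_name}" \
    "The test ${test_name} is flaky; find the root cause and open a PR with a fix."
done < flaky_tests.txt
```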

Mykola Kondratiuk

Running parallel agents at this scale is genuinely interesting. The hard part I keep running into isn't starting agents - it's knowing when they're done, stuck, or drifted from intent. How does Mngr handle that? Is there any visibility into what's actually happening across the swarm, or is it more fire-and-forget?

Qi Xiao

@mykola_kondratiuk Mngr has built-in tracking for the basic lifecycle of an agent - running, waiting for your input, stopped, etc. If that suffices for you, there's nothing more you need to do. But the nice thing about `mngr` is that it's very flexible and scriptable, so you are free to build your own domain-specific tracking mechanism however you like! Some approaches I'm aware of are:

  • You can let the agents report their own state to a central place by, e.g., making HTTP requests - it's easy to control what credentials each agent has access to; just do something like `mngr create foo --env KEY=value`

  • You can let the agents write their outcome to a file, and then download the file using `mngr file` or `mngr pull`.

  • You can let them write to mngr's event stream and watch for those events using `mngr event`.

  • You can also just tell the agent itself to message another agent using `mngr message`. This is trivially easy for local agents, although I haven't tried it for remote agents.

Mngr gives you primitives, not pre-packaged workflows. Just build whatever workflow you want!
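A minimal sketch combining two of the approaches above — scoped credentials via `--env` (which appears in this thread) plus watching the event stream with `mngr event`. The variable name, token value, and any behavior beyond "watch for events" are assumptions:

```shell
# Sketch: give an agent only the credential it needs to self-report
# status to your own tracking endpoint (REPORT_TOKEN is illustrative).
mngr create worker --env REPORT_TOKEN=abc123

# Separately, watch mngr's event stream for events the agents emit,
# e.g. done/stuck markers you ask them to write.
mngr event
```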

Mykola Kondratiuk

The scriptable approach makes sense. Basic lifecycle tracking out of the box covers most needs, and having the escape hatch to build domain-specific stuff is exactly how I want tools to work. Way better than being locked into someone's opinionated agent model that doesn't match how your team actually runs things.

Piotr Kusiak

Does it have any limit management - i.e. maximizing AI subscription limits?

Josh Albrecht

@k_piotr it doesn't have anything built-in right now, but it's on the roadmap!

Piotr Kusiak

@josh_albrecht thanks for the clarification. Congrats on your launch anyway - will give it a try

Chintan

this is useful! can I set token / dollar / time budgets?