Launching today
Assemble
One /go command for AI work that remembers — zero runtime
55 followers
Assemble is an open-source configuration generator for AI work: /go, memory, spec-driven workflows, and zero runtime across 21 platforms.
Assemble
Hey Product Hunt,
I’m Rénald, founder of Cohesium AI.
I built Assemble because I was tired of AI tools that sound helpful but stay generic. A code review becomes a polite summary. A security audit becomes a reformatted checklist. A multi-step project starts strong, then falls apart as soon as context gets longer or the work gets more complex.
So I built what I actually needed: a structured AI work system, not just another assistant.
With Assemble, you type /go and describe what you need. From there, it routes the task by difficulty, keeps useful cross-session memory, and switches into a spec-driven workflow when the work is complex. For larger deliverables, it can even move execution onto a board with review and test stages.
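To make the routing idea concrete, here's a rough sketch in Python. This is my own illustration, not Assemble's internals: the actual routing lives in generated configuration files, and the function name, signals, and thresholds here are invented.

```python
# Illustrative sketch only: Assemble's real routing is declared in
# generated config files, not runtime code. All names and thresholds
# below are invented for illustration.

def route_task(description: str) -> str:
    """Pick a workflow tier from a rough complexity signal."""
    signals = ("migrate", "refactor", "multi-step", "audit", "architecture")
    text = description.lower()
    complex_hits = sum(1 for s in signals if s in text)
    if complex_hits >= 2 or len(text.split()) > 60:
        return "board"        # delivery board with review and test stages
    if complex_hits == 1:
        return "spec-driven"  # draft a spec before executing
    return "direct"          # simple one-shot task

print(route_task("fix typo in README"))                       # direct
print(route_task("audit the auth flow"))                      # spec-driven
print(route_task("multi-step migrate and refactor billing"))  # board
```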
What makes it different from most agent frameworks:
• it’s a configuration generator, not a runtime
• zero daemon, zero SDK, zero dependencies, zero lock-in
• native configs for 21 platforms including Cursor, Claude Code, Codex, Gemini CLI, Copilot, and Windsurf
• it works beyond coding too: docs, contracts, proposals, email, and client operations
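Because the output is just a config file the target platform reads natively, there's nothing to install or keep running. As a purely hypothetical sketch (the filename and structure are my illustration, not Assemble's actual format), a generated instruction file might look something like:

```markdown
<!-- Hypothetical example of a generated config file; the real
     filename and structure Assemble emits may differ. -->
# /go — task entry point

1. Classify the request: simple, complex, or delivery.
2. Load persistent memory from Markdown files before answering.
3. For complex tasks, draft a spec first and wait for approval.
4. For delivery tasks, track work on a board with review and test stages.
```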
The Marvel framework isn’t branding — it’s a prompt-engineering choice. In testing, it gave us stronger role identity, better consistency, and less generic output than traditional agent setups.
And because LLMs naturally agree too easily, Assemble bakes in structural dissent: Deadpool challenges assumptions by default, and Doctor Doom escalates high-stakes decisions.
A real turning point for me: a client project that was supposed to take 2 days turned into 10 days of failed attempts with generic AI tools. With Assemble, it took 30 minutes.
If you try it, I’d genuinely love your feedback — especially on the workflows, platforms, and specialist roles you’d want next.
MIT licensed. Open source. Built for real work.
Congrats on the launch! This looks really impressive. I'm curious about the memory component - when you say it "remembers," does that persist across different AI platforms automatically, or do users need to configure how context flows between integrations? Also, how does the zero runtime constraint work with platforms that have inherent latency?
Assemble
@osakasaul Thanks, appreciate it.
For memory, Assemble keeps things simple: it uses Markdown files (.md) as the persistence layer. So yes, memory can persist across platforms and LLMs, because it’s not tied to any provider’s hidden internal state.
That continuity comes from portable, readable files rather than model-specific memory.
On the zero runtime side, Assemble doesn’t add a daemon or always-on orchestration layer between the user and the target platform. The logic lives in the config and files, so execution still happens natively in the tool you use — which means latency stays the platform’s latency.
If helpful, I can also explain how we separate persistent memory from session context.
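For anyone curious what file-based memory can look like in practice, here's a minimal sketch. It's my own illustration of the general pattern, not Assemble's actual code; the `memory.md` filename and helper names are assumptions.

```python
# Minimal illustration of Markdown-as-memory: persist context as a
# plain .md file that any platform or LLM can read. Not Assemble's
# actual code; filename and helpers are hypothetical.
from datetime import date
from pathlib import Path

MEMORY = Path("memory.md")  # hypothetical memory file

def remember(note: str) -> None:
    """Append a dated bullet to the memory file."""
    with MEMORY.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()}: {note}\n")

def recall() -> list[str]:
    """Read all remembered notes back, oldest first."""
    if not MEMORY.exists():
        return []
    lines = MEMORY.read_text(encoding="utf-8").splitlines()
    return [line[2:].strip() for line in lines if line.startswith("- ")]

remember("client prefers invoices in EUR")
print(recall())
```

Because the store is readable Markdown, the same file works across Cursor, Claude Code, or any other tool, with no provider-specific memory API involved.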