If you’ve used a coding agent, you know the pain. You’re deep into a feature and everything is clicking. Then the context window fills up, it compacts, and key details are lost.
Mastra Code is different. Powered by our state-of-the-art Observational Memory, it watches, reflects, and compresses context without losing important details.
The result: long-running coding sessions that stay precise, letting you build faster, merge sooner, and ship more.
From the team behind Gatsby, Mastra is a framework for building AI-powered apps and agents with workflows, memory, streaming, evals, tracing, and Studio, an interactive UI for development and testing. Start building:
npm create mastra@latest
Observational Memory is a SoTA memory system for AI agents, scoring 95% on LongMemEval, the highest score ever recorded. It works like human memory: two background agents act as your agent's subconscious, one observing and compressing conversations, the other reflecting on and reorganizing long-term memory. It extracts what matters and lets the rest fade, just like you do. It's available in Mastra today, with adapters for LangChain, the Vercel AI SDK, OpenCode, and others coming soon.
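To make the observe/reflect idea concrete, here is a deliberately simplified TypeScript sketch. It is not Mastra's actual API; the types, functions, and the length-based "importance" heuristic are all illustrative assumptions, standing in for the real observer and reflector agents.

```typescript
// Illustrative sketch (NOT Mastra's API): two background passes that keep
// an agent's context compact without losing the details that matter.

type Message = { role: "user" | "assistant"; text: string };
type Observation = { summary: string; detail: string };

// Observer: watches the raw conversation and compresses it into
// observations, letting low-signal filler fade. A real observer agent
// would use an LLM judgment here; this uses a crude length heuristic.
function observe(messages: Message[]): Observation[] {
  return messages
    .filter((m) => m.text.length > 20) // stand-in for "is this important?"
    .map((m) => ({ summary: m.text.slice(0, 40), detail: m.text }));
}

// Reflector: periodically reorganizes long-term memory, merging duplicate
// observations so the store stays small over long-running sessions.
function reflect(memory: Observation[]): Observation[] {
  const merged = new Map<string, Observation>();
  for (const obs of memory) merged.set(obs.summary, obs);
  return [...merged.values()];
}

const transcript: Message[] = [
  { role: "user", text: "Use pnpm, not npm, for this monorepo project." },
  { role: "assistant", text: "ok" }, // low-signal, fades away
  { role: "user", text: "Use pnpm, not npm, for this monorepo project." },
];

const memory = reflect(observe(transcript));
console.log(memory.length); // three messages compress to one observation
```

The point of the two-pass split is that neither pass sits on the critical path: both run in the background, so the main agent keeps a small, precise context instead of hitting the compaction cliff.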
Mastra (YC W25) is launching on Product Hunt today, releasing stable 1.0 for building AI apps and agents, and announcing 300,000+ weekly npm downloads, 19,400+ GitHub stars, and production use at companies like @Replit and @WorkOS.
I had a blast working on this launch with @calcsam @smthomas3 @bookercodes and team, and wanted to share some of the thinking we put into the launch ops.