
OpenClaw × MiniMax Agent × M2.5, now fully unlocked. No deployment. No extra API fees. Runs 24/7 across Telegram / WhatsApp / Slack / Discord. Ready-made MiniMax Expert ecosystem. Upgraded built-in tools for real work.
This is the 2nd launch from MaxClaw by MiniMax.

MiniMax-M2.7
Launching today
MiniMax M2.7 is a self-evolving AI model that helped build its own capabilities. It can create agent harnesses, collaborate via Agent Teams, and handle complex tasks like coding, debugging, and research. With strong SWE-Pro performance and reduced intervention time, it moves beyond static AI into systems that continuously learn, adapt, and execute complex work with minimal human input. Available via API and MiniMax Agent for builders pushing AI-native workflows.

Self-evolving AI is the right direction for any prediction system where the underlying distribution changes continuously. Our football analytics model faces exactly this — features that predicted match outcomes well last season (possession stats, pressing intensity) need reweighting as teams adapt tactically. A static model doesn't flag when its feature importance has drifted, so you only discover the problem in retrospect.
The 'analyze failures, modify setup, re-run' loop you describe is essentially formalizing what good data scientists do manually between seasons. The self-feedback mechanism is what's interesting — the system needs to know not just that it failed, but why it failed in a way that suggests a structural fix vs a data quality issue.
The hard tradeoff in real-time prediction contexts: how does M2.7 balance exploration (trying new configurations) vs exploitation (keeping outputs stable while a process is live)? In a sports context, you can't be A/B testing model architectures mid-match. Curious if the self-evolution loop has a 'freeze' mode for production stability.
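To make the question concrete, here's roughly what I mean by a freeze mode, in Python. Everything here (MATCH_IS_LIVE, EXPLORE_RATE, the config names) is invented for illustration; it's not anything M2.7 actually exposes:

```python
import random

# Hypothetical freeze-gated exploration: only try new configurations
# when no live process depends on stable output. All names are made up.

MATCH_IS_LIVE = True   # e.g. flipped by a match-day scheduler
EXPLORE_RATE = 0.2     # fraction of off-peak runs that try a new setup

def pick_config(current_best, candidate_configs):
    """Exploit the known-good config while frozen; explore otherwise."""
    if MATCH_IS_LIVE:
        return current_best                      # frozen: stability first
    if random.random() < EXPLORE_RATE:
        return random.choice(candidate_configs)  # explore a new harness
    return current_best                          # mostly exploit off-peak too

print(pick_config("stable-v12", ["exp-a", "exp-b"]))  # live, so "stable-v12"
```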
MiniMax M2.7 is an AI agent model pushing toward self-evolving systems: not just assisting with work, but actively improving how it works.
Current AI still needs heavy human orchestration across research, engineering, and workflows. M2.7 builds and optimizes its own agent harness, using memory, self-feedback, and iterative loops to improve performance over time.
What’s different is the self-evolution loop — it can analyze failures, modify its own setup, and re-run experiments autonomously. That’s a big shift from static models.
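To make that concrete, here's a toy, self-contained Python sketch of the analyze → modify → re-run loop. Every name in it (Harness, run_task, diagnose) is invented for illustration; this is not MiniMax's implementation:

```python
from dataclasses import dataclass, field

# Toy analyze -> modify -> re-run loop. The "task" succeeds only once
# the harness allows 3 retries, so the loop has something to fix.

@dataclass
class Harness:
    retries: int = 1
    notes: list = field(default_factory=list)  # stands in for "memory"

def run_task(harness):
    return harness.retries >= 3  # pretend success needs >= 3 retries

def diagnose(harness):
    # Self-feedback: record *why* the run failed before changing setup.
    harness.notes.append(f"failed with retries={harness.retries}")

def self_evolve(max_rounds=5):
    harness = Harness()
    for _ in range(max_rounds):
        if run_task(harness):     # re-run the experiment
            return harness        # success: loop terminates
        diagnose(harness)         # analyze the failure
        harness.retries += 1      # modify its own setup
    return harness

print(self_evolve().notes)  # two recorded failures, then success
```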
Key features:
Agent Teams for multi-agent collaboration
Complex skill execution with high adherence
Strong performance across software engineering + office workflows
End-to-end project delivery + real-world debugging
Benefits: Faster experimentation, reduced manual effort, and AI that acts more like a junior researcher/operator than just a tool.
Great for developers, researchers, and teams building AI-native workflows or automating complex tasks.
How far do you think self-evolving agents can go before humans are only setting goals and everything else runs autonomously?
I hunt the latest and greatest launches in tech, SaaS, and AI. Follow to be notified → @rohanrecommends
@MiniMax is cooking. They launched M2.5 last month with SOTA coding performance (SWE-Bench Verified 80.2%), and they're pushing it forward (again) with M2.7, with an 88% win-rate vs M2.5.
Mind-blowing.
Oh and pro tip: you can give it a spin for free in @Kilo Code and @KiloClaw ✌️
The long-term memory feature is what makes this interesting to me. Most AI agents today are essentially stateless – you start fresh every session and lose all the context you've built up. An agent that actually remembers your preferences and past tasks over weeks could be a real productivity unlock.
How does the memory work in practice? Is there a way to review or edit what the agent has stored about you, or is it a black box? Being able to curate that memory layer would make a big difference for trust, especially when connecting it to workplace tools.
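For illustration, here's the kind of reviewable memory layer I'd want: a plain JSON file the user can list, edit, and delete from. This is purely my Python mockup, not how MiniMax Agent actually stores memory:

```python
import json
from pathlib import Path

# Mock user-auditable memory store. Nothing here is a MiniMax API.
MEMORY_FILE = Path("agent_memory.json")

def load():
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def remember(key, value):
    mem = load()
    mem[key] = value
    MEMORY_FILE.write_text(json.dumps(mem, indent=2))

def review():
    for key, value in load().items():  # user can audit every entry
        print(f"{key}: {value}")

def forget(key):
    mem = load()
    mem.pop(key, None)                 # user-curated deletion
    MEMORY_FILE.write_text(json.dumps(mem, indent=2))

remember("preferred_language", "TypeScript")
review()  # preferred_language: TypeScript
```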
This direction feels inevitable.
Once an agent starts improving its own workflows, it stops being just a tool and becomes more like a system you’re managing.
The part I keep thinking about is control.
If the system keeps evolving its own setup, how do you keep things predictable in production?
Especially for real workflows, stability often matters more than raw capability.
Self-evolving agents sound great, until they do the wrong thing at scale.
How do you control that?
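One pattern that would help, sketched below in Python: fingerprint the reviewed harness config and refuse to run anything that has drifted from it. Entirely hypothetical, not a MiniMax feature:

```python
import hashlib
import json

# Hypothetical production guard: evolution happens offline, and prod
# only runs a config whose hash matches the one approved at review.

def fingerprint(config: dict) -> str:
    blob = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

APPROVED = fingerprint({"model": "m2.7", "retries": 3})  # pinned at review

def run_in_production(config: dict):
    if fingerprint(config) != APPROVED:
        raise RuntimeError("config drifted from the reviewed version")
    print("running pinned config")

run_in_production({"model": "m2.7", "retries": 3})
```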
I've been using MiniMax 2.5 in my product and the bar is really high already - can't wait to try 2.7