TL;DR: Anthropic refused to sign a contract with the Pentagon that would have allowed the U.S. military to use all of its models without restriction. Anthropic insisted on an exception, and brace yourself: its models cannot be used 1) for mass surveillance of citizens, or 2) for autonomous killing. Now the administration is threatening that if Anthropic's founder doesn't change his mind by a certain date, they will come after him.
Google, OpenAI, and Musk (Grok) have all signed the contract.
Following Sam Altman's announcement a few hours ago, people have been speaking out in large numbers about cancelling their OpenAI subscriptions and subscribing to Claude.
We started using Claude at the agency for client briefs and first-draft copy. The multi-agent review is a smart addition: AI-generated code ships faster than anyone can review manually, so having agents check each other makes sense. Curious about the false-positive rate. That's usually where automated review tools lose the team's trust.
Built my entire SaaS with Claude Code, so this is relevant to me. The biggest challenge with AI-generated code isn't writing it, it's trusting it at scale. When you're integrating multiple ML models and wiring up payment flows, a missed edge case can cost you. Excited to see multi-agent review applied to this problem.
Documentation.AI
Seems like Claude killed a lot of code review products from YC. They may have to pivot.
Who has a Team or Enterprise subscription?
Huge launch, the multi-agent approach for PR reviews makes a lot of sense. Catching logic bugs, security issues, and subtle AI-generated code mistakes before production is exactly where teams need help.
Coincidentally, today I launched something related as well: Blocfeed.
While tools like Claude Code analyze the code itself, Blocfeed focuses on what happens after software reaches real users. Bugs often appear only on specific systems or in edge cases, where everything works fine on the developer's machine.
Blocfeed aggregates user feedback and reports to surface:
Bugs that only occur in certain environments
Issues that slip past internal testing
Patterns in what users are complaining about
Feature requests users repeatedly ask for
I can imagine a strong synergy here:
Claude Code → prevents bugs before merge
Blocfeed → detects real-world issues and user needs after release
Congrats on the launch, excited to see where this multi-agent review direction goes. 🚀
Copus
Multi-agent code review is a great concept. Having different agents specialized for different types of issues — security, performance, logic errors — should catch things that a single-pass review would miss. Really like the approach of catching bugs early in AI-generated code specifically, since that is becoming the default way people write code now.
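The idea above — separate agents specialized for security, performance, and logic issues, each scanning the same change — can be sketched roughly like this. This is a minimal illustration with hypothetical names; the toy string heuristics stand in for what would really be per-role LLM calls, and only the fan-out/merge structure is the point:

```python
# Sketch of multi-agent review: each "agent" is a specialized checker
# run over the same diff, and findings are merged. All names here are
# hypothetical; the heuristics are placeholders for LLM-backed reviewers.

def security_agent(diff: str) -> list[str]:
    """Flags obvious hardcoded credentials (toy heuristic)."""
    return [f"security: possible hardcoded credential on line {i}"
            for i, line in enumerate(diff.splitlines(), 1)
            if "password=" in line or "api_key=" in line]

def logic_agent(diff: str) -> list[str]:
    """Flags a common index-loop smell (toy heuristic)."""
    return [f"logic: suspicious 'range(len(' loop on line {i}"
            for i, line in enumerate(diff.splitlines(), 1)
            if "range(len(" in line]

def review(diff: str) -> list[str]:
    """Fan the same diff out to every specialized agent, merge findings."""
    findings = []
    for agent in (security_agent, logic_agent):
        findings.extend(agent(diff))
    return findings

diff = 'api_key="abc123"\nfor i in range(len(items)):'
for finding in review(diff):
    print(finding)
```

A single-pass reviewer has to juggle every concern at once; splitting the roles lets each agent apply a narrower, deeper checklist, which is presumably why the specialized approach catches issues a single pass would miss.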
So we have AI writing the code, and now a team of AI agents reviewing the code. Are we humans just here to pay the AWS server bills now? Haha. Brilliant launch!