Anthropic Alleges Industrial-Scale “Distillation Attacks” by DeepSeek, Moonshot and MiniMax
On Feb 24, 2026, Anthropic said it uncovered industrial-scale efforts by Chinese labs (@DeepSeek, Moonshot and @MiniMax-M2.5) to extract capabilities from its Claude models via “distillation.”
According to Anthropic, the campaigns involved:
16M+ exchanges
~24,000 fraudulent accounts
Proxy networks to bypass regional restrictions
Targeted extraction of reasoning, coding and tool-use abilities
Distillation is a common technique for training a smaller model to imitate a larger one’s outputs. The controversy here is the use of a competitor’s model outputs at scale, without permission.
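To make the technique concrete, here is a minimal sketch of the core idea behind distillation: the student model is trained to match the teacher’s softened output distribution, typically via a KL-divergence loss. This is a generic illustration in plain Python (not any lab’s actual pipeline); the function names and the temperature value are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution, softened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Zero when the student exactly matches the teacher; during training,
    the student's weights are updated to minimize this quantity.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

A student that already matches the teacher incurs zero loss (`distillation_loss([1, 2, 3], [1, 2, 3])` is 0), while a mismatched student incurs a positive loss. In the alleged campaigns, the “teacher outputs” would be Claude’s responses harvested through the API rather than logits from an in-house model.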
What Anthropic Claims
DeepSeek (~150k exchanges): Extracted step-by-step reasoning and censorship-safe reformulations.
Moonshot (~3.4M exchanges): Targeted agentic reasoning, coding, tool use, and vision.
MiniMax (~13M exchanges): Focused on agentic coding and orchestration; allegedly pivoted quickly after a new Claude release.
Anthropic says the activity showed identifiable patterns: repetitive, capability-focused prompts distributed across coordinated accounts (“hydra cluster” proxy networks).
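Anthropic hasn’t published its detection methods, but the pattern it describes, near-duplicate capability probes spread across many accounts, is the kind of signal a simple similarity heuristic can surface. The sketch below is purely hypothetical: `flag_coordinated`, the Jaccard measure, and the threshold are my own illustrative choices, not Anthropic’s system.

```python
def token_set(prompt):
    """Reduce a prompt to its set of lowercase tokens."""
    return set(prompt.lower().split())

def jaccard(a, b):
    """Jaccard similarity between two token sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated(accounts, threshold=0.6):
    """Flag account pairs whose prompts look suspiciously similar.

    accounts: dict mapping account_id -> list of prompt strings.
    Returns pairs whose average cross-account prompt similarity
    meets the threshold -- a crude proxy for coordinated probing.
    """
    flagged = []
    ids = sorted(accounts)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            sims = [jaccard(token_set(p), token_set(q))
                    for p in accounts[a] for q in accounts[b]]
            if sims and sum(sims) / len(sims) >= threshold:
                flagged.append((a, b))
    return flagged
```

Two accounts sending near-identical “explain step by step…” probes would be paired up, while an unrelated account would not. A production system would obviously need embeddings, rate signals, and proxy fingerprinting rather than bag-of-words overlap.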
Why It Matters
Anthropic frames this as both a competitive and national security issue, arguing that distilled models may lack safety safeguards and could undermine export controls if capabilities spread.
The company says it’s deploying new detection systems, strengthening account verification, and sharing intelligence with other labs.
No public response from the accused labs yet.
Question for the Product Hunt community: Where’s the line between aggressive benchmarking and illicit distillation?
@Migma AI tweeted:
Train on the entire internet = progress.
Train on one model = espionage.
What do you think? Share your thoughts in the comments.

Replies
@Migma AI's tweet had me laughing xD
NGL, I am not surprised this happened (esp. for Deepseek) :D
It's a whole new world of stealing information. I'll say that as a business owner, I would aggressively fight against a company trying to steal my data. That said, didn't Anthropic get their data through some questionable methods (scraping Reddit for example)? I know they settled a suit last year with content owners.