Anthropic Accuses DeepSeek, Moonshot AI, and MiniMax of Industrial-Scale Model Theft Via 24,000 Fake Accounts
Anthropic publicly named three Chinese AI labs — DeepSeek, Moonshot AI, and MiniMax — for allegedly creating over 24,000 fraudulent accounts to systematically extract Claude's capabilities through 16 million conversations. It is the most explicit accusation of model distillation one major lab has ever leveled against another.
Anthropic went public Monday with what it called "industrial-scale distillation attacks" on Claude, directly naming DeepSeek, Moonshot AI, and MiniMax as the perpetrators. According to @AnthropicAI, the three Chinese AI labs created more than 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, systematically extracting its capabilities to train and improve their own models. The accusation is unprecedented in its specificity and directness — this is not a vague blog post about "model-on-model" risk but a named indictment with numbers.
Distillation — the practice of using a frontier model's outputs to train a cheaper, smaller model — has been an open secret in the AI industry for years. Researchers have long discussed how API access to leading models creates a vector for knowledge extraction. But the scale Anthropic describes here goes far beyond the typical gray-area fine-tuning. Twenty-four thousand fake accounts represent a coordinated operation, not a rogue researcher running a script overnight. Sixteen million exchanges suggest months of sustained, automated querying designed to map Claude's reasoning patterns across an enormous surface area of tasks.