AI firm Anthropic has alleged that three multi-billion-dollar Chinese AI companies illicitly used its Claude model in large-scale “distillation” attacks. The firm stated that DeepSeek, Moonshot, and MiniMax generated over 16 million exchanges via fraudulent accounts to copy capabilities like coding and reasoning. Anthropic framed the activity as both an intellectual property violation and a potential geopolitical risk.
AI safety company Anthropic publicly accused three rival firms of conducting illicit distillation campaigns against its Claude large language model. The alleged attacks involved harvesting Claude's outputs as training data to build competing models more cheaply and quickly.
The accused firms are DeepSeek, Moonshot, and MiniMax, all based in China and each valued in the multi-billion-dollar range. Anthropic said the three collectively used approximately 24,000 fraudulent accounts to generate over 16 million exchanges.
Anthropic described distillation as a legitimate training method but condemned its use by competitors for capability theft. “But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost,” the firm wrote.
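Anthropic's post does not spell out the mechanics, but distillation in this sense is a well-known technique: a smaller "student" model is trained to imitate a larger "teacher" by matching the teacher's output distributions rather than learning from raw data. A minimal sketch of the standard temperature-scaled loss (the names and numbers here are illustrative, not taken from any of the companies involved):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize to probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between teacher and student output distributions.
    # Minimizing this trains the student to reproduce the teacher's
    # behavior without access to its weights or training data.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Illustrative logits: a student that matches the teacher exactly
# incurs zero loss; a mismatched student incurs a positive loss.
teacher = [3.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [3.0, 1.0, 0.2])
mismatch = distillation_loss(teacher, [0.2, 1.0, 3.0])
```

Because the loss needs only the teacher's outputs, a competitor can run it against responses collected through an API, which is why Anthropic frames large-scale querying itself as the theft vector.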
The campaigns specifically targeted Claude's advanced capabilities, including agentic reasoning, tool use, and coding. Anthropic said it identified the activity through IP address correlation, request metadata, and infrastructure indicators.
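Anthropic has not published its detection methods. As a toy illustration of what "IP address correlation" could mean in practice, one simple signal is many distinct accounts issuing requests from the same source address (all account IDs and addresses below are invented for the example):

```python
from collections import defaultdict

# Hypothetical request log: (account_id, source_ip) pairs.
requests = [
    ("acct_001", "203.0.113.7"),
    ("acct_002", "203.0.113.7"),
    ("acct_003", "198.51.100.4"),
    ("acct_004", "203.0.113.7"),
]

# Group accounts by source IP; a large cluster of accounts behind
# one address is a crude indicator of coordinated account creation.
accounts_by_ip = defaultdict(set)
for account, ip in requests:
    accounts_by_ip[ip].add(account)

suspicious = {ip: accts for ip, accts in accounts_by_ip.items()
              if len(accts) >= 3}
```

Real systems would combine many such weak signals (request timing, shared payment or device fingerprints, prompt patterns) rather than rely on any one, which is consistent with Anthropic also citing request metadata and infrastructure indicators.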
Beyond intellectual property concerns, Anthropic highlighted broader national security implications. The firm argued such distillation could allow foreign governments to deploy frontier AI for offensive cyber operations, disinformation, and mass surveillance.
In response, Anthropic plans to enhance detection systems, share threat intelligence, and tighten access controls. The company called for a coordinated industry and policy response, stating “No company can solve this alone.”