Anthropic has accused three Chinese AI labs of extracting millions of responses from its Claude chatbot to train competing systems, a move the company claims violates its terms of service and weakens U.S. export controls. In a blog post published Monday, Anthropic said it identified “industrial-scale campaigns” by AI developers DeepSeek, Moonshot, and MiniMax to extract Claude’s capabilities through model distillation.
The company alleged the labs generated more than 16 million exchanges using roughly 24,000 fraudulent accounts. Anthropic’s announcement drew skepticism and mockery on X, where critics questioned its stance given how major AI models, including Claude, are trained. “You trained on the open internet and then call it ‘distillation attacks’ when others learn from you,” wrote Tory Green, co-founder of AI infrastructure firm IO.Net.
In a separate X post, Anthropic said that foreign labs engaging in illicit distillation can strip out safeguards and feed the extracted capabilities into their own military and intelligence systems. The company said it is expanding detection, tightening account verification, and sharing intelligence with other labs and authorities to limit future attempts.
The controversy comes as Anthropic faces legal action over its own training practices. In June, Reddit sued the company, accusing it of scraping more than 100,000 posts and comments and using the data to fine-tune Claude. The lawsuit accused Anthropic of presenting "a public face of righteousness" while privately ignoring the rules it expects others to follow.

