DeepSeek, Moonshot AI, MiniMax Used 24K Fake Accounts in Campaign

Anthropic has accused three Chinese artificial intelligence companies of running coordinated, large-scale operations to steal capabilities from its Claude models. The U.S.-based company said DeepSeek, Moonshot AI and MiniMax are conducting “industrial-scale campaigns” executed through tens of thousands of fraudulent accounts.
The San Francisco-based firm said the three labs collectively generated over 16 million exchanges with Claude through approximately 24,000 fake accounts, violating its terms of service and circumventing regional access restrictions that bar commercial Claude access in China.
Anthropic joined its chief rival, OpenAI, in raising allegations of theft. OpenAI sent a memo to the U.S. House Select Committee on China about two weeks ago describing similar behavior, including what it characterized as “sophisticated, multi-stage pipelines” used by Chinese actors to mine frontier models, and networks of unauthorized service resellers used to bypass its own access controls.
The method at issue is called distillation, a widely used machine learning technique in which a smaller, less capable model is trained on the outputs of a larger, more powerful one, absorbing the stronger model’s learned behavior at a fraction of the development cost. While AI labs routinely distill their own models to produce cheaper, compact versions for customers, Anthropic says the three Chinese companies weaponized the technique to extract proprietary capabilities without authorization.
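In code, sequence-level distillation is straightforward. The sketch below is a minimal illustration, not any lab’s actual pipeline: a small open student model is fine-tuned on text returned by a stronger model’s API, with `query_teacher` standing in as a hypothetical placeholder for that API call.

```python
# Minimal sequence-level distillation sketch (illustrative only; not any
# lab's actual pipeline). The student is fine-tuned on text the teacher
# returns, using an ordinary language-modeling loss.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")      # small stand-in student
student = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)

def query_teacher(prompt: str) -> str:
    # Hypothetical placeholder for an API call to the stronger model;
    # returns a canned string so the sketch runs end to end.
    return " A worked answer from the stronger model would go here."

prompts = ["Explain how a B-tree stays balanced.", "Write a binary search."]

for prompt in prompts:
    completion = query_teacher(prompt)   # the teacher's output text
    # Standard causal-LM fine-tuning: the student learns to reproduce the
    # teacher's completions, absorbing its behavior without ever seeing
    # the teacher's weights.
    batch = tokenizer(prompt + completion, return_tensors="pt")
    loss = student(**batch, labels=batch["input_ids"]).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because only generated text is needed, no access to the teacher’s weights or logits is required, which is why API access alone is enough to extract capability.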
The accusations carry an irony that critics are unlikely to overlook. Anthropic has faced multiple lawsuits alleging copyright infringement and unauthorized web scraping in connection with training its own models. The cases include Concord Music Group v. Anthropic and Reddit v. Anthropic. Rival OpenAI is the defendant in about a dozen class action copyright lawsuits. Neither company has had those claims resolved in court. Both now argue that what Chinese labs are doing to them is a threat to national security.
MiniMax ran the largest operation of the three, generating over 13 million exchanges and targeting Claude’s agentic coding and tool orchestration abilities. These agentic capabilities refer to a model’s capacity to plan and execute multi-step tasks autonomously. Anthropic said it caught MiniMax’s campaign while it was still running, before the company launched the model it was training, giving Anthropic an unusually complete view of the attack’s life cycle. When Anthropic released a new Claude model mid-campaign, MiniMax redirected nearly half its traffic to the updated system within 24 hours.
Moonshot AI, the maker of the Kimi model series, conducted over 3.4 million exchanges across hundreds of fraudulent accounts using multiple access pathways to disguise the campaign’s coordinated nature. Anthropic said the request metadata matched the public profiles of senior Moonshot staff. In a later phase, Moonshot shifted to a more targeted approach, attempting to extract and reconstruct Claude’s reasoning traces directly.
DeepSeek’s operation, the smallest in volume at over 150,000 exchanges, drew attention for its methods. Its prompts instructed Claude to reconstruct the internal reasoning behind its own completed responses, a technique for generating chain-of-thought training data that teaches models to reason through problems step by step. Anthropic also found prompts asking Claude to reframe politically sensitive queries about dissidents, party leaders and authoritarianism into censorship-safe alternatives, likely to train DeepSeek’s own models to handle such topics without triggering restrictions. Anthropic said it traced the accounts to specific researchers at the lab through request metadata. OpenAI’s memo to Congress separately flagged DeepSeek, warning that its models lack protections against dangerous outputs in high-risk domains such as chemistry and biology.
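The rationale-reconstruction technique can be illustrated in a few lines. The sketch below is an assumption-laden mock-up, with `ask_model` as a hypothetical stand-in for an API call and the prompt wording invented for the example: given an existing question-and-answer pair, the model is asked to write out the intermediate reasoning, producing a chain-of-thought training triple.

```python
# Sketch of rationale-reconstruction data generation (illustrative only;
# the prompt wording and `ask_model` are assumptions, not details from
# Anthropic's report).
import json

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for an API call to a frontier model.
    return "Step 1: ... Step 2: ... Therefore, the answer follows."

def make_cot_example(question: str, answer: str) -> dict:
    prompt = (
        f"Question: {question}\n"
        f"Final answer: {answer}\n"
        "Reconstruct, step by step, the reasoning that leads from the "
        "question to this final answer."
    )
    rationale = ask_model(prompt)
    # This triple is the unit a student model would be fine-tuned on to
    # learn step-by-step reasoning.
    return {"question": question, "rationale": rationale, "answer": answer}

example = make_cot_example("What is 17 * 24?", "408")
print(json.dumps(example, indent=2))
```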
To overcome Claude’s access restrictions, the Chinese labs routed traffic through commercial proxy services running what Anthropic describes as “hydra cluster” architectures, which are distributed networks of fraudulent accounts spread across Anthropic’s API and third-party cloud platforms. One such network managed more than 20,000 fraudulent accounts simultaneously, blending distillation traffic with ordinary requests to evade detection. Because these networks have no single point of failure, banning one account simply shifts its traffic to another.
Beyond the commercial implications, Anthropic says the campaigns pose a national security risk. Models trained through illicit distillation, it says, are unlikely to carry the safety guardrails that U.S.-based AI labs build in to prevent misuse, including restrictions against helping develop bioweapons or enabling offensive cyber operations. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence and surveillance systems,” the company said.
Anthropic also waded into the policy debate over export controls on advanced chips, saying that China’s rapid AI progress should not be read as evidence that those controls have failed. The company contends that a meaningful share of that progress depends on capabilities extracted from American models and that executing distillation at scale requires access to advanced semiconductors, reinforcing the rationale for restricting their export.
But the argument is complicated by independent forecasts. A Forecasting Research Institute report, released the same day as Anthropic’s blog post, projects that the performance gap between U.S. and Chinese AI models will narrow by 2031, with surveyed experts anticipating parity by 2041, suggesting that factors beyond distillation are driving China’s AI development.
Anthropic said it has built behavioral fingerprinting systems and classifiers to detect distillation patterns in API traffic, tightened verification requirements for account types most often exploited for fraud, and begun sharing technical indicators with other AI labs, cloud providers and relevant authorities. The company is also developing model-level safeguards designed to reduce the usefulness of Claude’s outputs for illicit training without degrading performance for legitimate users.
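Anthropic has not published its classifier design, but behavioral fingerprinting of this kind typically reduces to featurizing per-account traffic and training a model to separate ordinary usage from harvesting patterns. The sketch below is purely illustrative, with invented features and synthetic data.

```python
# Illustrative sketch of distillation-pattern detection (the features,
# labels and model are invented for this example; Anthropic has not
# published its classifier design). Each row summarizes one account's
# API traffic; a gradient-boosted model separates ordinary usage from
# distillation-like harvesting.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical per-account features: requests/day, mean prompt length,
# fraction of prompts requesting step-by-step reasoning, topic entropy.
normal = np.column_stack([
    rng.poisson(40, 500), rng.normal(300, 80, 500),
    rng.beta(1, 20, 500), rng.normal(3.5, 0.5, 500),
])
harvester = np.column_stack([
    rng.poisson(900, 100), rng.normal(1200, 200, 100),
    rng.beta(10, 2, 100), rng.normal(1.2, 0.3, 100),
])
X = np.vstack([normal, harvester])
y = np.array([0] * 500 + [1] * 100)

clf = GradientBoostingClassifier().fit(X, y)

suspect = np.array([[850, 1100, 0.8, 1.0]])  # looks like bulk harvesting
print("P(distillation-like):", clf.predict_proba(suspect)[0, 1])
```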
