Anthropic Accuses China of Stealing AI Secrets: Claude at Risk?
The artificial intelligence landscape is facing a new wave of concern as Anthropic, a leading AI safety and research company, has publicly accused three Chinese AI firms – DeepSeek, Moonshot AI, and MiniMax – of orchestrating a large-scale effort to pilfer proprietary information from its Claude AI model. This alleged operation involved the creation of over 24,000 fraudulent accounts, used to systematically extract knowledge and capabilities from Claude through a technique known as “distillation.” The accusations highlight the escalating tensions surrounding AI development and the protection of intellectual property in a fiercely competitive global market. This incident also reignites the debate surrounding export controls on advanced AI chips and their impact on China’s AI ambitions.
Understanding the Allegations: A Deep Dive into Distillation
At the heart of the controversy lies the practice of distillation, a common AI training method in which a smaller, more efficient model learns from the outputs of a larger, more complex one. Distillation is widely and legitimately used within the AI community, but the alleged actions of DeepSeek, Moonshot AI, and MiniMax represent a malicious application. Rather than investing in their own foundational research, the three companies are accused of essentially “copying homework” – leveraging Claude’s advanced capabilities to accelerate their own model development without bearing the research and development costs themselves.
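To make the technique concrete, here is a minimal sketch of the core idea behind distillation: the student model is trained to match the teacher's full output distribution (its "soft targets"), not just its top answer. The function names and the temperature value below are illustrative choices, not anything from Anthropic's or the accused companies' actual pipelines.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T produces softer distributions."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between teacher and student soft distributions.

    Matching the teacher's full distribution (how it ranks *all*
    answers) is what lets a small model absorb a large model's
    capabilities far more cheaply than training from scratch.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose logits mirror the teacher's incurs near-zero loss;
# a student that ranks the answers differently is penalized.
teacher = [4.0, 1.0, 0.2]
aligned_loss = distillation_loss([4.0, 1.0, 0.2], teacher)   # ~0.0
mismatched_loss = distillation_loss([0.2, 1.0, 4.0], teacher)
```

In a legitimate setting, the teacher is a model you own or are licensed to use; the allegation here is that Claude's outputs, harvested through fraudulent accounts, served as the teacher signal.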
OpenAI recently echoed these concerns, sending a memo to US lawmakers accusing DeepSeek of employing similar distillation tactics to mimic its own products. This suggests a broader pattern of behavior and a concerted effort to gain an unfair advantage in the AI race.
The Scale of the Attacks: A Breakdown by Company
Anthropic’s investigation revealed a significant disparity in the scale of the attacks launched by each company:
- DeepSeek: Generated over 150,000 exchanges, focusing on improving foundational logic, alignment, and censorship-safe alternatives to sensitive queries. DeepSeek gained prominence last year with its open-source R1 reasoning model, which demonstrated performance comparable to leading US labs at a fraction of the cost. The company is reportedly poised to release DeepSeek V4, said to surpass Claude and ChatGPT in coding capabilities.
- Moonshot AI: Initiated over 3.4 million exchanges, targeting agentic reasoning, tool use, coding, data analysis, computer-use agent development, and computer vision. The firm recently launched its Kimi K2.5 open-source model and a dedicated coding agent.
- MiniMax: Conducted a staggering 13 million exchanges, concentrating on agentic coding, tool use, and orchestration. Anthropic observed MiniMax redirecting nearly half of its traffic to siphon capabilities from the latest Claude model immediately after its launch.
The Implications for AI Export Controls and National Security
These accusations arrive at a critical juncture, as the US government continues to grapple with the complexities of AI chip export controls to China. Last month, the Trump administration authorized US companies like Nvidia to export advanced AI chips, such as the H200, to China. This decision has been met with criticism, with opponents arguing that it strengthens China’s AI computing capacity and potentially undermines US leadership in the field.
Anthropic argues that extraction at the scale allegedly performed by DeepSeek, MiniMax, and Moonshot requires access to advanced chips. “Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” the company stated in its blog post.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator and co-founder of CrowdStrike, expressed little surprise at the revelations. “It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact,” Alperovitch told GearTech. “This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further.”
Beyond Competitive Advantage: The National Security Risks
The potential consequences of this alleged intellectual property theft extend beyond simply undermining American AI dominance. Anthropic highlights significant national security risks associated with models built through illicit distillation. US companies invest heavily in building safeguards into their AI systems to prevent misuse by state and non-state actors for malicious purposes, such as developing bioweapons or orchestrating cyberattacks.
Models created through distillation are unlikely to retain these crucial safeguards, potentially leading to the proliferation of dangerous capabilities without any ethical or security constraints. This is particularly concerning given the increasing deployment of frontier AI by authoritarian governments for activities like offensive cyber operations, disinformation campaigns, and mass surveillance. The open-sourcing of such models further exacerbates these risks.
The Role of Cloud Providers and Industry Collaboration
Anthropic acknowledges that defending against distillation attacks is an ongoing challenge. The company is committed to investing in defenses that make these attacks more difficult to execute and easier to detect. However, they emphasize the need for a coordinated response across the entire AI ecosystem.
This includes collaboration between AI companies, cloud providers, and policymakers. Cloud providers play a critical role in identifying and mitigating malicious activity on their platforms. Policymakers must consider the implications of these attacks when formulating export control policies and intellectual property protection measures.
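As a rough illustration of what platform-side detection might involve (this is a hypothetical heuristic sketched for this article, not Anthropic's or any cloud provider's actual method), one simple signal is that bulk distillation tends to look different from organic use: abnormally high query volume concentrated on a narrow set of capabilities. The function, thresholds, and topic labels below are all illustrative assumptions.

```python
from collections import Counter
from math import log2

def flag_suspect_accounts(query_log, volume_threshold=10_000, entropy_threshold=1.0):
    """Flag accounts whose usage resembles systematic bulk extraction.

    query_log: iterable of (account_id, topic) pairs.
    Two hypothetical signals: very high query volume, and low topical
    entropy (extraction campaigns tend to hammer a narrow capability,
    e.g. agentic coding, rather than mixing diverse organic requests).
    """
    per_account = {}
    for account, topic in query_log:
        per_account.setdefault(account, Counter())[topic] += 1

    flagged = []
    for account, topics in per_account.items():
        total = sum(topics.values())
        # Shannon entropy of the account's topic distribution, in bits.
        entropy = -sum((n / total) * log2(n / total) for n in topics.values())
        if total >= volume_threshold and entropy <= entropy_threshold:
            flagged.append(account)
    return flagged
```

A real defense would combine many such signals (timing patterns, account-creation clustering, payment fingerprints) and would need far more care to avoid flagging heavy legitimate users, which is why Anthropic frames this as an ecosystem-wide problem rather than one any single filter can solve.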
What’s Next?
The accusations leveled against DeepSeek, Moonshot AI, and MiniMax represent a significant escalation in the global AI competition. The incident underscores the importance of protecting intellectual property, strengthening export controls, and fostering collaboration to ensure the responsible development and deployment of artificial intelligence. GearTech reached out to DeepSeek, MiniMax, and Moonshot for comment but had not received a response at the time of publication.
The future of AI hinges on establishing a framework that promotes innovation while safeguarding against malicious actors and ensuring the technology is used for the benefit of humanity. This case serves as a stark reminder of the challenges that lie ahead and the urgent need for a proactive and collaborative approach.
Key Takeaways:
- Anthropic accuses three Chinese AI companies of stealing AI secrets via distillation.
- Distillation is a common technique, but its malicious use poses a significant threat.
- The incident reignites the debate over AI chip export controls to China.
- National security risks are heightened by the potential loss of AI safeguards.
- A coordinated response from the AI industry, cloud providers, and policymakers is crucial.