
Anthropic accuses China's DeepSeek of plagiarizing Claude AI to advance censorship

Anthropic Claude AI's capabilities. (Image source: Anthropic)
Chinese AI labs are harvesting answers from Claude and other large language models to mimic their performance in their own products, minus the safeguards. Anthropic is calling for an industrywide response and has contacted the relevant authorities.

As AI models from Chinese companies grow increasingly sophisticated, Anthropic has revealed a concerted effort by the likes of DeepSeek to plagiarize the inner workings of its award-winning Claude LLM and pass them off as their own innovation.

Anthropic is worried that the resulting AI agents are stripped of Claude's built-in safeguards against malicious use, and that DeepSeek's open-source approach could place powerful AI tools in the hands of state and non-state actors with fewer scruples about how they are used.

DeepSeek, for instance, which shook the industry by doing more with less AI computing power, has stripped the distilled Claude behavior of its free-speech safeguards and trained its agent to steer clear of topics that the Chinese government's censorship machine strives to suppress.

Anthropic, which crafts some of the most advanced AI tools out there, just had a run-in with the Pentagon over the use of Claude for military purposes. It insists that Claude not be used for controlling unmanned weapons systems or for surveillance of American citizens, while the Pentagon wants more freedom in how it uses the AI agent, without such burdensome guardrails.

When DeepSeek and others harvest Claude's responses via a large distributed proxy network that funnels millions of queries through fake accounts (the practice known as "distillation"), the resulting agents may be used by a state military without any such restrictions.
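At its core, the distillation described above is conceptually simple: record a teacher model's answers to many prompts, then use the (prompt, answer) pairs as supervised training data for a student model. The sketch below illustrates the harvesting step only, with a stubbed-in teacher; the function names (`query_teacher`, `build_dataset`) and canned answers are hypothetical, and a real pipeline would instead call a commercial LLM API through proxies and fine-tune a neural student on the collected pairs.

```python
# Minimal, hypothetical sketch of the data-harvesting step of distillation.
# All names and responses here are illustrative stand-ins, not any vendor's API.

def query_teacher(prompt: str) -> str:
    """Stand-in for a remote teacher model's API endpoint."""
    canned = {
        "capital of France": "Paris",
        "2 + 2": "4",
    }
    return canned.get(prompt, "I don't know.")

def build_dataset(prompts: list[str]) -> list[dict]:
    """Turn each (prompt, teacher answer) pair into a training example."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

dataset = build_dataset(["capital of France", "2 + 2"])
print(dataset[0]["completion"])  # prints "Paris"
```

In practice the student is then fine-tuned on such pairs, which is why Anthropic's countermeasures focus on spotting the harvesting traffic itself, millions of innocuous-looking queries, rather than the downstream training.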

This is why Anthropic is sounding the alarm on the practice, which it traced back to three Chinese AI companies, DeepSeek, MiniMax, and Moonshot, and is introducing ways to mitigate it. In the case of MiniMax in particular, the scope of AI model plagiarism is breathtaking: Anthropic detected more than 13 million query exchanges with Claude, which often improved the MiniMax model in real time as Anthropic rolled out major Claude updates.

Anthropic now has safeguards in place to detect the millions of seemingly innocuous queries used for distillation, such as asking Claude how it would pursue a "goal to deliver data-driven insights—not summaries or visualizations—grounded in real data and supported by complete and transparent reasoning" and then parsing the results to improve Chinese LLMs.

It warns that it can't fight the practice alone and is trying to get other industry juggernauts like OpenAI and Google on board, so that together they can push policymakers to expand AI export-control legislation with anti-distillation measures rather than just bans on Nvidia Blackwell GPUs and other hardware.

Get the Nvidia RTX PRO 6000 Blackwell Professional Edition AI GPU on Amazon


Daniel Zlatev, 2026-02-24 (Update: 2026-02-24)