A senior U.S. State Department official says artificial intelligence firm DeepSeek is “willingly” assisting China’s military and intelligence services and has tried to bypass American export controls to secure Nvidia’s most advanced H100 accelerators. The assessment, first reported by Reuters, marks Washington’s sharpest public rebuke of the Hangzhou-based startup.
According to the official, procurement records list DeepSeek more than 150 times in connection with projects for the People’s Liberation Army and other defense-linked institutions. Chinese law already obliges companies to hand over data when asked, but the official contends DeepSeek goes further, proactively sharing user information and usage statistics with state-run surveillance networks.
The same official alleges that DeepSeek tried to route purchases of H100 chips through shell companies in Southeast Asia and to tap foreign data centers so it could access U.S. hardware remotely, moves aimed at skirting export restrictions imposed in 2022. Three industry insiders told Reuters the startup does hold some H100s, though likely far fewer than the 50,000 units rumored earlier this year. Nvidia disputes the claim, saying internal checks show DeepSeek obtained only the export-compliant H800 variant.
DeepSeek rose to prominence in January after declaring that its DeepSeek-V3 and DeepSeek-R1 models matched or exceeded leading U.S. systems “at a fraction of the cost,” citing a training bill of roughly $5.6 million. Independent researchers question that figure, arguing the actual cost was likely far higher. Washington’s new concerns add to mounting suspicion that the company’s rapid ascent leaned heavily on U.S. technology and resources.
DeepSeek has not answered questions about privacy practices, chip acquisitions, or alleged military work. Nvidia says it does not support parties that breach export rules and notes current controls have effectively removed it from China’s data center market. Malaysia’s trade ministry, meanwhile, is probing whether an unnamed Chinese firm is using Nvidia-equipped servers on its soil for large-language-model training, underscoring regional vigilance against similar workarounds.
Source(s)
Reuters