
DeepSeek’s latest open-source AI models land to challenge GPT-5 and Gemini 3.0 Pro

DeepSeek, logo pictured, is back with two new open-source AI models. (Image courtesy: DeepSeek)
DeepSeek is back with two new open-source AI models: DeepSeek V3.2 and V3.2-Speciale, which the company says challenge offerings from both OpenAI and Google. Here's what is known about them so far.

After taking the world by storm and sending the US stock markets tumbling in January 2025, DeepSeek has now announced two new open-source AI models: DeepSeek V3.2 and DeepSeek V3.2-Speciale.

The release marks a continuation of the company's distinct strategy in the AI arms race. While OpenAI and Google have poured billions of dollars into compute to train their frontier models, prioritizing performance gains at all costs, DeepSeek has taken a different path. Its previous R1 model was notable for achieving performance on par with GPT-4o and Gemini 2.5 Pro through clever reinforcement learning techniques, despite being trained on less-advanced chips.

Surpasses GPT-5 while matching Google's Gemini 3.0 Pro

The standard DeepSeek V3.2 is positioned as a balanced "daily driver," combining efficiency with agentic performance that the company claims is comparable to GPT-5. It's also the first DeepSeek model to integrate thinking directly into tool use, with tool calls supported in both thinking and non-thinking modes.

However, it's the high-compute variant, DeepSeek V3.2-Speciale, that will grab the headlines. DeepSeek claims the Speciale model surpasses GPT-5 and rivals Google's Gemini 3.0 Pro in pure reasoning capability. It even achieved gold-medal performance at the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI). And to show this isn't just marketing fluff, DeepSeek says it has released its final submissions for these competitions for community verification.

DeepSeek attributes the performance gains to "DeepSeek Sparse Attention" (DSA), a mechanism designed to reduce computational complexity in long-context scenarios, and a scalable reinforcement learning framework.
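DeepSeek has not published DSA's internals in this announcement, but the general idea behind sparse attention is that each query attends to only a small, selected subset of keys rather than the entire context, which reduces the cost of the attention step in long-context scenarios. The toy NumPy sketch below illustrates one common variant (top-k selection); it is an illustrative assumption, not DeepSeek's actual mechanism:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(Q, K, V, k=4):
    """Toy top-k sparse attention: each query keeps only its k
    highest-scoring keys and masks out the rest, so attention weight
    is spread over k tokens instead of the full sequence."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # (n_q, n_k) full scores (toy)
    k = min(k, scores.shape[-1])
    # indices of the top-k keys per query row
    idx = np.argpartition(scores, -k, axis=-1)[:, -k:]
    # mask everything except the selected keys
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, idx, np.take_along_axis(scores, idx, axis=-1), axis=-1)
    weights = softmax(masked, axis=-1)       # masked keys get zero weight
    return weights @ V

rng = np.random.default_rng(0)
n, d = 16, 8
Q, K, V = rng.standard_normal((3, n, d))
out = topk_sparse_attention(Q, K, V, k=4)
print(out.shape)  # (16, 8)
```

In a real long-context model, the selection itself must also be cheap (the toy version above still computes all scores before masking), which is where schemes like DSA do the actual work.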

Perhaps most interesting for developers is the focus on agents. DeepSeek has built a "Large-Scale Agentic Task Synthesis Pipeline" to train the model on over 85,000 complex instructions. The result is a model that can integrate "thinking" processes directly into tool-use scenarios.

Availability

DeepSeek V3.2 is now live across the web, mobile apps, and API. Meanwhile, V3.2-Speciale is currently API-only and served from a strictly temporary endpoint that expires on December 15, 2025. Additionally, Speciale is a pure reasoning engine and doesn't support tool calling. For those interested in running the models locally, the company has published detailed instructions.
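For developers, DeepSeek's API has historically followed the OpenAI-compatible chat-completions format. The sketch below builds such a request payload; the endpoint URL, model identifier, and tool schema are illustrative assumptions, not confirmed values from DeepSeek's V3.2 documentation:

```python
import json

# Assumed OpenAI-compatible endpoint and model id (not confirmed for V3.2)
BASE_URL = "https://api.deepseek.com/chat/completions"

payload = {
    "model": "deepseek-chat",  # assumed id; check DeepSeek's docs
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?"}
    ],
    # Tool definitions like this would apply to V3.2 only; per the
    # article, the Speciale endpoint does not support tool calling.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

body = json.dumps(payload)
print(len(body) > 0)  # True: payload serializes cleanly
```

Sending this body with an `Authorization: Bearer <API key>` header to the chat-completions endpoint is the usual pattern for OpenAI-compatible APIs.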

DeepSeek's new AI models, particularly V3.2 Speciale, surpass GPT-5 in several benchmarks. (Image courtesy: DeepSeek)

Source(s)

DeepSeek, HuggingFace (1), (2)

Kishan Vyas, 2025-12-02 (Update: 2025-12-02)