DeepSeek has released DeepSeek-V3.1, an updated version of the groundbreaking AI model it launched in December 2024, which instantly ranked among the ten most powerful AI models available worldwide.
The company surprised the world by detailing how it trained the model with far fewer computing resources, and at a lower cost, than competing models. The latest version is a hybrid AI model, combining the faster non-thinking mode DeepSeek-V3 was known for with the slower thinking mode of DeepSeek-R1.
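For readers who want to try both modes programmatically, here is a minimal sketch using DeepSeek's OpenAI-compatible API. It assumes the API model names deepseek-chat (non-thinking) and deepseek-reasoner (thinking) route to V3.1 and that you already have an API key; check DeepSeek's API documentation before relying on either name.

```python
# Minimal sketch of calling DeepSeek-V3.1 in both modes through the
# OpenAI-compatible endpoint. The model names "deepseek-chat"
# (non-thinking) and "deepseek-reasoner" (thinking) are assumptions
# to verify against DeepSeek's current API docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",        # placeholder
    base_url="https://api.deepseek.com",
)

prompt = "Write a Python function that checks if a string is a palindrome."

# Non-thinking mode: faster, direct answers (the behavior V3 was known for).
fast = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": prompt}],
)
print(fast.choices[0].message.content)

# Thinking mode: slower, reasons step by step first (the behavior R1 was known for).
slow = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": prompt}],
)
print(slow.choices[0].message.content)
```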
The latest DeepSeek LLM is available for free download under the open-source MIT license. Readers wanting to experiment with the full 671-billion-parameter DeepSeek-V3.1 model will need at least 720 GB of free storage space (or roughly 170 GB for a 1-bit quantized version). Even the smallest quantized build calls for a powerful GPU with at least 24 GB of memory, such as an Nvidia GeForce RTX 5090 with 32 GB.
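Below is a minimal sketch of how a download might look with the huggingface_hub library, including a free-disk-space check beforehand. The repository ID deepseek-ai/DeepSeek-V3.1 and the size thresholds are assumptions to verify against the official release page.

```python
# Sketch of fetching the open weights after confirming there is enough
# free disk space. Assumes the weights are published on Hugging Face
# under "deepseek-ai/DeepSeek-V3.1"; adjust the repo ID and the size
# threshold (720 GB full model, ~170 GB for a 1-bit quant) as needed.
import shutil
from huggingface_hub import snapshot_download

REQUIRED_GB = 720                 # use ~170 for a 1-bit quantized build
target_dir = "./DeepSeek-V3.1"

free_gb = shutil.disk_usage(".").free / 1e9
if free_gb < REQUIRED_GB:
    raise SystemExit(f"Need ~{REQUIRED_GB} GB free, only {free_gb:.0f} GB available")

snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3.1",   # assumed repo ID
    local_dir=target_dir,
)
print(f"Weights downloaded to {target_dir}")
```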
The updated DeepSeek-V3.1 model improves on the coding ability of the company's prior non-thinking V3 and thinking R1 models, according to its scores on the SWE-bench test. In thinking mode it also outperforms the prior R1 model on other AI benchmarks, including xbench-DeepSearch, SimpleQA, and FRAMES.
The V3.1 model has a 128K-token context window, and API access pricing will be simplified after September 5, 2025, to account for its hybrid nature. Readers can chat with DeepSeek-V3.1 for free.
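Before sending a very long prompt, it can help to estimate whether it fits inside that 128K-token window. The sketch below assumes the model's tokenizer is published alongside the weights under deepseek-ai/DeepSeek-V3.1 and can be loaded with the transformers library; the file name long_document.txt is a placeholder.

```python
# Rough check that a prompt fits in the 128K-token context window.
# The tokenizer repo ID is an assumption; any compatible tokenizer
# gives a close enough estimate for this purpose.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 128_000

tokenizer = AutoTokenizer.from_pretrained(
    "deepseek-ai/DeepSeek-V3.1", trust_remote_code=True  # assumed repo ID
)

with open("long_document.txt", "r", encoding="utf-8") as f:
    prompt = f.read()

n_tokens = len(tokenizer.encode(prompt))
print(f"{n_tokens} tokens ({n_tokens / CONTEXT_WINDOW:.0%} of the window)")
if n_tokens > CONTEXT_WINDOW:
    print("Prompt exceeds the context window; trim or split it before sending.")
```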