
OpenAI's new open-source models can run on your PC

The OpenAI logo on an abstract and gradient multi-color background. (Image Source: OpenAI)
Both models can be freely downloaded and run locally on PCs and laptops. (Image Source: OpenAI)
For the first time in six years, OpenAI has released two new open-weight models that can be downloaded for free and run locally on your PC or laptop. The last model the company open-sourced was GPT-2 in 2019.

OpenAI has announced the release of gpt-oss-120b and gpt-oss-20b, two open-weight models that are free to download and can be run locally on your system. It's the company's first open-source release since the 2019 launch of GPT-2.

Gpt-oss-120b is a 117-billion-parameter model that requires a beefy 80 GB of VRAM to run. The smaller gpt-oss-20b, a 21-billion-parameter model, can fit on a single GPU with 16 GB of VRAM. Both models are available under the permissive Apache 2.0 license.

OpenAI describes the release as a meaningful step in its commitment to the open-source ecosystem, in line with its stated mission to make the benefits of AI broadly accessible. The company wants the models to serve as a lower-cost tool that developers, researchers, and companies can run and customize efficiently.

How do they perform? 

The gpt-oss-120b scored 2622 points on the Codeforces coding benchmark with tools and 2643 without, performing almost on par with the company's o3 and o4-mini and comfortably beating o3-mini in both settings.

The gpt-oss-20b scored 2516 with tools, performing on par with o3 and o4-mini, and 2230 without tools, narrowly edging out o3-mini. OpenAI says the 120b does even better than o4-mini on health-related queries and mathematics, while the 20b outscores o3-mini.

Both models perform competitively with o3 and o4-mini. (Image Source: OpenAI)

OpenAI says both the 120b and the 20b tend to hallucinate considerably more than its proprietary reasoning models o3 and o4-mini. In tests, both open-weight models hallucinated in 49% to 53% of responses on the company's in-house benchmark that measures a model's knowledge of people.

On the Humanity's Last Exam test, both models showed competitive accuracy to o3 and o4-mini. (Image Source: OpenAI)

Both models can be downloaded from the official Hugging Face repository and come natively quantized in MXFP4 for efficiency. They can also be deployed on platforms such as Microsoft Azure, Hugging Face, vLLM, Ollama, llama.cpp, LM Studio, AWS, Fireworks, Together AI, and more.

You can run these models locally using Ollama, where each can be installed and launched with a simple two-line command. You can also run them using Microsoft Foundry Local.
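As a rough sketch, assuming the model tags published in Ollama's library follow the `gpt-oss:20b` / `gpt-oss:120b` naming, the two-line setup could look like this:

```shell
# Download the 21B-parameter model (fits on a GPU with 16 GB of VRAM)
ollama pull gpt-oss:20b
# Start an interactive chat session with the downloaded model
ollama run gpt-oss:20b
```

For the larger model, substitute `gpt-oss:120b` in both commands, keeping in mind its roughly 80 GB VRAM requirement.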

OpenAI expects these models to "lower barriers for emerging markets, resource-constrained sectors, and smaller organizations that may lack the budget or flexibility to adopt proprietary models."

As for why it open-sourced new models six years after the last one, the company says it wants to "make AI widely accessible and beneficial for everyone."

Rohith Bhaskar, 2025-08-05 (Update: 2025-08-06)