
Qualcomm takes the AI fight to Nvidia with new AI200 and AI250 chips built from smartphone tech

A Qualcomm AI rack (Image source: Qualcomm)
Qualcomm is expanding beyond smartphones with two new AI data center chips, the AI200 and AI250, built on its mobile neural processing tech. Promising significant efficiency gains and 768 GB memory support, these chips aim to challenge Nvidia’s dominance and power Saudi Arabia’s next-gen AI infrastructure.

Qualcomm is going big with AI, launching two new chips: the AI200 and AI250. It is the company’s biggest attempt to challenge Nvidia’s dominance in artificial intelligence computing.

From smartphones to servers

Qualcomm is one of the most popular smartphone chip makers. Its Snapdragon processors and Hexagon neural processing units (NPUs) have found their way into billions of phones globally. The California-based company is now bringing its mobile-first design philosophy to the AI200 and AI250 chips, designed to power large-scale AI inference workloads in data centers.

Qualcomm is targeting its new processors at running already-trained AI models. By focusing solely on inference rather than Nvidia’s combined training-and-inference approach, Qualcomm can optimize its chips for efficiency, latency, and cost. This positions them for applications in generative AI tools, chatbots, and edge cloud services.

Qualcomm has stated that the AI200 is coming in 2026, with the AI250 following a year later. The two chips are based on the Hexagon NPU architecture, and the semiconductor maker claims the design delivers high performance per watt, the Holy Grail of data centers.

Inside the AI200 and AI250

Qualcomm says the AI200 can handle very large AI models with minimal latency thanks to its support for up to 768 GB of memory. The company promises significant power savings over GPU-based clusters as a result of the inference-optimized design.

The AI250, meanwhile, will offer a generational leap in efficiency, according to Qualcomm. The chip reportedly halves energy consumption through novel power management technologies and a refined memory hierarchy.

Each rack of Qualcomm’s data center hardware can accommodate up to 72 of the new chips, which operate as a unified computing cluster, much like Nvidia’s DGX systems and AMD’s Instinct MI300-based servers. Qualcomm also plans to offer complete rack solutions in direct competition with both companies.
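Those two figures allow a rough back-of-the-envelope calculation of aggregate rack memory. This sketch assumes each card in the rack carries the full 768 GB quoted for the AI200, which is an assumption on our part rather than a confirmed per-configuration spec:

```python
# Rough rack-level memory estimate from the figures quoted above.
# Assumption: every card is configured with the maximum 768 GB.
CARDS_PER_RACK = 72
MEMORY_PER_CARD_GB = 768

total_gb = CARDS_PER_RACK * MEMORY_PER_CARD_GB
print(f"Total rack memory: {total_gb} GB (~{total_gb / 1024:.0f} TB)")
# → Total rack memory: 55296 GB (~54 TB)
```

At roughly 54 TB per rack, a single rack could in principle hold the weights of even the largest publicly known frontier models with room to spare, which is the selling point of pairing high memory capacity with an inference-only design.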

The first deployment: Saudi Arabia’s Humain project

Qualcomm’s chips are coming out of the gate with a buyer. Humain, an AI startup backed by Saudi Arabia’s Public Investment Fund (PIF), will begin deploying 200 megawatts of data center racks powered by the AI200 in 2026. Qualcomm hopes the deal will convince enterprise customers to choose its wares over Nvidia’s supply-constrained offerings.

Why it matters

Qualcomm has spent the last few years diversifying from mobile chips. Its processors now power PCs, and the new AI-focused chips will supercharge its incursion into cloud AI infrastructure. Industry watchers suggest Qualcomm now has what it takes to solidify its position in the AI inference market.

Still, there is room in the AI compute space for more players. According to Joe Tigay, portfolio manager of the Rational Equity Armor Fund, “Qualcomm’s entry and major deal in Saudi Arabia prove the ecosystem is fragmenting because no single company can meet the global, decentralized need for high-efficiency AI compute.”

David Odejide, 2025-10-27 (Update: 2025-10-27)