The AI boom is no longer just about a shortage of GPUs. A wave of new AI data centers is now draining the world’s supply of memory chips, forcing tech giants and device makers to compete for dwindling stocks of DRAM, NAND flash and high-bandwidth memory (HBM).
A Reuters investigation based on interviews with nearly 40 executives and industry insiders describes an “acute” shortage that has already seen some memory prices double from early-2025 levels, with inventories at multi-year lows and supply not expected to normalise until 2027–2028.
Cloud AI first, everyone else later
To keep up with AI demand, major memory makers, including Samsung, SK hynix and Micron, have shifted wafer capacity into HBM and high-end DRAM for data-center GPUs. That has squeezed output of older but still vital parts like DDR4 and LPDDR4, used across PCs, laptops and budget phones.
At the same time, companies like Microsoft, Google and ByteDance are locking in long-term supply deals and, in some cases, placing open-ended orders, effectively front-running smaller buyers and leaving less flexibility in the market. Analysts quoted in the report warn that the AI build-out is colliding with a supply chain that simply can’t meet its physical requirements in the short term.
Knock-on effects: pricier PCs and phones
The crisis is already rippling out into consumer tech. Samsung has raised prices on some memory products by up to 60% since September 2025, while PC vendors and custom-PC builders have started warning of across-the-board price hikes on RAM-heavy systems.
Smartphone makers are also feeling the pinch. Counterpoint Research expects global smartphone shipments to fall in 2026 as rising memory costs push the bill-of-materials cost of entry-level handsets up by 20–30%, with sub-$200 models hit hardest. Brands such as Xiaomi and Realme have already flagged likely retail price increases if memory pricing doesn’t ease.
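To see why cheap phones are hit hardest, here is a rough back-of-the-envelope sketch of that bill-of-materials arithmetic. The specific figures (a $120 BOM, memory at a quarter of it) are illustrative assumptions, not numbers from the Reuters or Counterpoint reports; the point is simply that when memory is a large slice of a low-cost BOM, a doubling of memory prices moves the whole BOM by roughly that slice.

```python
# Back-of-the-envelope sketch of the BOM arithmetic above.
# All numbers are illustrative assumptions, not figures from
# the Reuters or Counterpoint reports.

base_bom = 120.0               # assumed total bill of materials for a sub-$200 phone (USD)
memory_share = 0.25            # assumed share of the BOM spent on DRAM + NAND
memory_price_multiplier = 2.0  # memory prices roughly doubling vs. early 2025

memory_cost = base_bom * memory_share
new_memory_cost = memory_cost * memory_price_multiplier
new_bom = base_bom - memory_cost + new_memory_cost

increase_pct = (new_bom / base_bom - 1) * 100
print(f"BOM rises from ${base_bom:.0f} to ${new_bom:.0f} (+{increase_pct:.0f}%)")
# With memory at ~25% of the BOM, a doubling of memory prices lifts
# the total BOM by ~25%, in line with the reported 20-30% range.
```

On a $1,000 flagship the same memory-price doubling would move the BOM by a much smaller percentage, which is why the squeeze lands disproportionately on entry-level models.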
A multi-year constraint on the AI roadmap
With new memory fabs and process nodes taking years to build out, most analysts now see the memory bottleneck as a multi-year constraint on AI growth. Even as new GPU clusters come online, many operators may find that RAM and HBM availability, not accelerators, sets the pace for expanding AI capacity.
For now, tech giants with deep pockets and long-term contracts are best placed to ride out the storm. Smaller OEMs, white-box builders and budget-phone brands risk being priced out of the market or forced into lower-spec designs until the supply picture improves.