Samsung ships priciest AI memory Nvidia ever ordered as HBM4 exceeds key specifications

Samsung has announced the first commercial shipments of its next-gen HBM4 memory for applications such as Nvidia GPUs and the AI data centers built around them.
Nvidia and other clients are reportedly paying close to $500 apiece to Samsung, roughly double what they paid for the previous HBM3E high-bandwidth memory generation. Samsung's shares are at an all-time high as a result, and its management expects another blockbuster year riding the general memory shortage.
Samsung HBM4 memory specs
While memory makers are currently charging an arm and a leg for every unit they produce, Samsung says its new HBM4 AI memory surpasses both the Joint Electron Device Engineering Council (JEDEC) standard and Nvidia's own requirements.
The HBM4 per-pin operating speed can reach a whopping 13 Gbps, 46% higher than JEDEC's requirement, and total memory bandwidth tops out at 3.3 TB/s per stack, comfortably above the 3 TB/s that customers like Nvidia asked for.
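The 3.3 TB/s figure follows directly from HBM4's 2048-bit-wide interface (the width defined by the JEDEC HBM4 standard) multiplied by the reported 13 Gbps pin speed. A quick sanity check of the arithmetic:

```python
# Sanity-check the per-stack bandwidth figure reported in the article.
# Assumptions: the JEDEC HBM4 2048-bit interface width; the 13 Gbps
# per-pin speed is the number Samsung reported.

def stack_bandwidth_gbs(pin_speed_gbps: float, interface_bits: int = 2048) -> float:
    """Per-stack bandwidth in GB/s: pin count x per-pin rate / 8 bits per byte."""
    return interface_bits * pin_speed_gbps / 8

samsung = stack_bandwidth_gbs(13.0)
print(f"{samsung:.0f} GB/s = {samsung / 1000:.3f} TB/s")  # 3328 GB/s = 3.328 TB/s
```

The same formula puts the baseline JEDEC 8 Gbps pin speed at 2 TB/s per stack, which is why the per-pin rate is the headline number in every HBM generation bump.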
Samsung achieved this breakneck throughput by preemptively applying its 10nm-class 6th-generation 1c DRAM process, with the base die manufactured on a 4nm node, as opposed to the 14nm-class 1a DRAM it used for the HBM3E predecessor. This gives the HBM4 generation plenty of room for process and performance improvements. To manage all the generated heat, Samsung designed the core die and data transmission paths with low-power and low-voltage technologies, yielding 30% better heat dissipation and 40% higher energy efficiency compared to the HBM3E generation that Nvidia currently uses in its Blackwell-series AI graphics cards.
For now, Samsung ships customers a maximum of 36 GB of HBM4 capacity via a 12-layer stack, but it can also build 16-layer stacks totaling 48 GB whenever Nvidia is ready with a GPU design that uses them and the budget to match. Samsung promises enough of the expensive HBM4 chips, saying that it will "continue to secure stable supply capabilities to meet the growing mid- to long-term demand, particularly from AI and data centers" as it starts sampling its next-gen HBM4E memory in the second half of 2026.
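The two stack options imply the same underlying DRAM die: 36 GB across 12 layers and 48 GB across 16 layers both work out to 3 GB (24 Gb) per die, so the 48 GB part is simply four more layers of the same silicon. A minimal check, assuming every layer in the stack has equal capacity:

```python
# Back out the per-die capacity implied by the stack options in the article.
# Assumption: all DRAM layers in a stack have the same capacity.

def die_capacity_gb(stack_gb: int, layers: int) -> float:
    """Capacity of a single DRAM die in GB."""
    return stack_gb / layers

print(die_capacity_gb(36, 12))  # 3.0 GB per die (a 24 Gb die)
print(die_capacity_gb(48, 16))  # 3.0 GB -- same die, four more layers
```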









