
Samsung completes development for HBM-PIM, industry's first High Bandwidth Memory with on-chip AI processor

HBM-PIM chips come with an integrated AI processor. (Image Source: Samsung)
The new HBM-PIM chips will deliver over twice the system performance of current HBM2 solutions while reducing energy consumption by more than 70%. Samsung designed this new memory architecture to accelerate large-scale processing in data centers, high-performance computing systems, and AI-enabled mobile applications.

Judging by raw performance and power requirements alone, HBM (High Bandwidth Memory) is the superior choice over GDDR for graphics RAM. However, Nvidia and AMD still do not offer HBM on their mainstream gaming GPUs, reserving it for high-end compute cards used mostly in AI applications and HPC environments. This is because GDDR is more than 50% cheaper to implement at current market prices. Samsung, as one of the major HBM producers, could help lower the price of “regular” HBM chips as it now prepares to launch the more advanced HBM-PIM (processing-in-memory) architecture.

Each new HBM-PIM chip integrates an AI processing component that essentially doubles the performance of regular HBM2 Aquabolt solutions while lowering energy consumption by more than 70%. The on-chip AI processor can be programmed and tailored for diverse AI-driven workloads including training and inference. Kwangil Park, Samsung’s senior vice president of Memory Product Planning, stated that the company is willing to collaborate with AI solution providers for the development of even more advanced PIM-powered applications.

In its press release, Samsung explains how the on-chip AI processors can help double the performance. “Most of today’s computing systems are based on the von Neumann architecture, which uses separate processor and memory units to carry out millions of intricate data processing tasks. This sequential processing approach requires data to constantly move back and forth, resulting in a system-slowing bottleneck especially when handling ever-increasing volumes of data. Instead, the HBM-PIM brings processing power directly to where the data is stored by placing a DRAM-optimized AI engine inside each memory bank — a storage sub-unit — enabling parallel processing and minimizing data movement.”
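Samsung's explanation can be illustrated with a toy sketch. The code below is purely conceptual (not Samsung's implementation): it contrasts a von Neumann-style reduction, where every word must cross the memory bus to a central processor, with a PIM-style reduction, where each memory bank computes a partial result locally and only those partials travel. The bank count and sizes are made-up values for illustration.

```python
# Conceptual illustration of processing-in-memory vs. the von Neumann
# bottleneck. Numbers of banks and words are hypothetical.

BANKS = 16
WORDS_PER_BANK = 1024
banks = [[1] * WORDS_PER_BANK for _ in range(BANKS)]

def von_neumann_sum(banks):
    """Every word travels across the bus to the CPU before being added."""
    words_moved = sum(len(b) for b in banks)
    total = sum(sum(b) for b in banks)
    return total, words_moved

def pim_sum(banks):
    """Each bank reduces its own data locally (simulating an in-bank AI
    engine); only one partial result per bank crosses the bus."""
    partials = [sum(b) for b in banks]   # computed "inside" each bank
    words_moved = len(partials)
    return sum(partials), words_moved

total_vn, moved_vn = von_neumann_sum(banks)
total_pim, moved_pim = pim_sum(banks)
assert total_vn == total_pim             # same answer, far less data movement
print(f"von Neumann: {moved_vn} words moved; PIM: {moved_pim} words moved")
```

Both approaches produce the same result, but the PIM-style version moves 16 words across the bus instead of 16,384, which is the kind of data-movement saving the press release describes.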

Another big plus for the HBM-PIM architecture is that it requires no hardware or software changes, so there are no additional integration or deployment costs for existing systems. Samsung states that the HBM-PIM chips are currently being tested inside AI accelerators by leading AI solution partners, with all validations expected to be completed by the end of the first half of 2021.

Bogdan Solca - Senior Tech Writer - 1756 articles published on Notebookcheck since 2017
Bogdan Solca, 2021-02-17 (Update: 2021-02-17)