Nvidia's grip on AI hardware infrastructure remains strong, but a growing number of companies are now striving to offer efficient alternatives. A couple of years ago, when Nvidia was struggling to meet the ever-increasing demand for inference hardware, d-Matrix emerged and seized the opportunity to position itself as a reliable generative AI hardware supplier in the data center space. Its first dedicated compute card, the Corsair C8, was well received, and the company now plans to launch a successor integrating the world’s first mass-produced 3D DRAM.
d-Matrix collaborated with Alchip, a Taiwan-based integrator of high-performance AI infrastructure ASICs, to develop the 3D DRAM solution, which is said to “eliminate the performance and cost bottlenecks constraining today's AI infrastructure.” The new memory technology, known as 3DIMC (3D stacked digital in-memory compute), is currently being tested on d-Matrix’s Pavehawk chips.
The first commercial product to feature the 3D-stacked DRAM is expected to be d-Matrix’s Raptor inference accelerator, which will replace the current Corsair models. d-Matrix estimates that the 3DIMC solution could deliver up to 10x the inference speed of the fastest HBM4-based accelerators.
Sid Sheth, co-founder and CEO of d-Matrix, explains that 3D-stacked DRAM represents “a breakthrough that makes AI not only faster, but more cost-effective and sustainable at scale. 3DIMC represents the next logical step in our roadmap toward delivering efficient inference architectures that keep pace with the exponential growth of generative and agentic AI.”