What if your hard drive could think with your data? Instead of just storing files, imagine it processing and responding to information exactly where it's kept. That’s the principle behind in-memory computing — a growing shift in architecture that moves logic closer to memory to boost efficiency.
Now, researchers at Forschungszentrum Jülich and the University of Duisburg-Essen have presented a new 2T1R memristor-based design that could support this shift, enabling more energy-efficient AI and edge hardware.
Published as a preprint on arXiv, the design integrates two transistors and one memristor per cell (hence 2T1R), with current regulation intended to suppress sneak path currents: parasitic currents that flow through unselected cells and distort reads, a known challenge in memristor arrays. Unlike conventional memory, the proposed design grounds both memristor terminals when idle, a strategy that may help improve signal stability and reduce leakage.
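To see why sneak paths matter, consider the simplest case the paper's array size suggests: a transistor-less (1R) 2×2 crossbar. This is a generic illustration, not the authors' circuit; the conductance values are made up. Reading one cell with the other lines left floating also senses a series path through the three unselected cells, which sits in parallel with the target and inflates the measured conductance:

```python
def series(*g):
    """Conductance of resistors in series (arguments are conductances)."""
    return 1.0 / sum(1.0 / gi for gi in g)

# Hypothetical conductances in siemens: the target cell (0,0) is in a
# high-resistance state, the other three cells are highly conductive.
g = {(0, 0): 1e-6, (0, 1): 1e-4, (1, 0): 1e-4, (1, 1): 1e-4}

g_target = g[(0, 0)]
# Parasitic path when reading (0,0): through (0,1) -> (1,1) -> (1,0).
g_sneak = series(g[(0, 1)], g[(1, 1)], g[(1, 0)])
# The sneak path lies in parallel with the target, so conductances add.
g_measured = g_target + g_sneak

print(f"target   : {g_target:.2e} S")
print(f"sneak    : {g_sneak:.2e} S")
print(f"measured : {g_measured:.2e} S")
```

Here the read is dominated by the sneak path rather than the target cell, which is the distortion that per-cell access transistors and the grounded-when-idle scheme are meant to prevent.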
The architecture is designed to support analogue vector-matrix multiplication (VMM), a core operation in machine learning, by controlling memristor conductance with integrated digital-to-analogue converters (DACs), pulse-width-modulated (PWM) signals, and regulated current paths. A 2×2 test array was implemented in standard 28 nm CMOS technology.
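The idea behind analogue VMM can be sketched numerically. In this minimal illustration (ours, not the paper's circuit, and with made-up values), each cell's conductance stores a matrix weight, input voltages drive the rows, and by Ohm's and Kirchhoff's laws the current collected on each column is the dot product of the voltages with that column's conductances; the multiply-accumulate happens in the physics:

```python
import numpy as np

# Conductances in siemens act as the "weights" (illustrative values).
G = np.array([[1e-4, 2e-4],
              [3e-4, 5e-5]])
# Input voltages in volts act as the "activations".
V = np.array([0.2, 0.1])

# Column current I_j = sum_i V_i * G[i, j]: one analogue VMM step.
I = V @ G
print(I)  # output currents in amps, one per column
```

A digital controller would then digitise these column currents, which is where the DACs, ADCs, and RISC-V interfacing in such designs come in.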
By addressing virtual ground issues and wire resistance effects, the architecture aims to improve performance predictability and reduce power consumption. With compatibility for RISC-V control and digital interfacing, the 2T1R design may lay the groundwork for scalable neuromorphic chips, enabling faster, more compact AI acceleration directly within memory.
While your hard drive may not be thinking just yet, the architecture behind that vision is already taking shape in silicon — hinting at a future of faster, memory-integrated AI.
For full technical details and results, see the full arXiv preprint (PDF).