On-the-fly cache hierarchies could speed up CPUs
CPU makers these days try to improve performance by cramming as many cores as they can into one chip. The old tiered CPU memory caches are still there, but the technology behind them has not seen any major improvement in decades. These caches can significantly speed up applications by fetching and storing commonly used data, yet even though today's CPUs integrate up to four cache levels, none of those levels is particularly suited to any specific application. MIT's Computer Science and Artificial Intelligence Laboratory has just come up with a solution named Jenga that reallocates cache resources on the fly in order to create "cache hierarchies" specifically tailored to each running program.
Jenga includes a map of the physical location of each cache memory bank, so it can calculate how to place data with reduced latency. The new cache system can efficiently redistribute physical memory resources to build application-specific hierarchies and maximize performance for any particular application.
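The idea can be illustrated with a toy sketch. The code below is not Jenga's actual algorithm (MIT has not published it in this form); it is a hypothetical model in which on-chip cache banks have physical coordinates, access latency grows with distance across the chip mesh, and an application is greedily assigned the nearest banks until its working set fits. All names (`bank_latency`, `allocate_banks`, the bank layout) are invented for illustration.

```python
# Hypothetical sketch of latency-aware cache-bank allocation,
# loosely inspired by the Jenga concept. Not the real algorithm.

def bank_latency(core_xy, bank_xy, cycles_per_hop=1):
    # Approximate access latency by Manhattan distance on the chip mesh.
    return cycles_per_hop * (abs(core_xy[0] - bank_xy[0]) +
                             abs(core_xy[1] - bank_xy[1]))

def allocate_banks(core_xy, banks, capacity_needed_kb):
    # banks: dict of bank_id -> (x, y, capacity_kb).
    # Greedily take the nearest banks until the working set fits.
    ordered = sorted(banks.items(),
                     key=lambda kv: bank_latency(core_xy, kv[1][:2]))
    chosen, total = [], 0
    for bank_id, (x, y, cap) in ordered:
        if total >= capacity_needed_kb:
            break
        chosen.append(bank_id)
        total += cap
    return chosen

# Example: a 2x2 mesh of 512 KB banks, core at (0, 0), app needs 1 MB.
banks = {"b0": (0, 0, 512), "b1": (1, 0, 512),
         "b2": (0, 1, 512), "b3": (1, 1, 512)}
print(allocate_banks((0, 0), banks, 1024))  # the two nearest banks
```

A real system would also have to weigh contention between cores and the cost of reshuffling data when the hierarchy changes, which is where the hard engineering lies.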
MIT ran Jenga on a simulated 36-core CPU: performance increased by up to 30% while the CPU used up to 85% less power. This could greatly benefit mobile devices, such as notebooks and smartphones, where low power consumption and a reduced thermal design power (TDP) are very important.
Even though Jenga exists only in simulation for now, the technology could inspire CPU makers like Intel and Qualcomm to develop similar systems once they run into the physical limits of the ever-shrinking manufacturing process.