Researchers at Peking University have pulled off an impressive feat: they've developed the first-ever tensor processing unit (TPU) built from carbon nanotube transistors. This is a big deal in the quest for energy-efficient AI hardware.
Their TPU, detailed in a paper in Nature Electronics, is built on a systolic array architecture that allows for parallel 2-bit integer multiply-accumulate operations. With about 3,000 carbon nanotube field-effect transistors doing the heavy lifting, the chip can handle convolution and matrix multiplication tasks without guzzling too much energy.
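To make that dataflow concrete, here is a minimal Python sketch, not the authors' circuit, of how a systolic-style array accumulates a 2-bit integer matrix product cycle by cycle. The array shape, input skewing, and NumPy emulation are illustrative assumptions; only the "parallel 2-bit integer multiply-accumulate" idea comes from the paper.

```python
import numpy as np

def systolic_matmul_2bit(A, W):
    """Multiply 2-bit integer matrices A (M x K) and W (K x N) by
    emulating a grid of MAC cells that all fire once per cycle."""
    assert A.min() >= 0 and A.max() <= 3, "2-bit unsigned activations"
    assert W.min() >= 0 and W.max() <= 3, "2-bit unsigned weights"
    M, K = A.shape
    K2, N = W.shape
    assert K == K2
    acc = np.zeros((M, N), dtype=np.int32)  # one accumulator per output
    # Inputs are skewed so that element A[i, k] reaches the array at
    # cycle i + k; each cycle, every MAC cell does one multiply-accumulate.
    for t in range(M + K - 1):
        for i in range(M):
            k = t - i
            if 0 <= k < K:
                acc[i, :] += A[i, k] * W[k, :]  # N MACs fire in parallel
    return acc

rng = np.random.default_rng(0)
A = rng.integers(0, 4, size=(4, 6))  # 2-bit activations
W = rng.integers(0, 4, size=(6, 5))  # 2-bit weights
assert np.array_equal(systolic_matmul_2bit(A, W), A @ W)
```

The payoff of this dataflow is that operands hop between neighboring cells instead of being fetched from memory every cycle, which is where much of the energy saving in systolic designs comes from.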
"We successfully developed the world's first tensor processor chip (TPU) based on carbon nanotubes," the researchers told TechXplore. "We were inspired by the fast development of AI applications as well as by Google's TPU."
The team has fine-tuned its manufacturing process to produce carbon nanotubes of 99.9999% semiconducting purity with ultra-clean surfaces, yielding transistors with high on-current densities and reliable performance. System-level simulations suggest that an 8-bit TPU built from these nanotube transistors could run at 850 MHz while delivering an energy efficiency of 1 tera-operation per second per watt (TOPS/W).
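As a back-of-the-envelope check on what 1 TOPS/W implies, the short calculation below ties the reported clock to throughput and power. The 16x16 MAC array size is a hypothetical stand-in, not a figure from the paper:

```python
clock_hz = 850e6             # reported simulated clock rate
efficiency_ops_per_w = 1e12  # reported 1 TOPS/W

# A hypothetical 16x16 MAC array performs 2 operations
# (one multiply + one add) per cell per cycle:
array_dim = 16
throughput = 2 * array_dim**2 * clock_hz  # operations per second

# At 1 TOPS/W, power is simply throughput / efficiency:
power_w = throughput / efficiency_ops_per_w
print(f"throughput: {throughput / 1e9:.0f} GOPS")   # ~435 GOPS
print(f"implied power: {power_w * 1e3:.0f} mW")     # ~435 mW
```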
When they put together a five-layer convolutional neural network using this TPU, it hit an accuracy of up to 88% in recognizing MNIST images while drawing only 295 μW of power, a lower power budget than current convolutional acceleration hardware achieves.
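For a sense of the scale of such a network, here is a full-precision PyTorch sketch of a five-layer CNN for MNIST. The layer widths and kernel sizes are assumptions for illustration; the paper's network runs with 2-bit quantized arithmetic on the nanotube hardware, which this sketch does not reproduce:

```python
import torch
import torch.nn as nn

# Five weight layers: three convolutions plus two fully connected layers.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 14x14
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 64), nn.ReLU(),
    nn.Linear(64, 10),  # 10 MNIST digit classes
)

logits = model(torch.randn(1, 1, 28, 28))  # one MNIST-sized grayscale image
print(logits.shape)  # torch.Size([1, 10])
```

Even a small network like this spends most of its compute in the convolutions, which is exactly the workload the systolic MAC array is built to accelerate.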
Granted, the practical use of the current 180 nm-class TPU is limited, but the researchers see their work as a major step toward next-generation, energy-efficient AI hardware built on carbon nanotube technology.
Source(s)
TechXplore (in English)