
MIT reveals super-efficient AI chips for smartphones

The new MIT AI chip brings power-efficient neural-net processing to mobile and IoT devices. (Source: MIT)
Researchers at MIT have simplified and adapted the intricate algorithms required by large neural networks so that they can run on mobile devices and smart home appliances.

MIT is one of the premier US institutions experimenting with emerging technologies, and it has contributed to the mass adoption of many innovations. One of MIT's main focuses in recent years has been neural networks and their potential impact on consumer tech, and its latest ultra-low-voltage AI chip, which can be integrated into mobile devices, stands as proof. Most mobile devices today can perform AI-intensive tasks (e.g. voice and facial recognition) only by uploading data to internet servers and then downloading the results. In an effort to simplify neural networks and increase their efficiency, MIT has developed an AI chip that boosts computation capacity sevenfold while consuming 95% less energy. Such a reduced energy footprint allows the improved AI chip to be integrated into mobile devices and even IoT appliances.

Large neural networks can perform advanced tasks such as photo and video manipulation, simulating aspects of human perception, or writing poetry and novels, but these tasks are very energy-intensive and require numerous processing nodes. The MIT scientists sought to simplify the machine-learning algorithms and adapt them for use in handheld devices. They came up with an analog dot-product circuit that retains the basics of a neural net but avoids shuttling the analyzed data back and forth between processors and memory so many times. For now, MIT's prototype chip can calculate the dot products for up to 16 nodes in a single step, cutting the energy spent on repeated processor-memory passes.
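To put the core operation in concrete terms: each node in a neural network computes a dot product between an input vector and a weight vector. The minimal Python sketch below (illustrative only; the names and vector sizes are assumptions, not MIT's actual hardware design) shows how stacking the weights of 16 nodes into one matrix yields all 16 dot products in a single pass over the input, which is the kind of data-movement savings the in-memory approach targets:

    import numpy as np

    # Each node's output is the dot product of the shared input vector
    # with that node's own weight vector.
    rng = np.random.default_rng(0)
    inputs = rng.standard_normal(64)          # one input vector (assumed 64 features)
    weights = rng.standard_normal((16, 64))   # weight vectors for 16 nodes

    # One matrix-vector product computes all 16 dot products in a single
    # step, instead of 16 separate processor-memory round trips.
    outputs = weights @ inputs
    assert outputs.shape == (16,)

On MIT's chip this batching happens in analog circuitry inside the memory array itself, so the inputs never have to travel to a separate processor at all.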

IBM vice president of AI Dario Gil notes that "this is a promising real-world demonstration of SRAM-based in-memory analog computing for deep-learning applications," and that "the results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays. It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT in the future."


Bogdan Solca, 2018-02-15 (Update: 2018-02-15)