Google DeepMind finds 2.2 million new materials in just one year, 381,000 of them stable
In under a year, Google's DeepMind division has discovered 381,000 stable materials using an AI model called Graph Networks for Materials Exploration (GNoME). These structures, which could benefit many domains, represent just a fraction of the 2.2 million crystalline structures predicted by the model.
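As the name suggests, GNoME is built on graph neural networks, which represent a crystal as a graph of atoms (nodes) connected by bonds or spatial proximity (edges), and learn to predict stability-related quantities from that graph. The sketch below illustrates this general idea, not DeepMind's actual architecture: it uses PyTorch Geometric to regress a single scalar (here assumed to be formation energy per atom) from a crystal graph, and all layer sizes, feature dimensions, and names are illustrative assumptions.

```python
# Minimal sketch of a graph network for crystal property prediction.
# Illustrative only: NOT DeepMind's GNoME architecture. Layer sizes,
# feature choices and the target (formation energy per atom) are assumptions.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class CrystalGNN(nn.Module):
    def __init__(self, num_atom_features: int = 16, hidden: int = 64):
        super().__init__()
        # Message-passing layers aggregate information from neighboring atoms.
        self.conv1 = GCNConv(num_atom_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        # Readout maps the pooled per-crystal embedding to one scalar,
        # e.g. formation energy per atom, a proxy for stability.
        self.readout = nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        # x: [num_atoms, num_atom_features] per-atom features (e.g. element embeddings)
        # edge_index: [2, num_edges] atom-atom connectivity (bonds / near neighbors)
        # batch: [num_atoms] maps each atom to its crystal within the mini-batch
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        # Pool atom embeddings into one vector per crystal, then predict the target.
        return self.readout(global_mean_pool(h, batch))

# Toy usage: one "crystal" of 4 atoms with symmetric (undirected) edges.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
batch = torch.zeros(4, dtype=torch.long)
model = CrystalGNN()
print(model(x, edge_index, batch))  # predicted energy, shape [1, 1]
```

In a real pipeline, the toy random features would be replaced by element and geometry descriptors derived from the crystal structure, and the model would be trained against computed formation energies.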
The AI was trained on data collected over the past decade through the Materials Project, hosted at the Lawrence Berkeley National Laboratory in the US. In theory, the 381,000 stable structures could play a key role in the accelerated development of superconductors, supercomputers, EVs and batteries, but all of these materials first need to be synthesized so their properties can be tested.
So far, independent experiments have synthesized and confirmed the stability of only 736 of these materials. To accelerate synthesis of the remaining structures, the DeepMind researchers are immediately releasing the data on all the predicted materials and making the AI model publicly available.
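Because the structure data is released openly, the predicted crystals can be inspected with standard materials-science tooling. Below is a minimal sketch using the pymatgen library, assuming (as is typical for such releases) that the structures are distributed as CIF files; the file name is a hypothetical placeholder, not an actual file from the release.

```python
# Sketch: inspecting a released crystal structure with pymatgen.
# Assumes the structures ship as CIF files; "example_material.cif"
# is a hypothetical placeholder, not a file from the actual release.
from pymatgen.core import Structure

structure = Structure.from_file("example_material.cif")
print(structure.composition.reduced_formula)  # reduced chemical formula
print(structure.density)                      # density in g/cm^3
print(structure.get_space_group_info())       # (symbol, international number)
```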
The most promising structures appear to be among 52,000 compounds whose layered nature resembles that of graphene. These are said to be ideal candidates for new kinds of superconductors for use in MRI scanners, quantum computers and nuclear fusion reactors. Additionally, the AI model found 528 alternative lithium-ion conductors that could improve EV battery efficiency.