Hardware implementation of memristor-based artificial neural networks

Detailed bibliography
Published in: Nature Communications, Vol. 15, No. 1, Article 1974 (40 pages)
Main authors: Aguirre, Fernando; Sebastian, Abu; Le Gallo, Manuel; Song, Wenhao; Wang, Tong; Yang, J. Joshua; Lu, Wei; Chang, Meng-Fan; Ielmini, Daniele; Yang, Yuchao; Mehonic, Adnan; Kenyon, Anthony; Villena, Marco A.; Roldán, Juan B.; Wu, Yuting; Hsu, Hung-Hsi; Raghavan, Nagarajan; Suñé, Jordi; Miranda, Enrique; Eltawil, Ahmed; Setti, Gianluca; Smagulova, Kamilya; Salama, Khaled N.; Krestinskaya, Olga; Yan, Xiaobing; Ang, Kah-Wee; Jain, Samarth; Li, Sifan; Alharbi, Osamah; Pazos, Sebastian; Lanza, Mario
Format: Journal Article
Language: English
Publication details: London: Nature Publishing Group UK, 04.03.2024
ISSN: 2041-1723
Description
Abstract: Artificial Intelligence (AI) is currently experiencing a bloom driven by deep learning (DL) techniques, which rely on networks of connected simple computing units operating in parallel. The low communication bandwidth between memory and processing units in conventional von Neumann machines does not support the requirements of emerging applications that rely extensively on large sets of data. More recent computing paradigms, such as high parallelization and near-memory computing, help alleviate the data communication bottleneck to some extent, but paradigm-shifting concepts are required. Memristors, a novel beyond-complementary metal-oxide-semiconductor (CMOS) technology, are a promising choice for memory devices due to their unique intrinsic device-level properties, enabling both storing and computing with a small, massively parallel footprint at low power. Theoretically, this directly translates into a major boost in energy efficiency and computational throughput, but various practical challenges remain. In this work we review the latest efforts towards hardware-based memristive artificial neural networks (ANNs), describing in detail the working principles of each building block and the different design alternatives with their respective advantages and disadvantages, as well as the tools required for accurate estimation of performance metrics. Ultimately, we aim to provide a comprehensive protocol of the materials and methods involved in memristive neural networks, both for those aiming to start working in this field and for experts looking for a holistic approach. Memristors hold promise for massively parallel computing at low power. Aguirre et al. provide a comprehensive protocol of the materials and methods for designing memristive artificial neural networks, with the detailed working principles of each building block and the tools for performance evaluation.
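The abstract's central claim, that memristors can both store and compute, is usually illustrated with a crossbar array: synaptic weights are programmed as device conductances, and applying the input vector as row voltages makes each column current equal a weighted sum by Ohm's law and Kirchhoff's current law, i.e. an analog vector-matrix multiplication performed in place. The short NumPy sketch below shows only this idealized picture; it is not taken from the article, the function names, conductance window, read-voltage range and differential weight-mapping scheme are assumptions, and it ignores the non-idealities (wire resistance, device variability, quantization, read noise) that the review addresses.

import numpy as np

def weights_to_conductances(W, g_min=1e-6, g_max=1e-4):
    # Hypothetical mapping: signed weights become a differential pair of
    # conductance matrices (G+, G-) inside an assumed window [g_min, g_max] S.
    scale = (g_max - g_min) / np.max(np.abs(W))
    g_pos = g_min + np.maximum(W, 0.0) * scale
    g_neg = g_min + np.maximum(-W, 0.0) * scale
    return g_pos, g_neg, scale

def crossbar_mvm(v_in, g_pos, g_neg):
    # Ideal crossbar read-out: each column current is sum_i V_i * G_ij
    # (Ohm's law per device, Kirchhoff's current law per column).
    i_pos = v_in @ g_pos
    i_neg = v_in @ g_neg
    return i_pos - i_neg   # differential columns recover the signed product

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))        # toy ANN layer: 4 inputs, 3 outputs
v = rng.uniform(-0.2, 0.2, size=4)     # read voltages, assumed +/-0.2 V range
g_pos, g_neg, scale = weights_to_conductances(W)
i_out = crossbar_mvm(v, g_pos, g_neg)
print(np.allclose(i_out / scale, v @ W))   # True: the ideal array reproduces v @ W

The differential (G+, G-) pair is one common way to represent signed weights with strictly positive conductances; a physical design would add digital-to-analog converters on the rows, analog-to-digital converters or sense amplifiers on the columns, and compensation for the non-idealities listed above.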
DOI: 10.1038/s41467-024-45670-9