Floating Gate Transistor‐Based Accurate Digital In‐Memory Computing for Deep Neural Networks

Published in: Advanced Intelligent Systems, Volume 4, Issue 12
Main authors: Han, Runze; Huang, Peng; Xiang, Yachen; Hu, Hong; Lin, Sheng; Dong, Peiyan; Shen, Wensheng; Wang, Yanzhi; Liu, Xiaoyan; Kang, Jinfeng
Format: Journal Article
Language: English
Published: Weinheim: John Wiley & Sons, Inc. (Wiley), 01.12.2022
ISSN: 2640-4567
Description
Summary: To improve the computing speed and energy efficiency of deep neural network (DNN) applications, in-memory computing with nonvolatile memory (NVM) is proposed to address the time-consuming and energy-hungry data-shuttling issue. Herein, a digital in-memory computing method for convolution computing, which holds the key to DNNs, is proposed. Based on the proposed method, a floating gate transistor-based in-memory computing chip for accurate convolution computing with high parallelism is created. Unlike analogue or digital–analogue-mixed in-memory computing techniques, the proposed digital in-memory computing method achieves central processing unit (CPU)-equivalent precision with the same neural network architecture and parameters. Based on the fabricated floating gate transistor-based in-memory computing chip, a hardware LeNet-5 neural network is built. The chip achieves 96.25% accuracy on the full Modified National Institute of Standards and Technology (MNIST) database, the same as the result computed by the CPU with the same neural network architecture and parameters. In short, a digital in-memory computing method for convolution computing is proposed and a floating gate transistor-based in-memory computing chip for accurate, highly parallel convolution computing is created; the recognition accuracy of the hardware neural network system is the same as that of the software.
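The abstract does not give implementation details of the digital in-memory convolution, so the following Python sketch is only an illustration of the general idea behind bit-exact (digital) convolution and its CPU equivalence: weight bits are processed as separate binary planes, as if each bit were stored in its own memory cell, and the shift-and-add recombination reproduces the ordinary integer convolution exactly. All function names and parameters (conv2d_reference, conv2d_bit_serial, n_bits) are hypothetical and not taken from the paper.

import numpy as np

def conv2d_reference(x, w):
    # Plain integer 2D convolution (valid padding): the CPU baseline.
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    y = np.zeros((oh, ow), dtype=np.int64)
    for i in range(oh):
        for j in range(ow):
            y[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return y

def conv2d_bit_serial(x, w, n_bits=8):
    # Same convolution, but the weights are decomposed into binary bit planes,
    # as if each bit were held in a separate nonvolatile cell; partial sums are
    # recombined with shift-and-add, so the result is bit-exact.
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    y = np.zeros((oh, ow), dtype=np.int64)
    for b in range(n_bits):
        w_bit = (w >> b) & 1  # one binary bit plane of the weights
        partial = np.zeros((oh, ow), dtype=np.int64)
        for i in range(oh):
            for j in range(ow):
                partial[i, j] = np.sum(x[i:i + kh, j:j + kw] * w_bit)
        y += partial << b  # shift-and-add recombination
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.integers(0, 256, size=(8, 8), dtype=np.int64)  # 8-bit activations
    w = rng.integers(0, 256, size=(3, 3), dtype=np.int64)  # 8-bit unsigned weights
    assert np.array_equal(conv2d_reference(x, w), conv2d_bit_serial(x, w))
    print("bit-serial result matches the CPU reference exactly")

Because every partial product is computed and accumulated digitally, there is no analogue read-out error to degrade accuracy, which is the sense in which such a scheme can match the CPU result with identical network architecture and parameters.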
DOI: 10.1002/aisy.202200127