Device Specifications for Neural Network Training with Analog Resistive Cross-Point Arrays Using Tiki-Taka Algorithms

Published in: Advanced Intelligent Systems, Volume 7, Issue 5
Main authors: Byun, Jinho; Kim, Seungkun; Kim, Doyoon; Lee, Jimin; Ji, Wonjae; Kim, Seyoung
Format: Journal Article
Language: English
Publication details: Weinheim: John Wiley & Sons, Inc., 01.05.2025
ISSN: 2640-4567
Abstract: Recently, specialized training algorithms for analog cross-point array-based neural network accelerators have been introduced to counteract device non-idealities such as update asymmetry and cycle-to-cycle variation, achieving software-level performance in neural network training. However, a quantitative analysis of how far these algorithms relax device specifications has yet to be conducted. This study provides such an analysis by elucidating the device prerequisites for training with the Tiki-Taka algorithm versions 1 (TTv1) and 2 (TTv2), which leverage the dynamics between multiple arrays to compensate for device non-idealities. A multiparameter simulation is conducted to assess the impact of device non-idealities, including asymmetry, retention, number of pulses, and cycle-to-cycle variation, on neural network training. Using pattern-recognition accuracy as the performance metric, the required device specifications for each algorithm are identified. The results demonstrate that the standard stochastic gradient descent (SGD) algorithm requires stringent device specifications, whereas TTv2 permits more lenient specifications than TTv1 across all examined non-idealities. The analysis provides guidelines for the development, optimization, and utilization of devices for high-performance neural network training with Tiki-Taka algorithms.

This study investigates the device specifications required for neural network training using analog resistive cross-point arrays with the Tiki-Taka training algorithms. By demonstrating the robustness of these algorithms against non-ideal update characteristics, it quantitatively shows how hardware-aware training can relax device specifications, paving the way for implementing analog deep learning accelerators with real devices.
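To make the abstract's two key ideas concrete, the sketch below simulates an asymmetric, noisy analog weight update and a TTv1-style two-array scheme in which gradients are written to an auxiliary array A and periodically transferred to a main array C. This is a minimal illustration based on the publicly described Tiki-Taka concept, not the paper's simulator: the soft-bound device model and the names `asym`, `c2c`, `gamma`, and `transfer_every`, as well as the row-wise transfer and partial-reset policy, are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pulse_update(w, sign, lr=0.01, asym=0.5, c2c=0.3, w_max=1.0):
    """Apply one stochastic update pulse to an analog weight array.

    Soft-bound device model (assumed, not the paper's exact model):
    the step size shrinks as a weight approaches its bound, positive and
    negative steps differ by the asymmetry factor `asym`, and `c2c` is
    the relative cycle-to-cycle variation of the step size.
    """
    step = lr * (1.0 + c2c * rng.standard_normal(w.shape))
    up = step * (1.0 - w / w_max)            # potentiation, saturates at +w_max
    down = step * asym * (1.0 + w / w_max)   # depression, scaled by asymmetry
    return w + np.where(sign > 0, up, -down) * (sign != 0)

def ttv1_step(A, C, grad, transfer_every, t):
    """One TTv1-style update: write the gradient onto the auxiliary array A,
    then periodically transfer A's content into the main array C.

    Illustrative sketch of the two-array dynamics the abstract refers to;
    the transfer schedule and partial reset of A are assumptions.
    """
    A = pulse_update(A, np.sign(-grad))                  # noisy, asymmetric write
    if t % transfer_every == 0:                          # sparse transfer phase
        row = (t // transfer_every) % A.shape[0]         # one row per transfer
        C[row] = pulse_update(C[row], np.sign(A[row]))   # push A into C
        A[row] *= 0.5                                    # partial reset (assumed)
    return A, C

# Effective weight seen by the forward pass: W = gamma * A + C (assumed form)
gamma = 0.1
A = np.zeros((4, 4))
C = rng.normal(0.0, 0.1, (4, 4))
for t in range(1, 101):
    grad = rng.normal(0.0, 1.0, (4, 4))   # placeholder gradient
    A, C = ttv1_step(A, C, grad, transfer_every=10, t=t)
W = gamma * A + C
```

The point of the two-array split is that the fast, error-prone pulsed updates land on A, whose accumulated content is only occasionally and coarsely transferred to C, so asymmetric and noisy single-pulse behavior averages out before it reaches the weights that matter; this is the mechanism whose device requirements the paper quantifies for TTv1 and TTv2.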
DOI: 10.1002/aisy.202400543