Energy-efficient XNOR-free In-Memory BNN Accelerator with Input Distribution Regularization

Detailed bibliography
Published in: Digest of Technical Papers - IEEE/ACM International Conference on Computer-Aided Design, pp. 1-9
Main authors: Kim, Hyungjun; Oh, Hyunmyung; Kim, Jae-Joon
Medium: Conference paper
Language: English
Publisher information: Association for Computing Machinery, 02.11.2020
ISSN: 1558-2434
Description
Summary: SRAM-based in-memory Binary Neural Network (BNN) accelerators are garnering interest as a platform for energy-efficient edge neural network computing thanks to their compactness in terms of hardware and neural network parameter size. However, previous works had to modify SRAM cells to support XNOR operations on the memory array, resulting in limited area and energy efficiency. In this work, we present a conversion method that replaces the signed inputs (+1/-1) of a BNN with unsigned inputs (1/0), and vice versa, without computation error. The method enables BNN computing on conventional 6T SRAM arrays and improves area and energy efficiency. We also demonstrate that further energy saving is possible by skewing the distribution of binary input data through regularization during network training. Evaluation results show that the proposed techniques improve inference energy efficiency by up to 9.4x over previous works across various benchmarks.
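
The signed-to-unsigned conversion described in the summary presumably rests on the affine relation x = 2b - 1 between signed activations x in {-1, +1} and unsigned bits b in {0, 1}. The sketch below is a minimal numerical check of that standard identity, not the paper's actual circuit or algorithm; the variable names and vector size are illustrative assumptions.

```python
import numpy as np

# Sketch (assumption): with x = 2*b - 1, the signed dot product w.x equals
# 2*(w.b) - sum(w), so a conventional 6T SRAM array that only accumulates
# over 0/1 inputs can still produce the BNN pre-activation exactly.
rng = np.random.default_rng(0)

w = rng.choice([-1, 1], size=256)      # binary weights (+1/-1)
x = rng.choice([-1, 1], size=256)      # signed binary activations (+1/-1)
b = (x + 1) // 2                       # unsigned encoding (1/0)

signed_dot = w @ x                     # what the BNN layer requires
unsigned_dot = 2 * (w @ b) - w.sum()   # computable from 0/1 inputs only

assert signed_dot == unsigned_dot      # identity holds with no error
```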
DOI: 10.1145/3400302.3415641