Energy-efficient XNOR-free In-Memory BNN Accelerator with Input Distribution Regularization

Published in: Digest of Technical Papers - IEEE/ACM International Conference on Computer-Aided Design, pp. 1-9
Main Authors: Kim, Hyungjun; Oh, Hyunmyung; Kim, Jae-Joon
Format: Conference paper
Language: English
Published: Association for Computing Machinery, 02.11.2020
ISSN:1558-2434
Description
Summary: SRAM-based in-memory Binary Neural Network (BNN) accelerators are garnering interest as a platform for energy-efficient edge neural network computing thanks to their compact hardware and small neural network parameter size. However, previous works had to modify SRAM cells to support XNOR operations on the memory array, resulting in limited area and energy efficiency. In this work, we present a conversion method which replaces the signed inputs (+1/-1) of a BNN with unsigned inputs (1/0) without computation error, and vice versa. The method enables BNN computing on conventional 6T SRAM arrays and improves area and energy efficiency. We also demonstrate that further energy saving is possible by skewing the distribution of binary input data through regularization during network training. Evaluation results show that the proposed techniques improve inference energy efficiency by up to 9.4x over previous works across various benchmarks.
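The signed-to-unsigned conversion described in the abstract can be illustrated with a small sketch. This is an assumed reconstruction of the underlying algebra, not the paper's exact method: mapping a signed input x in {+1, -1} to an unsigned input u = (x + 1)/2 in {1, 0} gives x = 2u - 1, so a signed dot product can be recovered from an unsigned one as w . x = 2(w . u) - sum(w), with no computation error.

```python
import random

def dot(a, b):
    # Plain integer dot product.
    return sum(x * y for x, y in zip(a, b))

random.seed(0)
n = 16
w = [random.choice([-1, 1]) for _ in range(n)]          # binary weights (+1/-1)
x_signed = [random.choice([-1, 1]) for _ in range(n)]    # signed BNN inputs
x_unsigned = [(x + 1) // 2 for x in x_signed]            # +1 -> 1, -1 -> 0

# Signed result recovered exactly from the unsigned dot product:
#   w . x = 2 * (w . u) - sum(w),  since x = 2u - 1
signed_result = dot(w, x_signed)
converted = 2 * dot(w, x_unsigned) - sum(w)
assert signed_result == converted
```

Because the unsigned form only accumulates weights where the input bit is 1, skewing the input distribution toward 0 (as the regularization in this work encourages) reduces the number of accumulations, which is presumably the source of the additional energy saving.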
DOI:10.1145/3400302.3415641