Xcel-RAM: Accelerating Binary Neural Networks in High-Throughput SRAM Compute Arrays

Detailed Bibliography
Published in: IEEE Transactions on Circuits and Systems I: Regular Papers, Volume 66, Issue 8, pp. 3064–3076
Main Authors: Agrawal, Amogh; Jaiswal, Akhilesh; Roy, Deboleena; Han, Bing; Srinivasan, Gopalakrishnan; Ankit, Aayush; Roy, Kaushik
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2019
ISSN: 1549-8328, 1558-0806
Description
Summary: Deep neural networks are a biologically inspired class of algorithms that have recently demonstrated state-of-the-art accuracy in large-scale classification and recognition tasks. Hardware acceleration of deep networks is of paramount importance to ensure their ubiquitous presence in future computing platforms. Indeed, a major landmark enabling efficient hardware accelerators for deep networks is the recent advance from the machine learning community demonstrating the viability of aggressively scaled deep binary networks. In this paper, we demonstrate how deep binary networks can be accelerated in modified von Neumann machines by enabling binary convolutions within the static random access memory (SRAM) arrays. In general, binary convolutions consist of bit-wise exclusive-NOR (XNOR) operations followed by a population count (popcount). We present two proposals: one based on a charge-sharing approach to perform vector XNOR and an approximate popcount, and another based on bit-wise XNOR followed by a digital bit-tree adder for an accurate popcount. We highlight the tradeoffs in circuit complexity, speed-up, and classification accuracy for both approaches. Key techniques presented in the manuscript include the use of a low-precision, low-overhead analog-to-digital converter (ADC) to achieve a fairly accurate popcount for the charge-sharing scheme, and the sectioning of the SRAM array by adding switches onto the read-bitlines, thereby achieving improved parallelism. Our results on the CIFAR-10 and SVHN benchmark image classification datasets, using a binarized neural network architecture, show energy improvements of up to 6.1× and 2.3× for the two proposals, compared to conventional SRAM banks. In terms of latency, improvements of up to 15.8× and 8.1× were achieved for the two respective proposals.
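
For readers unfamiliar with the kernel the abstract describes, the following is a minimal software sketch of a binary (XNOR-plus-popcount) dot product, the core operation that both proposals map into the SRAM array. The function names, the 64-bit word packing, and the padding handling are illustrative assumptions for this sketch, not the authors' circuit implementation.

    # Minimal sketch of an XNOR-net style binary dot product.
    # +1/-1 values are encoded as 1/0 bits and packed into 64-bit words;
    # the paper performs the XNOR and popcount steps inside SRAM arrays.

    def pack_bits(values):
        """Pack a list of +1/-1 values into 64-bit words (bit = 1 for +1)."""
        words = []
        for i in range(0, len(values), 64):
            word = 0
            for j, v in enumerate(values[i:i + 64]):
                if v > 0:
                    word |= 1 << j
            words.append(word)
        return words

    def binary_dot(w_words, a_words, n):
        """Binary dot product over n elements: XNOR, popcount, rescale.

        popcount(XNOR) counts matching bits, so dot = 2 * matches - n.
        """
        mask = (1 << 64) - 1
        matches = 0
        for w, a in zip(w_words, a_words):
            matches += bin((~(w ^ a)) & mask).count("1")
        # Padding bits beyond n are 0 in both operands, so each pad bit
        # XNORs to a spurious match; subtract that contribution.
        pad = len(w_words) * 64 - n
        matches -= pad
        return 2 * matches - n

    if __name__ == "__main__":
        w = pack_bits([1, -1, 1, 1, -1])
        a = pack_bits([1, 1, -1, 1, -1])
        print(binary_dot(w, a, 5))  # -> 1, matching the +1/-1 dot product

The rescaling step follows from counting: if m of the n bit pairs match, the remaining n - m pairs each contribute -1, so the dot product is m - (n - m) = 2m - n.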
DOI: 10.1109/TCSI.2019.2907488