Tiny but Accurate: A Pruned, Quantized and Optimized Memristor Crossbar Framework for Ultra Efficient DNN Implementation


Saved in:
Detailed bibliography
Published in: Proceedings of the ASP-DAC ... Asia and South Pacific Design Automation Conference, pp. 301-306
Main authors: Ma, Xiaolong; Yuan, Geng; Lin, Sheng; Ding, Caiwen; Yu, Fuxun; Liu, Tao; Wen, Wujie; Chen, Xiang; Wang, Yanzhi
Format: Conference paper
Language: English
Published: IEEE, 01.01.2020
ISSN: 2153-697X
Online access: Get full text
Description
Summary: The memristor crossbar array has emerged as an intrinsically suitable matrix-computation and low-power acceleration framework for DNN applications. Techniques such as memristor-based weight pruning and memristor-based quantization have been studied; however, a high-accuracy solution for these techniques remains an open problem. In this paper, we propose a memristor-based DNN framework that combines structured weight pruning and quantization by incorporating the ADMM algorithm for better pruning and quantization performance. We also identify the non-optimality of the ADMM solution in weight pruning and the unused data paths in a structured pruned model. We design a software-hardware co-optimization framework containing the first proposed Network Purification and Unused Path Removal algorithms, which post-process a structured pruned model after the ADMM steps. By taking memristor hardware constraints into account across the whole framework, we achieve an extremely high compression rate with minimal accuracy loss. When quantizing a structured pruned model, our framework achieves nearly no accuracy loss after quantizing weights to an 8-bit memristor weight representation. We share our models at the anonymous link https://bit.ly/2VnMUy0.
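As a rough illustration of the abstract's core idea (not the authors' implementation), ADMM-based combined pruning and quantization alternates a loss-driven weight update with a Euclidean projection of the weights onto the constraint set, plus a dual-variable update. The NumPy sketch below shows the two projections and one dual step; function names, the column-sparsity constraint, and the uniform 8-bit grid are illustrative assumptions.

```python
import numpy as np

def project_column_sparse(W, k):
    # Euclidean projection onto {W : at most k nonzero columns}
    # (one form of structured sparsity): keep the k columns with
    # the largest L2 norm and zero out the rest.
    norms = np.linalg.norm(W, axis=0)
    keep = np.argsort(norms)[-k:]
    P = np.zeros_like(W)
    P[:, keep] = W[:, keep]
    return P

def project_quantized(W, bits=8):
    # Euclidean projection onto a uniform symmetric grid with
    # 2**bits levels: round each weight to its nearest level.
    scale = np.max(np.abs(W)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return W.copy()
    q = np.clip(np.round(W / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

def admm_prune_step(W, U, k):
    # One ADMM iteration for the pruning constraint:
    #   Z-update: project (W + U) onto the constraint set,
    #   U-update: accumulate the residual (scaled dual ascent).
    Z = project_column_sparse(W + U, k)
    U = U + W - Z
    return Z, U
```

In a full training loop, the projected variable Z and dual variable U would feed back into the loss as a penalty term rho/2 * ||W - Z + U||^2, steering W toward a structurally sparse, quantizable solution.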
DOI: 10.1109/ASP-DAC47756.2020.9045658