DP-Nets: Dynamic programming assisted quantization schemes for DNN compression and acceleration

Bibliographic Details
Published in: Integration (Amsterdam), Vol. 82, pp. 147–154
Main Authors: Yang, Dingcheng, Yu, Wenjian, Ding, Xiangyun, Zhou, Ao, Wang, Xiaoyi
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.01.2022
ISSN: 0167-9260, 1872-7522
Description
Summary: In this work, we present effective quantization schemes called DP-Nets for the compression and acceleration of deep neural networks (DNNs). A key ingredient is a novel dynamic programming (DP) based algorithm that obtains the optimal solution of scalar K-means clustering. Building on approaches with regularization and with a quantization function, two weight quantization methods, called DPR and DPQ respectively, are proposed for compressing normal DNNs, along with a DP-Nets-based technique for inference acceleration. Experiments show that DP-Nets produce models with higher inference accuracy than recently proposed counterparts while achieving the same or larger compression. The schemes are also extended to compressing robust DNNs; the relevant experiments show 16X compression of the robust ResNet-18 model with less than a 3% accuracy drop on both natural and adversarial examples. Experiments on FPGA demonstrate that the inference-acceleration technique brings over 5X speedup on matrix–vector multiplication.

Highlights:
• A dynamic programming (DP) method for the scalar K-means problem is proposed.
• Two DP-based methods are proposed for DNN compression and acceleration.
• The two methods are extended for compressing robust DNNs.
• The technique for inference acceleration with compressed DNNs is validated on FPGA.
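For context on the key ingredient named in the summary, below is a minimal sketch of the standard O(k·n²) dynamic program for optimal one-dimensional (scalar) K-means: sort the values, partition the sorted sequence into k contiguous segments, and evaluate each segment's within-cluster squared error in O(1) via prefix sums. The function name and interface here are illustrative only; the paper's DP-Nets algorithm may differ in formulation and use further speedups.

```python
import numpy as np

def kmeans_1d_dp(values, k):
    """Exact scalar K-means via dynamic programming (O(k * n^2) sketch).

    dp[m, i] = minimum within-cluster sum of squared errors (SSE) when
    the first i sorted values are partitioned into m clusters.
    """
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    assert 1 <= k <= n, "need at least one point per cluster"

    # Prefix sums of x and x^2 give the SSE of any sorted segment in O(1).
    s1 = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))

    def sse(j, i):
        # SSE of segment x[j:i] about its mean (j < i, i exclusive).
        total, cnt = s1[i] - s1[j], i - j
        return (s2[i] - s2[j]) - total * total / cnt

    dp = np.full((k + 1, n + 1), np.inf)
    split = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for m in range(1, k + 1):
        for i in range(m, n + 1):
            for j in range(m - 1, i):          # last cluster is x[j:i]
                cost = dp[m - 1, j] + sse(j, i)
                if cost < dp[m, i]:
                    dp[m, i], split[m, i] = cost, j

    # Backtrack to recover the optimal cluster centers (segment means).
    centers, i = [], n
    for m in range(k, 0, -1):
        j = split[m, i]
        centers.append((s1[i] - s1[j]) / (i - j))
        i = j
    return sorted(centers), dp[k, n]
```

Applied to a flattened weight tensor, the returned centers would serve as a quantization codebook, with each weight mapped to its nearest center; how DPR and DPQ integrate this with regularization and a quantization function is detailed in the paper itself.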
DOI: 10.1016/j.vlsi.2021.10.002