Design of Sparse CNN Accelerator Based on Inter-Frame Data Reuse
| Published in: | Ji suan ji gong cheng (Computer Engineering), Vol. 49, No. 12, pp. 55-62 |
|---|---|
| Main Author: | |
| Format: | Journal Article |
| Language: | Chinese; English |
| Published: | Editorial Office of Computer Engineering, 15.12.2023 |
| Subjects: | |
| ISSN: | 1000-3428 |
| Summary: | Convolutional Neural Networks (CNNs) are widely used for object detection and other tasks in video applications. However, conventional CNN accelerators focus only on accelerating single-image inference and do not exploit the data redundancy between successive video frames to accelerate video tasks. CNN accelerators that currently use inter-frame data reuse suffer from low sparsity, large model size, and high computational complexity. To solve these problems, a design using learned-step-size low-precision quantization is proposed to increase the sparsity of differential frames. Furthermore, power-of-two scales are proposed to implement hardware-friendly quantization. The design also uses the Winograd algorithm to reduce the computational complexity of the convolution operator. On this basis, an input-channel bitmap compression scheme is proposed that exploits the sparsity of both activations and weights to enable full zero skipping. Based on the YOLOv3-tiny network, the proposed quantization method and sparse CNN accelerator are verified on a Field Programmable Gate Array (FPGA) platform using subsets of the ImageNet ILSVRC2015 VID and DAC2020 datasets. The results show that the proposed quantization method achieves 4-bit full-integer quantization with a loss of less than 2% in mean Average Precision (mAP). Owing to inter-frame data reuse, the designed sparse CNN accelerator achieves a performance of 814.2×10⁹ operations/s and an energy-efficiency ratio of 201.1×10⁹ operations/s/W. Compared with other FPGA-based accelerators, the designed accelerator achieves 1.77-8.99 times higher performance and 1.91-5.56 times higher energy efficiency. |
|---|---|
| DOI: | 10.19678/j.issn.1000-3428.0066172 |
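The two ideas the abstract combines — quantizing the *difference* between consecutive frames rather than each frame, and restricting the quantization scale to a power of two so the scaling multiply reduces to a bit shift in hardware — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `quantize_pow2`, the step size, and the noise magnitude are assumptions for demonstration.

```python
import numpy as np

def quantize_pow2(x, step, bits=4):
    """Quantize x to signed `bits`-bit integers using a power-of-two scale.

    The learned step size is snapped to the nearest power of two, so the
    divide/multiply by the scale can be implemented as a bit shift.
    Illustrative sketch; not the paper's exact scheme.
    """
    pow2_step = 2.0 ** np.round(np.log2(step))   # e.g. 0.3 -> 0.25
    qmax = 2 ** (bits - 1) - 1                   # +7 for 4-bit signed
    qmin = -(2 ** (bits - 1))                    # -8 for 4-bit signed
    q = np.clip(np.round(x / pow2_step), qmin, qmax)
    return q.astype(np.int8), pow2_step

# Inter-frame data reuse: quantize the differential frame. Small changes
# between consecutive frames round to zero, raising sparsity so that a
# zero-skipping accelerator does less work.
rng = np.random.default_rng(0)
frame_prev = rng.normal(size=(8, 8)).astype(np.float32)
frame_curr = frame_prev + rng.normal(scale=0.02, size=(8, 8)).astype(np.float32)

q_diff, scale = quantize_pow2(frame_curr - frame_prev, step=0.3)
sparsity = float(np.mean(q_diff == 0))
print(f"pow2 scale = {scale}, differential-frame sparsity = {sparsity:.2f}")
```

Because the per-pixel change between the two frames is far smaller than the quantization step, almost every entry of the quantized differential frame is zero, which is exactly the sparsity the accelerator's bitmap compression and zero skipping exploit.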