Adaptive Scaling Filter Pruning Method for Vision Networks With Embedded Devices

Bibliographic Details
Published in: IEEE Access, Vol. 12, pp. 123771-123781
Main Authors: Ko, Hyunjun; Kang, Jin-Ku; Kim, Yongwoo
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
ISSN: 2169-3536
Description
Summary: Owing to improvements in computing power, deep learning technology using convolutional neural networks (CNNs) has recently been used in various fields. However, using CNNs on edge devices is challenging because of the large amount of computation required to achieve high performance. To solve this problem, pruning, which removes redundant parameters and computations, has been widely studied. However, conventional pruning methods require two learning processes, which is time-consuming and resource-intensive, and because they prune the unpruned network only once, they cannot reflect the redundancy that emerges in the pruned network. Therefore, in this paper, we use a single learning process and propose an adaptive scaling method that dynamically adjusts the size of the network to reflect the changing redundancy of the pruned network. To verify the proposed methods, we conduct experiments on various datasets and networks. In our experiments with ResNet-50 on the ImageNet dataset, pruning 50.1% and 74.0% of FLOPs decreased top-1 accuracy by 0.92% and 3.38% and improved inference time by 26.4% and 58.9%, respectively. In addition, pruning 27.37%, 36.84%, and 46.41% of FLOPs with YOLOv7 on the COCO dataset reduced mAP (0.5-0.95) by 1.2%, 2.2%, and 2.9% and improved inference time by 12.9%, 16.9%, and 19.3%, respectively.
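To make the general idea concrete, the following is a minimal PyTorch sketch of filter pruning carried out within a single training run, with the keep ratio re-evaluated as training progresses. This is not the paper's algorithm: the L1-norm filter scores, the soft (zeroing) pruning, the linear keep-ratio schedule, and the toy model are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def filter_l1_scores(conv: nn.Conv2d) -> torch.Tensor:
    # One importance score per output filter: the L1 norm of its weights.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def soft_prune_filters(conv: nn.Conv2d, keep_ratio: float) -> None:
    # Zero the least-important filters in place. Soft pruning lets pruned
    # filters recover in later epochs, so a single learning process suffices.
    scores = filter_l1_scores(conv)
    n_keep = max(1, int(round(keep_ratio * scores.numel())))
    threshold = scores.topk(n_keep).values.min()
    mask = (scores >= threshold).float().view(-1, 1, 1, 1)
    with torch.no_grad():
        conv.weight.mul_(mask)

# Toy CNN and dummy data, just to make the loop runnable.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))

epochs, final_keep = 5, 0.5
for epoch in range(epochs):
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Hypothetical adaptive schedule: anneal the keep ratio toward its target
    # so redundancy is re-measured on the already-pruned network each epoch.
    keep_ratio = 1.0 - (1.0 - final_keep) * (epoch + 1) / epochs
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            soft_prune_filters(m, keep_ratio)
    print(f"epoch {epoch}: loss={loss.item():.3f}, keep_ratio={keep_ratio:.2f}")

Because soft pruning only zeroes weights, tensor shapes stay fixed and pruned filters can recover during the same run; inference-time gains such as those reported in the summary require physically removing the pruned filters after training.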
DOI: 10.1109/ACCESS.2024.3454329