Sparse Kronecker Canonical Polyadic Decomposition for Convolutional Neural Networks Compression


Detailed Description

Bibliographic Details
Published in: 2024 IEEE 12th International Conference on Information, Communication and Networks (ICICN), pp. 402-407
Main Authors: Qi, Mengmeng; Wang, Dingheng; Yang, Wei; Wang, Fuyong; Liu, Zhongxin; Chen, Zengqiang
Format: Conference Proceedings
Language: English
Published: IEEE, 21 Aug 2024
Online Access: Full Text
Description
Abstract: In recent years, convolutional neural networks (CNNs) have continued to play a crucial role in various domains including computer vision. The lightweighting of CNNs is particularly important for embedded industrial scenarios with limited computational resources. Among the numerous approaches to lightweighting, tensor decomposition methods have demonstrated unique advantages such as conciseness, flexibility, and a well-developed theory of low-rank approximation. However, how to balance the compression ratio and accuracy remains a major challenge currently faced by tensor decomposition. In this paper, we propose a novel method for compressing and accelerating CNNs, termed Sparse KCP decomposition. We utilize Sparse KCP decomposition to design a three-layer network structure called Sparse Bottleneck, and we employ large convolutional kernels to mitigate the accuracy degradation resulting from compression. Extensive experiments conducted on the CIFAR-10 and ImageNet benchmark datasets demonstrate that the proposed Sparse KCP method achieves significant compression and acceleration rates.
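As context for the low-rank idea the abstract invokes, the sketch below illustrates the parameter savings of a plain rank-R CP factorization of a 4-way convolutional kernel tensor. This is an illustration of the general CP compression principle only, not the paper's Sparse KCP variant (which additionally combines Kronecker structure and sparsity); all function names and the example sizes are hypothetical.

```python
# Parameter counts for a dense conv kernel vs. a rank-R CP factorization.
# A k x k x C_in x C_out kernel tensor factorized by CP into one factor
# matrix per mode needs only R*(k + k + C_in + C_out) parameters.

def conv_params(k, c_in, c_out):
    """Parameters of a dense k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def cp_params(k, c_in, c_out, rank):
    """Parameters after a rank-R CP factorization of the 4-way kernel:
    two spatial factors (k x R each) plus input (C_in x R) and output
    (C_out x R) channel factors."""
    return rank * (k + k + c_in + c_out)

# Example: a 7x7 large-kernel layer with 256 -> 256 channels, CP rank 64.
dense = conv_params(7, 256, 256)   # 3,211,264 parameters
cp = cp_params(7, 256, 256, 64)    # 33,664 parameters
ratio = dense / cp                 # roughly 95x fewer parameters
```

The large spatial kernel (7x7) in this toy example mirrors the abstract's use of large convolutional kernels: CP cost grows only linearly in k, so large kernels become cheap after factorization, which is one reason they can be used to offset compression-induced accuracy loss.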
DOI: 10.1109/ICICN62625.2024.10761380