Sensitivity pruner: Filter-Level compression algorithm for deep neural networks

Bibliographic Details
Published in: Pattern Recognition, Vol. 140, p. 109508
Main Authors: Guo, Suhan; Lai, Bilan; Yang, Suorong; Zhao, Jian; Shen, Furao
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.08.2023
ISSN: 0031-3203, 1873-5142
Description
Summary:
•We integrate the sensitivity measure from SNIP into the “pruning while fine-tuning” framework to form a more powerful pruning strategy by adapting the unstructured pruning measure from SNIP to allow filter-level compression. In practice, the sensitivity score can be easily computed as the gradient of the connection mask applied to the weight matrix (a minimal sketch follows below). Independent of the model structure, the sensitivity score can be applied to most neural networks for pruning purposes.
•We mitigate the sampling bias in the single-shot influence score by introducing the difference between the learned pruning strategy and the single-shot strategy as the second loss component. Filter influence is measured on batched data, where a convolutional layer is used to discover the robust influence from the noise of the batch. The learning process is guided by the score provided by the influence measure.
•Our algorithm can dynamically tweak the training goal between improving model accuracy and pruning more filters. We add a self-adaptive hyper-parameter that balances these two objectives during training.

As neural networks get deeper for better performance, the demand for deployable models on resource-constrained devices also grows. In this work, we propose eliminating less sensitive filters to compress models. The previous method evaluates neuron importance using the connection matrix gradient in a single shot. To mitigate the sampling bias, we integrate this measure into the previously proposed “pruning while fine-tuning” framework. Besides classification errors, we introduce the difference between the learned and the single-shot strategy as the second loss component, with a self-adaptive hyper-parameter that balances the training goal between improving accuracy and pruning more filters. Our Sensitivity Pruner (SP) adapts the unstructured pruning saliency metric to structured pruning tasks and enables the strategy to be derived sequentially to accommodate the updating sparsity. Experimental results demonstrate that SP significantly reduces the computational cost and that the pruned models give comparable or better performance on the CIFAR-10, CIFAR-100, and ILSVRC-12 datasets.
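
The first highlight describes the sensitivity score as the gradient of a connection mask applied to the weight matrix, aggregated per filter. The following is a minimal sketch of that idea in PyTorch, not the authors' implementation: the function name filter_sensitivity, the cross-entropy loss, and the absolute-sum aggregation per output filter are illustrative assumptions. It relies on the fact that, at an all-ones mask, the mask gradient equals the weight gradient multiplied element-wise by the weight, so a single ordinary backward pass suffices.

import torch
import torch.nn as nn
import torch.nn.functional as F

def filter_sensitivity(model, x, y):
    """SNIP-style per-filter sensitivity on one batch (x, y).

    The score is |dL/dc| summed over each output filter, where c is an
    all-ones mask on the convolution weights; at c = 1 this gradient
    equals (dL/dW) * W element-wise.
    """
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            # Element-wise connection sensitivity, summed over each output
            # filter of the [out, in, kH, kW] weight tensor.
            saliency = (module.weight.grad * module.weight).abs()
            scores[name] = saliency.sum(dim=(1, 2, 3))
    # Normalize across layers so filter scores are comparable, as in SNIP.
    total = sum(s.sum() for s in scores.values())
    return {name: s / total for name, s in scores.items()}

# Example usage on a toy CIFAR-sized model (random data, shapes only):
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
scores = filter_sensitivity(model, x, y)  # per-conv-layer score vectors

Filters with the lowest normalized scores would be the pruning candidates; the learned pruning strategy and the self-adaptive loss balancing described in the second and third highlights are not reproduced in this sketch.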
DOI: 10.1016/j.patcog.2023.109508