SV-SAE: Layer-Wise Pruning for Autoencoder Based on Link Contributions

Bibliographic Details
Published in: IEEE Access, Vol. 13, pp. 75666-75678
Main Authors: Rheey, Joohong; Park, Hyunggon
Format: Journal Article
Language:English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
ISSN: 2169-3536
Description
Summary: Autoencoders are deep neural networks widely used for unsupervised learning, particularly in tasks that require feature extraction and dimensionality reduction. While most research focuses on compressing input data, less attention has been given to reducing the size and complexity of the autoencoder model itself, which is crucial for deployment on resource-constrained edge devices. This paper introduces a layer-wise pruning algorithm specifically for multilayer perceptron-based autoencoders; the resulting pruned model is referred to as a Shapley Value-based Sparse AutoEncoder (SV-SAE). Using cooperative game theory, the proposed algorithm models the autoencoder as a coalition of interconnected units and links, where the Shapley value quantifies each component's individual contribution to overall performance. This enables the selective removal of less important components, balancing sparsity against accuracy. Experimental results confirm that the SV-SAE reaches an accuracy of 99.25% while retaining only 10% of the original links. Notably, the SV-SAE remains robust under high sparsity levels with minimal performance degradation, whereas competing algorithms degrade sharply as the pruning ratio increases. Designed for edge environments, the SV-SAE offers an interpretable framework for controlling layer-wise sparsity while preserving essential features in latent representations. The results highlight its potential for efficient deployment in resource-constrained scenarios, where model size and inference speed are critical factors.
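The abstract describes pruning links by their Shapley values, i.e. their average marginal contribution to reconstruction quality across coalitions of links. The paper's actual algorithm is not reproduced here; the following is a minimal illustrative sketch of that general idea, using Monte Carlo permutation sampling over the encoder links of a toy linear autoencoder with random fixed weights. The payoff definition (reduction in reconstruction MSE relative to the empty network), the network sizes, and all names are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear autoencoder, 4 -> 2 -> 4, with fixed random weights (untrained,
# purely for illustrating the Shapley computation over encoder links).
W_enc = rng.normal(size=(4, 2))
W_dec = rng.normal(size=(2, 4))
X = rng.normal(size=(64, 4))

def recon_error(mask_enc):
    """Reconstruction MSE when only the encoder links flagged in mask_enc are kept."""
    Z = X @ (W_enc * mask_enc)
    return np.mean((X - Z @ W_dec) ** 2)

def value(coalition):
    """Coalition payoff: MSE reduction versus the fully pruned (empty) network."""
    mask = np.zeros_like(W_enc)
    for (i, j) in coalition:
        mask[i, j] = 1.0
    return recon_error(np.zeros_like(W_enc)) - recon_error(mask)

links = [(i, j) for i in range(4) for j in range(2)]

def shapley_mc(n_perms=200):
    """Monte Carlo Shapley estimate: average each link's marginal payoff
    over random orders in which links join the coalition."""
    phi = {l: 0.0 for l in links}
    for _ in range(n_perms):
        perm = rng.permutation(len(links))
        coalition, prev = [], value([])
        for idx in perm:
            coalition.append(links[idx])
            cur = value(coalition)
            phi[links[idx]] += cur - prev
            prev = cur
    return {l: v / n_perms for l, v in phi.items()}

phi = shapley_mc()

# Prune: keep only the k links with the largest estimated Shapley value.
k = 4
keep = sorted(phi, key=phi.get, reverse=True)[:k]
mask = np.zeros_like(W_enc)
for (i, j) in keep:
    mask[i, j] = 1.0
```

Because each permutation's marginal contributions telescope, the estimated values sum exactly to the grand coalition's payoff (the efficiency axiom), which gives a quick sanity check on the estimator.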
DOI:10.1109/ACCESS.2025.3565296