SV-SAE: Layer-Wise Pruning for Autoencoder Based on Link Contributions

Detailed bibliography
Published in: IEEE Access, Volume 13, pp. 75666-75678
Main authors: Rheey, Joohong; Park, Hyunggon
Format: Journal Article
Language: English
Published: Piscataway: IEEE, 2025
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 2169-3536
Description
Summary: Autoencoders are a type of deep neural network and are widely used for unsupervised learning, particularly in tasks that require feature extraction and dimensionality reduction. While most research focuses on compressing input data, less attention has been given to reducing the size and complexity of the autoencoder model itself, which is crucial for deployment on resource-constrained edge devices. This paper introduces a layer-wise pruning algorithm specifically for multilayer perceptron-based autoencoders. The resulting pruned model is referred to as a Shapley Value-based Sparse AutoEncoder (SV-SAE). Using cooperative game theory, the proposed algorithm models the autoencoder as a coalition of interconnected units and links, where the Shapley value quantifies their individual contributions to overall performance. This enables the selective removal of less important components, achieving an optimal balance between sparsity and accuracy. Experimental results confirm that the SV-SAE reaches an accuracy of 99.25%, utilizing only 10% of the original links. Notably, the SV-SAE remains robust under high sparsity levels with minimal performance degradation, whereas other algorithms experience sharp declines as the pruning ratio increases. Designed for edge environments, the SV-SAE offers an interpretable framework for controlling layer-wise sparsity while preserving essential features in latent representations. The results highlight its potential for efficient deployment in resource-constrained scenarios, where model size and inference speed are critical factors.
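The core idea in the abstract, treating links as players in a cooperative game and pruning those with the lowest Shapley-value contribution, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the one-layer tied-weight autoencoder, the Monte Carlo permutation sampling, and all function names here are assumptions made for the example.

```python
import numpy as np


def shapley_link_scores(X, W, n_samples=30, seed=0):
    """Monte Carlo approximation of each link's Shapley value.

    A "link" is one weight W[i, j]. The value of a coalition S of links
    is the negative reconstruction error of X when only the links in S
    are active (all other weights zeroed). Marginal contributions are
    averaged over random orderings of the links.
    """
    rng = np.random.default_rng(seed)
    links = [(i, j) for i in range(W.shape[0]) for j in range(W.shape[1])]
    n = len(links)

    def value(active):
        M = np.zeros_like(W)
        for (i, j) in active:
            M[i, j] = W[i, j]
        # Toy one-layer autoencoder: encode with M, decode with M.T.
        recon = X @ M @ M.T
        return -np.mean((X - recon) ** 2)

    scores = {link: 0.0 for link in links}
    for _ in range(n_samples):
        perm = rng.permutation(n)
        active = []
        v_prev = value(active)
        for idx in perm:
            link = links[idx]
            active.append(link)
            v_new = value(active)
            # Marginal contribution of this link under this ordering.
            scores[link] += (v_new - v_prev) / n_samples
            v_prev = v_new
    return scores


def prune_by_shapley(W, scores, keep_ratio=0.1):
    """Keep only the highest-scoring fraction of links, zeroing the rest."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    keep = set(ranked[: max(1, int(len(ranked) * keep_ratio))])
    P = np.zeros_like(W)
    for (i, j) in keep:
        P[i, j] = W[i, j]
    return P
```

Because the marginal contributions along each permutation telescope, the scores satisfy the Shapley efficiency property exactly: they sum to the value of the full link set minus the value of the empty set, which gives an interpretable accounting of how reconstruction quality is distributed over links.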
DOI: 10.1109/ACCESS.2025.3565296