Embedded stacked group sparse autoencoder ensemble with L1 regularization and manifold reduction

Detailed bibliography
Published in: Applied Soft Computing, Vol. 101, p. 107003
Main authors: Li, Yongming, Lei, Yan, Wang, Pin, Jiang, Mingfeng, Liu, Yuchuan
Medium: Journal Article
Language: English
Published: Elsevier B.V., 1 March 2021
ISSN: 1568-4946, 1872-9681
Description
Summary: Learning useful representations from original features is a key issue in classification tasks. Stacked autoencoders (SAEs) are easy to understand and implement, and they are powerful tools for learning deep features from original features, so they are popular for classification problems. The deep features can further be combined with the original features to construct more representative features for classification. However, existing SAEs do not consider the original features within the network structure and during training, so the deep features have low complementarity with the original features. To solve this problem, this paper proposes an embedded stacked group sparse autoencoder (ESGSAE) for more effective feature learning. Unlike traditional stacked autoencoders, the ESGSAE model accounts for the complementarity between the original features and the hidden outputs by embedding the original features into the hidden layers. To alleviate the impact of the small-sample problem on the generalization of the proposed ESGSAE model, an L1 regularization-based feature selection strategy is designed to further improve feature quality. After that, an ensemble model with a support vector machine (SVM) and weighted local discriminant preservation projection (w_LPPD) is designed to further enhance feature quality. Based on these designs, an embedded stacked group sparse autoencoder ensemble with L1 regularization and manifold reduction is proposed to obtain deep features with high complementarity under the small-sample problem. Finally, several representative public datasets are used to verify the proposed algorithm. The results demonstrate that the ESGSAE ensemble model with L1 regularization and manifold reduction yields superior performance compared to other existing and state-of-the-art feature learning algorithms, including representative deep stacked autoencoder methods. Specifically, compared with the original features, representative feature extraction algorithms, and improved autoencoders, the proposed algorithm improves classification accuracy by up to 13.33%, 7.33%, and 9.55%, respectively. The data and code can be found at: https://share.weiyun.com/Jt7qeORm
Highlights:
• A hybrid feature is embedded into the training process to construct a novel deep model.
• A group sparsity constraint is introduced to obtain sparse representations.
• The ESGSAE ensemble model is constructed to obtain highly complementary features.
• A three-step feature learning mechanism is realized.
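The code below is a minimal illustrative sketch of the ideas in the summary, not the authors' released code (that is at the share.weiyun.com link above): it stacks PyTorch autoencoder layers whose encoders receive the original features concatenated with the previous hidden output, replaces the paper's group sparsity constraint with a simple L1 penalty on the hidden code, and uses scikit-learn's Lasso as a stand-in for the L1 regularization-based feature selection. All layer sizes, hyperparameters, and the toy data are assumptions.

import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import Lasso

class EmbeddedAELayer(nn.Module):
    # One autoencoder layer whose encoder sees [previous hidden output, original features],
    # so the learned deep code is trained to stay complementary to the original features.
    def __init__(self, prev_dim, orig_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(prev_dim + orig_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hidden_dim, prev_dim + orig_dim)

    def forward(self, prev_h, x_orig):
        z = torch.cat([prev_h, x_orig], dim=1)  # embed the original features into this layer
        h = self.encoder(z)
        return h, self.decoder(h), z

def train_layer(layer, prev_h, x_orig, epochs=200, lr=1e-2, sparsity_weight=1e-3):
    opt = torch.optim.Adam(layer.parameters(), lr=lr)
    for _ in range(epochs):
        h, recon, z = layer(prev_h, x_orig)
        # Reconstruction loss plus an L1 sparsity penalty on the hidden code
        # (a simplified stand-in for the paper's group sparsity constraint).
        loss = nn.functional.mse_loss(recon, z) + sparsity_weight * h.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return layer

# Toy small-sample data: 60 samples, 20 original features, dummy binary labels.
torch.manual_seed(0)
X = torch.randn(60, 20)
y = np.random.RandomState(0).randint(0, 2, size=60).astype(float)

# Greedy layer-wise training; each layer re-embeds the original features X.
layer1 = train_layer(EmbeddedAELayer(prev_dim=20, orig_dim=20, hidden_dim=16), X, X)
h1 = layer1(X, X)[0].detach()
layer2 = train_layer(EmbeddedAELayer(prev_dim=16, orig_dim=20, hidden_dim=12), h1, X)
h2 = layer2(h1, X)[0].detach()

# L1-regularization-based selection over the combined (original + deep) features.
features = np.hstack([X.numpy(), h2.numpy()])
selected = np.flatnonzero(np.abs(Lasso(alpha=0.05).fit(features, y).coef_) > 1e-6)
print("kept", len(selected), "of", features.shape[1], "features")

The remaining stages described in the summary (the ensemble of SVM and w_LPPD) would operate on the selected features and are omitted from this sketch.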
DOI: 10.1016/j.asoc.2020.107003