Research of stacked denoising sparse autoencoder



Bibliographic Details
Published in: Neural Computing & Applications, Vol. 30, No. 7, pp. 2083-2100
Main Authors: Meng, Lingheng; Ding, Shifei; Zhang, Nan; Zhang, Jian
Format: Journal Article
Language: English
Published: London: Springer London, 01.10.2018
Springer Nature B.V
ISSN: 0941-0643, 1433-3058
Online Access: Full text
Description
Summary: Learning results depend on the representation of data, so how to represent data efficiently has been a research hotspot in machine learning and artificial intelligence. With the deepening of deep learning research, how to train deep networks to express high-dimensional data efficiently has also become a research frontier. In order to represent data more efficiently and to study how deep networks express data, we propose a novel stacked denoising sparse autoencoder in this paper. First, we construct a denoising sparse autoencoder by introducing both a corrupting operation and a sparsity constraint into the traditional autoencoder. Then, we build stacked denoising sparse autoencoders with multiple hidden layers by stacking denoising sparse autoencoders layer by layer. Experiments are designed to explore the influence of the corrupting operation and the sparsity constraint on different datasets, using networks of various depths and numbers of hidden units. The comparative experiments reveal that the test accuracy of the stacked denoising sparse autoencoder is much higher than that of other stacked models, regardless of the dataset used and the number of layers in the model. We also find that the deeper the network is, the fewer activated neurons each layer has. More importantly, we find that strengthening the sparsity constraint is, to some extent, equivalent to increasing the corruption level.
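The two ingredients named in the abstract, a corrupting operation on the input and a KL-divergence sparsity penalty on the hidden activations, can be illustrated with a minimal sketch. This is not the authors' implementation; all function names, the masking-noise corruption, and the specific loss form (squared reconstruction error plus a weighted KL sparsity term, common in sparse-autoencoder tutorials) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def corrupt(x, level, rng):
    # Corrupting operation: masking noise zeroes a fraction `level` of inputs.
    mask = rng.random(x.shape) >= level
    return x * mask

def kl_sparsity(rho, rho_hat, eps=1e-8):
    # KL divergence between target sparsity rho and mean hidden activation rho_hat.
    rho_hat = np.clip(rho_hat, eps, 1.0 - eps)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat)))

def dsae_loss(x, W1, b1, W2, b2, corruption=0.3, rho=0.05, beta=1.0, rng=rng):
    # Denoising sparse autoencoder objective (illustrative form):
    # encode the corrupted input, reconstruct the *clean* input,
    # and penalize the mean hidden activation toward rho.
    x_tilde = corrupt(x, corruption, rng)
    h = sigmoid(x_tilde @ W1 + b1)          # hidden representation
    x_hat = sigmoid(h @ W2 + b2)            # reconstruction of clean x
    recon = np.mean(np.sum((x_hat - x) ** 2, axis=1))
    return recon + beta * kl_sparsity(rho, h.mean(axis=0))

# Toy usage: one layer of the stack on random data.
n, d, k = 8, 20, 10
x = rng.random((n, d))
W1 = rng.normal(scale=0.1, size=(d, k)); b1 = np.zeros(k)
W2 = rng.normal(scale=0.1, size=(k, d)); b2 = np.zeros(d)
loss = dsae_loss(x, W1, b1, W2, b2)
```

Stacking would repeat this construction layer-wise: after training one layer, its (uncorrupted) hidden activations become the input to the next denoising sparse autoencoder.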
DOI: 10.1007/s00521-016-2790-x