Building feature space of extreme learning machine with sparse denoising stacked-autoencoder
Saved in:
| Title: | Building feature space of extreme learning machine with sparse denoising stacked-autoencoder |
|---|---|
| Authors: | Lele Cao, Wenbing Huang, Fuchun Sun |
| Source: | Neurocomputing. 174:60-71 |
| Publisher information: | Elsevier BV, 2016. |
| Publication year: | 2016 |
| Keywords: | 0202 electrical engineering, electronic engineering, information engineering, 02 engineering and technology |
| Description: | The random-hidden-node extreme learning machine (ELM) is a highly generalized class of single-hidden-layer feed-forward neural networks (SLFNs) comprising three parts: random projection, non-linear transformation, and a ridge regression (RR) model. Networks with deep architectures have demonstrated state-of-the-art performance in a variety of settings, especially in computer vision tasks. Deep learning algorithms such as the stacked autoencoder (SAE) and deep belief network (DBN) are built on learning several levels of representation of the input. Beyond simply learning features by stacking autoencoders (AE), there is a need to increase their robustness to noise and to reinforce the sparsity of weights, so that interesting and prominent features are easier to discover. The sparse AE and the denoising AE were hence developed for this purpose. This paper proposes an approach, SSDAE-RR (stacked sparse denoising autoencoder - ridge regression), that effectively integrates the advantages of the SAE, sparse AE, denoising AE, and the RR implementation of the ELM algorithm. We conducted an experimental study on real-world classification (binary and multiclass) and regression problems of different scales, comparing several relevant approaches: SSDAE-RR, ELM, DBN, neural network (NN), and SAE. The performance analysis shows that SSDAE-RR tends to achieve better generalization ability on relatively large datasets (large sample size and high dimension) that were not pre-processed for feature abstraction. For 16 out of 18 tested datasets, the performance of SSDAE-RR is more stable than that of the other tested approaches. We also note that the sparsity regularization and denoising mechanism appear to be essential for constructing interpretable feature representations. The fact that an SSDAE-RR approach often has a training time comparable to ELM makes it useful in some real applications. |
| Publication type: | Article |
| Language: | English |
| ISSN: | 0925-2312 |
| DOI: | 10.1016/j.neucom.2015.02.096 |
| Access URL: | https://dblp.uni-trier.de/db/journals/ijon/ijon174.html#CaoHS16 https://dl.acm.org/doi/10.1016/j.neucom.2015.02.096 https://dl.acm.org/citation.cfm?id=2852657 http://www.sciencedirect.com/science/article/pii/S0925231215011674 https://www.sciencedirect.com/science/article/pii/S0925231215011674 |
| Rights: | Elsevier TDM |
| Document code: | edsair.doi.dedup.....6ae08b9a8ab75d3d94e7b392109a1b98 |
| Database: | OpenAIRE |
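The description above characterizes the ELM as three parts: a random projection, a non-linear transformation, and a ridge regression model for the output weights. As a rough illustration only (this is not the paper's SSDAE-RR implementation; the function names, hidden-layer size, tanh activation, and regularization value are illustrative assumptions), a minimal random-hidden-node ELM could be sketched as:

```python
import numpy as np

def elm_train(X, T, n_hidden=100, ridge=1e-2, seed=0):
    """Minimal ELM sketch: random hidden layer + ridge regression on output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random projection (never trained)
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # non-linear transformation
    # Ridge regression for output weights: beta = (H^T H + lambda I)^{-1} H^T T
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the fixed random hidden layer, then the learned linear readout."""
    return np.tanh(X @ W + b) @ beta
```

Because only the linear readout is solved for (in closed form), training reduces to one regularized least-squares problem, which is why the abstract can note training times comparable to ELM as a practical advantage.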