Deep incremental random vector functional-link network: A non-iterative constructive sketch via greedy feature learning
| Published in: | Applied Soft Computing, Vol. 143, p. 110410 |
|---|---|
| Main authors: | |
| Medium: | Journal Article |
| Language: | English |
| Publisher details: | Elsevier B.V., 01.08.2023 |
| Subject: | |
| ISSN: | 1568-4946 |
| Summary: | The incremental version of randomized neural networks provides a greedy constructive algorithm for shallow networks, adding new nodes through stochastic methods rather than gradient optimization. However, the potential of the random incremental mechanism is still underutilized in deep structures. To address this research gap, we propose an unsupervised algorithm, termed the incremental randomization-based autoencoder (IR-AE), for greedy feature learning, which applies an integrated, optimized constructive algorithm to train the feature extractor. Using IR-AE as a hierarchically stacked block, we synthesize the deep incremental random vector functional-link (DI-RVFL) network, which builds a deep structure with overall feature-output links through a feedforward approach. Furthermore, a novel data-driven initialization implements the feedforward constructive sketch (CoSketch) as a pre-trained model for the multi-layer perceptron. The simulation results empirically demonstrate that the proposed IR-AE achieves higher reconstruction efficiency than the AE and the randomization-based AE. Moreover, DI-RVFL shows the advantages of deep structures in higher-level feature extraction compared with other stacked random structures. The overall performance of deep RVFLs exceeds that of multi-layer extreme learning machines. As a data-driven initialization, CoSketch significantly improves the convergence of gradient descent.
• Incremental randomization-based autoencoder (IR-AE) achieves high reconstruction efficiency.
• A hierarchically stacked IR-AE shows the advantages of the incremental mechanism.
• A deep constructive sketch improves the convergence speed of gradient optimization. |
|---|---|
| ISSN: | 1568-4946 |
| DOI: | 10.1016/j.asoc.2023.110410 |
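
For context, the greedy constructive mechanism the abstract refers to, adding randomly initialized hidden nodes one at a time and solving the output weights in closed form instead of training by gradient descent, can be sketched in a few lines of NumPy. The function names, the tanh activation, the ridge-regularized refit of all output weights, and the toy data below are illustrative assumptions and not the authors' IR-AE or DI-RVFL implementation.

```python
import numpy as np

def train_incremental_rvfl(X, Y, max_nodes=50, tol=1e-3, seed=0):
    """Greedy constructive RVFL sketch: add one random hidden node at a time
    and refit the output weights in closed form (no gradient descent).
    X: (n_samples, n_features), Y: (n_samples, n_outputs)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = X.copy()                       # direct feature-output links of the RVFL
    hidden = []                        # (w, b) pairs of accepted random nodes
    beta = np.zeros((d, Y.shape[1]))   # output weights, refit after each node

    for _ in range(max_nodes):
        w = rng.standard_normal(d)     # random input weights, never trained
        b = rng.standard_normal()
        g = np.tanh(X @ w + b)         # new hidden-node activations
        H = np.column_stack([H, g])
        hidden.append((w, b))
        # Ridge-regularized closed-form refit of all output weights
        A = H.T @ H + 1e-6 * np.eye(H.shape[1])
        beta = np.linalg.solve(A, H.T @ Y)
        if np.mean((H @ beta - Y) ** 2) < tol:
            break                      # residual small enough, stop adding nodes
    return hidden, beta

def predict_rvfl(X, hidden, beta):
    """Concatenate direct links and hidden activations, then apply beta."""
    feats = [X] + [np.tanh(X @ w + b)[:, None] for w, b in hidden]
    return np.column_stack(feats) @ beta

# Toy usage: fit y = sin(x) on random 1-D inputs
X = np.random.default_rng(1).uniform(-3, 3, size=(200, 1))
Y = np.sin(X)
hidden, beta = train_incremental_rvfl(X, Y, max_nodes=100, tol=1e-4)
print("hidden nodes used:", len(hidden))
```

This sketch refits every output weight after each added node; many incremental variants instead derive only the new node's output weight analytically, which is cheaper per step. In both cases the hidden weights stay random and no backpropagation is used.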