An adaptive growing and pruning algorithm for designing recurrent neural network

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 242, pp. 51-62
Main Authors: Han, Hong-Gui, Zhang, Shuo, Qiao, Jun-Fei
Format: Journal Article
Language: English
Published: Elsevier B.V., 14.06.2017
ISSN: 0925-2312, 1872-8286
Online Access: Get full text
Description
Summary: The training of recurrent neural networks (RNNs) concerns the selection of their structures and connection weights. To efficiently enhance the generalization capability of RNNs, a recurrent self-organizing neural network (RSONN), using an adaptive growing and pruning algorithm (AGPA), is proposed in this paper. The AGPA self-organizes the structure of the RNN based on the information processing ability and competitiveness of hidden neurons during learning: hidden neurons of the RSONN are added or pruned to improve generalization performance. Furthermore, an adaptive second-order algorithm with an adaptive learning rate is employed to adjust the parameters of the RSONN, and a convergence analysis is given to show its computational efficiency. To demonstrate the merits of the RSONN for data modeling, several benchmark datasets and a real-world application involving nonlinear systems modeling are examined, with comparisons against other existing methods. Experimental results show that the proposed RSONN effectively simplifies the network structure and performs better than some existing methods.
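The growing-and-pruning idea from the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the paper's actual AGPA: the activation-based competitiveness score, the pruning threshold, and the weight-initialization scale are assumptions, and the paper's adaptive second-order parameter update is omitted entirely.

```python
# Hypothetical grow/prune loop for a recurrent hidden layer.
# The "competitiveness" score here (mean absolute activation) is a
# stand-in for the paper's information-processing measure, not its formula.
import numpy as np

rng = np.random.default_rng(0)

def step(x, h, W_in, W_rec):
    """One recurrent step: h' = tanh(W_in x + W_rec h)."""
    return np.tanh(W_in @ x + W_rec @ h)

def competitiveness(h_history):
    """Score each hidden neuron by its mean absolute activation
    over a window of recorded hidden states (rows = time steps)."""
    return np.mean(np.abs(h_history), axis=0)

def grow(W_in, W_rec, scale=0.1):
    """Add one hidden neuron with small random incoming weights."""
    n, d = W_in.shape
    W_in = np.vstack([W_in, scale * rng.standard_normal((1, d))])
    W_rec = np.pad(W_rec, ((0, 1), (0, 1)))       # zero-extend recurrence
    W_rec[-1, :] = scale * rng.standard_normal(n + 1)
    return W_in, W_rec

def prune(W_in, W_rec, scores, thresh):
    """Remove hidden neurons whose score falls at or below the threshold."""
    keep = scores > thresh
    if not keep.any():                # never prune the whole layer
        keep[np.argmax(scores)] = True
    return W_in[keep], W_rec[np.ix_(keep, keep)]
```

In a training loop one would record hidden states for a window of steps, grow when the network underfits, and prune neurons whose score stays low; the paper couples these decisions to the learning process rather than to fixed thresholds.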
DOI: 10.1016/j.neucom.2017.02.038