A recurrent sigma pi sigma neural network


Detailed bibliography
Published in: Scientific Reports, Vol. 15, No. 1, p. 588 - 14
Main authors: Deng, Fei; Liang, Shibin; Qian, Kaiguo; Yu, Jing; Li, Xuanxuan
Format: Journal Article
Language: English
Publication details: London: Nature Publishing Group UK (Nature Portfolio), 02.01.2025
ISSN: 2045-2322
Online access: Get full text
Description
Summary: In this paper, a novel recurrent sigma-pi-sigma neural network (RSPSNN) is proposed, which combines the advantages of higher-order and recurrent neural networks. The batch gradient algorithm is used to train the RSPSNN, searching for the optimal weights that minimize the mean squared error (MSE). To substantiate the unique equilibrium state of the RSPSNN, its stable convergence is proven; this property is one of the most significant indices of the network's effectiveness and overcomes the instability problem in its training. Finally, to evaluate its validity more precisely, five empirical experiments are conducted. The RSPSNN is successfully applied to function approximation, prediction, the parity problem, classification, and image simulation, which verifies its effectiveness and practicability.
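The summary describes a sigma-pi-sigma architecture (a sigma layer of weighted sums, a pi layer of products, and a sigma output) trained by a batch gradient algorithm on the MSE. The following is a minimal sketch of that idea in NumPy; the layer sizes, sigmoid activation, single product unit, and learning rate are illustrative assumptions, and the paper's recurrent feedback connections are omitted here.

```python
import numpy as np

# Hedged sketch of a (non-recurrent) sigma-pi-sigma unit trained by batch
# gradient descent on the MSE. Not the paper's exact RSPSNN architecture.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, w2):
    S = sigmoid(X @ W1)       # first sigma layer: weighted sums, squashed
    P = np.prod(S, axis=1)    # pi layer: product of the sigma-unit outputs
    return w2 * P, S, P       # output sigma layer: weighted sum (scalar here)

def batch_mse_step(X, y, W1, w2, lr=0.1):
    # One full-batch gradient step on MSE = mean((y_hat - y)^2).
    n = len(y)
    y_hat, S, P = forward(X, W1, w2)
    err = y_hat - y
    g_w2 = 2.0 * np.mean(err * P)                 # d MSE / d w2
    # Product rule through the pi layer: dP/dS_j = P / S_j, then the
    # sigmoid derivative S * (1 - S) and the input weights.
    dS = (err * w2)[:, None] * (P[:, None] / S) * S * (1.0 - S)
    g_W1 = 2.0 * X.T @ dS / n                     # d MSE / d W1
    return W1 - lr * g_W1, w2 - lr * g_w2

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))
y = np.tanh(X.sum(axis=1))                        # toy target to approximate
W1 = rng.normal(scale=0.5, size=(3, 4))
w2 = 0.5
mse0 = np.mean((forward(X, W1, w2)[0] - y) ** 2)
for _ in range(200):
    W1, w2 = batch_mse_step(X, y, W1, w2)
mse1 = np.mean((forward(X, W1, w2)[0] - y) ** 2)
# On this toy problem the batch gradient steps should reduce the MSE.
```

The product units are what make the network "higher-order": each pi output is a multiplicative interaction of the first-layer sigma terms, so the gradient must pass through the product rule as above.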
DOI: 10.1038/s41598-024-84299-y