Convergence analysis for sigma-pi-sigma neural network based on some relaxed conditions
| Published in: | Information Sciences, Vol. 585, pp. 70–88 |
|---|---|
| Main authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Inc., 01.03.2022 |
| ISSN: | 0020-0255, 1872-6291 |
| Summary: | This work proves the deterministic convergence of the Sigma-Pi-Sigma neural network trained with the batch gradient learning algorithm under certain relaxed conditions. We establish both strong and weak convergence results and prove that the error function decreases monotonically and tends to zero. The boundedness of the weights is also proved simply and efficiently. In contrast to the usual requirements, this work shows that weight boundedness is no longer a necessary condition for ensuring convergence. In addition, we show that the requirements on the learning rate and on the stationary point set of the error function can be relaxed. Finally, the effectiveness of the proposed algorithm is validated by numerical experiments, followed by brief conclusions. |
|---|---|
| DOI: | 10.1016/j.ins.2021.11.044 |
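The summary describes batch gradient training of a Sigma-Pi-Sigma network (a sigma layer of weighted sums, a pi layer of products, and a sigma output unit). As a rough illustration of that architecture, not the paper's exact formulation, the sketch below assumes sigmoid sigma units, pairwise pi products, a squared-error loss, and arbitrary toy sizes; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy sizes: 3 inputs, 6 sigma units, 3 pairwise pi units.
n_in, n_sigma = 3, 6
W1 = rng.normal(scale=0.5, size=(n_sigma, n_in))  # first sigma layer
w2 = rng.normal(scale=0.5, size=n_sigma // 2)     # output sigma layer

def forward(x):
    h = sigmoid(W1 @ x)        # sigma layer: weighted sums through sigmoid
    p = h[0::2] * h[1::2]      # pi layer: products of sigma outputs (pairs)
    y = w2 @ p                 # output sigma: weighted sum of products
    return h, p, y

def batch_gradient_step(X, T, lr=0.1):
    """One batch gradient update: accumulate the gradient over the whole
    batch, then update the weights once (batch, not stochastic, learning)."""
    global W1, w2
    gW1, gw2, loss = np.zeros_like(W1), np.zeros_like(w2), 0.0
    for x, t in zip(X, T):
        h, p, y = forward(x)
        e = y - t
        loss += 0.5 * e * e
        gw2 += e * p
        gp = e * w2                           # dE/dp
        gh = np.empty_like(h)
        gh[0::2] = gp * h[1::2]               # product rule through pi units
        gh[1::2] = gp * h[0::2]
        gW1 += np.outer(gh * h * (1.0 - h), x)  # sigmoid derivative
    W1 -= lr * gW1
    w2 -= lr * gw2
    return loss

# Toy batch: fit random targets and watch the batch error shrink.
X = rng.normal(size=(8, n_in))
T = rng.normal(size=8)
losses = [batch_gradient_step(X, T) for _ in range(200)]
```

With a small enough learning rate the batch error here decreases step by step, which is the behavior the paper analyzes (monotone decrease of the error function and convergence of its gradient to zero) under its relaxed conditions.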