Optimal parameters selection of back propagation algorithm in the feedforward neural network
Saved in:
| Published in: | Engineering analysis with boundary elements, Volume 151, pp. 575-596 |
|---|---|
| Main authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Ltd, 01.06.2023 |
| Subjects: | |
| ISSN: | 0955-7997 |
| Online access: | Get full text |
| Summary: | •An improved BP algorithm that seeks optimal weights and thresholds is proposed. •The iteration process of the improved BP algorithm can be accelerated. •The improved BP algorithm is capable of global optimization. •Solutions corresponding to increasing scales of data are convergent. •The improved BP achieves faster convergence and higher efficiency than conventional BP.
Back propagation (BP) is one of the most widely used algorithms in the feedforward neural network (FNN), but selecting non-optimal weights and thresholds may cause slow convergence and trapping in local optima. In this work, we propose an improved BP algorithm in which a loss function is constructed on a small subset of the data to seek the optimal weights and thresholds. The iteration process of the improved BP algorithm can be accelerated, and the solutions are not trapped in local optima. Solutions corresponding to increasing scales of data are convergent, which demonstrates the convergence of the parameter selection. Numerical simulations of elasticity and of graphic reconstruction and repair confirm that the improved BP algorithm achieves much faster convergence and higher efficiency than the conventional BP algorithm. Moreover, different initial values lead to the same optimal parameters, indicating that the improved algorithm achieves global optimization during the learning process. (An illustrative sketch of this parameter-selection step appears after the record below.) |
|---|---|
| DOI: | 10.1016/j.enganabound.2023.03.033 |
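
The abstract does not give the authors' implementation, but its core idea, choosing the network's initial weights and thresholds (biases) by minimizing a loss built on a small subset of the data before running conventional BP, can be sketched as below. This is a minimal illustration under that assumption, not the paper's method; every name (`select_initial_params`, `bp_step`, the one-hidden-layer architecture, and the random candidate-sampling strategy) is hypothetical.

```python
# Hedged sketch (not the authors' code): pick initial weights and thresholds
# by minimizing the loss on a small data subset, then train with ordinary BP.
import numpy as np

def init_params(rng, n_in, n_hidden, n_out):
    """Random weights and thresholds (biases) for a one-hidden-layer FNN."""
    return {
        "W1": rng.normal(0, 1, (n_in, n_hidden)),
        "b1": rng.normal(0, 1, n_hidden),
        "W2": rng.normal(0, 1, (n_hidden, n_out)),
        "b2": rng.normal(0, 1, n_out),
    }

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])      # hidden layer
    return h, h @ p["W2"] + p["b2"]         # linear output layer

def loss(p, X, y):
    _, out = forward(p, X)
    return 0.5 * np.sum((out - y) ** 2) / len(X)   # per-sample squared error

def select_initial_params(X, y, n_hidden, n_candidates=200, subset=32, seed=0):
    """Evaluate many random parameter sets on a small subset of the data and
    keep the one with the lowest loss -- the parameter-selection step."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(subset, len(X)), replace=False)
    Xs, ys = X[idx], y[idx]
    candidates = (init_params(rng, X.shape[1], n_hidden, y.shape[1])
                  for _ in range(n_candidates))
    return min(candidates, key=lambda p: loss(p, Xs, ys))

def bp_step(p, X, y, lr=0.05):
    """One conventional BP (gradient-descent) update for the sketch network."""
    h, out = forward(p, X)
    err = (out - y) / len(X)                # dL/d(out) for the loss above
    dW2 = h.T @ err
    db2 = err.sum(axis=0)
    dh = (err @ p["W2"].T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    for k, g in zip(("W1", "b1", "W2", "b2"), (dW1, db1, dW2, db2)):
        p[k] -= lr * g
    return p

if __name__ == "__main__":
    # Toy usage: fit y = sin(x). The subset-based selection replaces a purely
    # random initialization before ordinary gradient-descent BP.
    X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
    y = np.sin(X)
    params = select_initial_params(X, y, n_hidden=16)
    for _ in range(2000):
        bp_step(params, X, y)
    print("final loss:", loss(params, X, y))
```

Under this reading, the selection step only filters random starting points on a cheap, small-data loss; the subsequent training loop is unchanged conventional BP, which is consistent with the abstract's claim that the improvement lies in the choice of initial weights and thresholds rather than in the update rule.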