A Free From Local Minima Algorithm for Training Regressive MLP Neural Networks

Published in: Advances in Artificial Intelligence and Machine Learning, Vol. 4, No. 1, pp. 2103-2112
Main author: Montisci, Augusto
Medium: Journal Article
Language: English
Publication details: 2024
ISSN: 2582-9793
Description
Summary: In this article an innovative method for training regressive MLP networks is presented, which is not subject to local minima. The Error-Back-Propagation algorithm, proposed by Rumelhart, Hinton, and Williams, has had the merit of fostering the development of machine learning techniques, which have permeated every branch of research and technology since the mid-1980s. This extraordinary success is largely due to the black-box approach, but this same factor came to be seen as a limitation as soon as more challenging problems were approached. One of the most critical aspects of the training algorithms is that of local minima of the loss function, typically the mean squared error of the output on the training set. In fact, since the most popular training algorithms are driven by the derivatives of the loss function, there is no way to determine whether a reached minimum is local or global. The algorithm presented in this paper avoids the problem of local minima, as the training is based on the properties of the distribution of the training set, or rather on its image internal to the neural network. The performance of the algorithm is demonstrated on a well-known benchmark.
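
For context, the derivative-driven baseline that the abstract contrasts with can be sketched as follows. This is a minimal illustration of gradient-descent training of a one-hidden-layer regressive MLP on the mean squared error; it is not the local-minima-free algorithm proposed in the article, and the toy data, network size, and learning rate are assumptions made purely for illustration.

```python
import numpy as np

# Minimal sketch of conventional gradient-driven MLP training on the
# mean squared error (the baseline the article contrasts with); this is
# NOT the local-minima-free algorithm proposed in the paper.
rng = np.random.default_rng(0)

# Toy regression data (assumed shapes, for illustration only).
X = rng.uniform(-1.0, 1.0, size=(200, 2))      # inputs
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]          # targets, shape (200, 1)

# One hidden layer with tanh activation.
n_hidden = 8
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for epoch in range(2000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)                    # hidden-layer image of the data
    out = H @ W2 + b2
    err = out - y
    mse = np.mean(err ** 2)

    # Backward pass: gradients of the MSE loss (error back-propagation).
    d_out = 2.0 * err / len(X)
    dW2 = H.T @ d_out
    db2 = d_out.sum(axis=0)
    d_hidden = (d_out @ W2.T) * (1.0 - H ** 2)  # tanh derivative
    dW1 = X.T @ d_hidden
    db1 = d_hidden.sum(axis=0)

    # Gradient step: purely derivative-driven, so the iteration can stall
    # in a local minimum with no way to tell it apart from the global one.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The proposed method instead derives the weights from the distribution of the training set as mapped inside the network, which is what removes the dependence on loss-function derivatives described above.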
DOI: 10.54364/AAIML.2024.41120