An ensemble of differential evolution and Adam for training feed-forward neural networks


Bibliographic Details
Published in: Information Sciences, Volume 608, pp. 453–471
Main Authors: Xue, Yu; Tong, Yiling; Neri, Ferrante
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.08.2022
ISSN: 0020-0255, 1872-6291
Description
Summary: Adam is an adaptive gradient descent approach that is commonly used in back-propagation (BP) algorithms for training feed-forward neural networks (FFNNs). However, it has the drawback that it may easily become trapped in local optima. To address this problem, several metaheuristic approaches have been proposed for training FFNNs. While these approaches have stronger global search capabilities, enabling them to escape local optima more readily, their convergence performance is not as good as that of Adam. The proposed algorithm is an ensemble of differential evolution and Adam (EDEAdam), which integrates a modern version of the differential evolution algorithm with Adam, using two different sub-algorithms to evolve two sub-populations in parallel and thereby achieving good results in both global and local search. Compared with traditional algorithms, the integration of the two algorithms endows EDEAdam with powerful capabilities for handling different classification problems. Experimental results show that EDEAdam not only exhibits improved global and local search capabilities, but also achieves a fast convergence speed.
DOI: 10.1016/j.ins.2022.06.036
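
The abstract describes EDEAdam only at a high level: two sub-populations of FFNN weight vectors are evolved in parallel, one by differential evolution (global search) and one by Adam (local search), with information shared between them. The sketch below is a minimal illustration of that general idea, not the paper's actual implementation; the toy 2-8-1 network, the DE/rand/1/bin settings, the Adam hyperparameters, and the periodic best-solution exchange are all assumptions made for illustration.

```python
# Illustrative sketch: DE and Adam evolving two sub-populations in parallel.
# All sizes, rates, and the exchange rule are assumptions, not EDEAdam's settings.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (XOR-like), standing in for a real dataset.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

D_IN, D_H = 2, 8
N_PARAMS = D_IN * D_H + D_H + D_H + 1   # W1, b1, w2, b2 flattened


def unpack(theta):
    i = 0
    W1 = theta[i:i + D_IN * D_H].reshape(D_IN, D_H); i += D_IN * D_H
    b1 = theta[i:i + D_H]; i += D_H
    w2 = theta[i:i + D_H]; i += D_H
    return W1, b1, w2, theta[i]


def loss_and_grad(theta):
    """Forward/backward pass of a 2-8-1 net (tanh hidden, sigmoid output, log loss)."""
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    eps = 1e-9
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    dz2 = (p - y) / len(y)
    dh = np.outer(dz2, w2) * (1 - h ** 2)
    grad = np.concatenate([(X.T @ dh).ravel(), dh.sum(axis=0), h.T @ dz2, [dz2.sum()]])
    return loss, grad


def loss_only(theta):
    return loss_and_grad(theta)[0]


# Sub-population 1: differential evolution (DE/rand/1/bin) for global search.
POP, F, CR = 20, 0.5, 0.9
de_pop = rng.normal(scale=0.5, size=(POP, N_PARAMS))

# Sub-population 2: a single Adam-driven candidate for local search.
adam_theta = rng.normal(scale=0.5, size=N_PARAMS)
m = np.zeros(N_PARAMS); v = np.zeros(N_PARAMS)
lr, b1m, b2m, eps = 0.01, 0.9, 0.999, 1e-8

EXCHANGE_EVERY = 10
for t in range(1, 201):
    # DE step: mutation, binomial crossover, greedy selection on training loss.
    for i in range(POP):
        a, b, c = de_pop[rng.choice([j for j in range(POP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        mask = rng.random(N_PARAMS) < CR
        mask[rng.integers(N_PARAMS)] = True
        trial = np.where(mask, mutant, de_pop[i])
        if loss_only(trial) < loss_only(de_pop[i]):
            de_pop[i] = trial

    # Adam step on its own candidate.
    loss, g = loss_and_grad(adam_theta)
    m = b1m * m + (1 - b1m) * g
    v = b2m * v + (1 - b2m) * g ** 2
    adam_theta -= lr * (m / (1 - b1m ** t)) / (np.sqrt(v / (1 - b2m ** t)) + eps)

    # Periodic information exchange between the two sub-populations (an assumption).
    if t % EXCHANGE_EVERY == 0:
        de_losses = np.array([loss_only(p) for p in de_pop])
        if loss_only(adam_theta) < de_losses.max():
            de_pop[de_losses.argmax()] = adam_theta.copy()   # inject Adam's result into DE
        if de_losses.min() < loss_only(adam_theta):
            adam_theta = de_pop[de_losses.argmin()].copy()   # restart Adam from DE's best
            m[:] = 0.0; v[:] = 0.0

best = min([adam_theta] + list(de_pop), key=loss_only)
print("final training loss:", loss_only(best))
```

The exchange rule used here (copy the better side's best solution into the other sub-population every few generations) is one simple way to combine the global search of DE with the fast local convergence of Adam that the abstract attributes to EDEAdam; the paper itself should be consulted for the actual ensemble mechanism.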