An ensemble of differential evolution and Adam for training feed-forward neural networks

Bibliographic Details
Published in: Information Sciences, Vol. 608, pp. 453-471
Main authors: Xue, Yu; Tong, Yiling; Neri, Ferrante
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.08.2022
ISSN: 0020-0255, 1872-6291
Online access: Full text
Description
Abstract: Adam is an adaptive gradient descent approach that is commonly used in back-propagation (BP) algorithms for training feed-forward neural networks (FFNNs). However, it can easily become trapped in local optima. To address this problem, several metaheuristic approaches have been proposed for training FFNNs. While these approaches have stronger global search capabilities, enabling them to escape local optima more readily, their convergence performance is not as good as that of Adam. The proposed algorithm, an ensemble of differential evolution and Adam (EDEAdam), integrates a modern variant of the differential evolution algorithm with Adam, using the two sub-algorithms to evolve two sub-populations in parallel and thereby achieving good results in both global and local search. Compared with traditional algorithms, this integration gives EDEAdam strong capabilities for handling different classification problems. Experimental results show that EDEAdam not only exhibits improved global and local search capabilities, but also achieves fast convergence.
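
The abstract only outlines the two-sub-population idea, so the following is a minimal illustrative sketch of a DE + Adam hybrid in that spirit. It is not the authors' EDEAdam implementation: the toy network, the DE settings (DE/rand/1/bin with F = 0.5, CR = 0.9), the finite-difference gradient, and the periodic migration of the best DE individual into the Adam sub-population are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset with 2 features (placeholder for a real classification task).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# A tiny FFNN (2 -> 4 -> 1); all weights live in one flat vector so DE can act on it.
SHAPES = [(2, 4), (4,), (4, 1), (1,)]
DIM = sum(int(np.prod(s)) for s in SHAPES)

def unpack(w):
    parts, i = [], 0
    for s in SHAPES:
        n = int(np.prod(s))
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2).ravel()))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def grad(w, eps=1e-5):
    # Central-difference gradient keeps the sketch short; BP would be used in practice.
    g = np.zeros_like(w)
    for i in range(DIM):
        e = np.zeros(DIM); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

# Two sub-populations evolved in parallel: DE explores globally, Adam refines locally.
pop_de = rng.normal(scale=0.5, size=(10, DIM))
pop_adam = rng.normal(scale=0.5, size=(5, DIM))
m, v = np.zeros_like(pop_adam), np.zeros_like(pop_adam)
lr, beta1, beta2, F, CR = 0.05, 0.9, 0.999, 0.5, 0.9

for t in range(1, 51):
    # DE/rand/1/bin step on the exploratory sub-population.
    for i in range(len(pop_de)):
        idx = rng.choice([j for j in range(len(pop_de)) if j != i], 3, replace=False)
        a, b, c = pop_de[idx]
        trial = np.where(rng.random(DIM) < CR, a + F * (b - c), pop_de[i])
        if loss(trial) < loss(pop_de[i]):
            pop_de[i] = trial
    # Adam step on each member of the exploitative sub-population.
    for i in range(len(pop_adam)):
        g = grad(pop_adam[i])
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g * g
        mh, vh = m[i] / (1 - beta1 ** t), v[i] / (1 - beta2 ** t)
        pop_adam[i] -= lr * mh / (np.sqrt(vh) + 1e-8)
    # Assumed interaction: every 10 generations, copy the best DE individual over the
    # worst Adam individual so local refinement restarts from a promising region.
    if t % 10 == 0:
        best_de = pop_de[np.argmin([loss(w) for w in pop_de])]
        worst = int(np.argmax([loss(w) for w in pop_adam]))
        pop_adam[worst] = best_de.copy()
        m[worst] = 0.0
        v[worst] = 0.0

best = min(list(pop_de) + list(pop_adam), key=loss)
print("final training loss:", round(loss(best), 4))

The split mirrors the idea described in the abstract: the DE sub-population supplies global exploration while the Adam sub-population performs gradient-based local refinement, and the migration step is one plausible way to let the two searches interact.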
DOI: 10.1016/j.ins.2022.06.036