An ensemble of differential evolution and Adam for training feed-forward neural networks

Bibliographic Details
Published in: Information Sciences, Vol. 608, pp. 453-471
Main Authors: Xue, Yu; Tong, Yiling; Neri, Ferrante
Format: Journal Article
Language:English
Published: Elsevier Inc., 01.08.2022
ISSN: 0020-0255, 1872-6291
Description
Summary: Adam is an adaptive gradient descent approach that is commonly used in back-propagation (BP) algorithms for training feed-forward neural networks (FFNNs). However, it can easily become trapped in local optima. To address this problem, several metaheuristic approaches have been proposed for training FFNNs. While these approaches have stronger global search capabilities and can therefore escape local optima more readily, their convergence performance is not as good as that of Adam. The proposed algorithm, an ensemble of differential evolution and Adam (EDEAdam), integrates a modern variant of the differential evolution algorithm with Adam: two different sub-algorithms evolve two sub-populations in parallel, thereby achieving good results in both global and local search. Compared with traditional algorithms, this integration endows EDEAdam with powerful capabilities for handling different classification problems. Experimental results show that EDEAdam not only exhibits improved global and local search capabilities but also achieves a fast convergence speed.
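
The abstract only outlines the hybrid scheme, so the following is a minimal sketch of how such an ensemble could be organised: two sub-populations of flattened FFNN weight vectors evolve in parallel, one by DE/rand/1/bin (global search) and one by per-individual Adam steps (local search). The toy network, the hyperparameters, the finite-difference gradient, and the best-for-worst migration between sub-populations are illustrative assumptions, not the paper's exact EDEAdam procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-hidden-layer FFNN on a toy regression task; weights are kept as one
# flat vector so both sub-algorithms operate on the same representation.
X = rng.normal(size=(64, 3))
y = np.sin(X).sum(axis=1, keepdims=True)
N_IN, N_HID, N_OUT = 3, 8, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def unpack(w):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    return W1, b1, W2, w[i:]

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

def grad(w, eps=1e-5):
    # Finite-difference gradient keeps the sketch short; a real implementation
    # would use back-propagation.
    g = np.empty_like(w)
    for j in range(w.size):
        e = np.zeros_like(w); e[j] = eps
        g[j] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

def de_generation(pop, fit, F=0.5, CR=0.9):
    # DE/rand/1/bin with greedy selection: the "global search" sub-population.
    n = len(pop)
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        mask = rng.random(DIM) < CR
        mask[rng.integers(DIM)] = True          # force at least one component
        trial = np.where(mask, mutant, pop[i])
        f_trial = loss(trial)
        if f_trial <= fit[i]:
            pop[i], fit[i] = trial, f_trial
    return pop, fit

def adam_generation(pop, fit, state, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    # Each individual takes one Adam step: the "local search" sub-population.
    for i in range(len(pop)):
        g = grad(pop[i])
        state[i]["m"] = b1 * state[i]["m"] + (1 - b1) * g
        state[i]["v"] = b2 * state[i]["v"] + (1 - b2) * g ** 2
        m_hat = state[i]["m"] / (1 - b1 ** t)
        v_hat = state[i]["v"] / (1 - b2 ** t)
        pop[i] = pop[i] - lr * m_hat / (np.sqrt(v_hat) + eps)
        fit[i] = loss(pop[i])
    return pop, fit, state

POP = 10
pop_de = rng.normal(scale=0.5, size=(POP, DIM))
pop_adam = rng.normal(scale=0.5, size=(POP, DIM))
fit_de = np.array([loss(w) for w in pop_de])
fit_adam = np.array([loss(w) for w in pop_adam])
adam_state = [{"m": np.zeros(DIM), "v": np.zeros(DIM)} for _ in range(POP)]

for t in range(1, 51):
    pop_de, fit_de = de_generation(pop_de, fit_de)
    pop_adam, fit_adam, adam_state = adam_generation(pop_adam, fit_adam, adam_state, t)
    # Assumed migration policy: each sub-population's best replaces the other's worst,
    # so DE's exploration and Adam's refinement can feed each other.
    bd, ba = fit_de.argmin(), fit_adam.argmin()
    wd, wa = fit_de.argmax(), fit_adam.argmax()
    pop_de[wd], fit_de[wd] = pop_adam[ba].copy(), fit_adam[ba]
    pop_adam[wa], fit_adam[wa] = pop_de[bd].copy(), fit_de[bd]

print("best loss:", min(fit_de.min(), fit_adam.min()))
```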
DOI: 10.1016/j.ins.2022.06.036