Evolutionary extreme learning machine based on an improved MOPSO algorithm

Bibliographic Details
Published in: Neural Computing & Applications, Vol. 37, No. 12, pp. 7733–7750
Main Authors: Ling, Qinghua; Tan, Kaimin; Wang, Yuyan; Li, Zexu; Liu, Wenkai
Format: Journal Article
Language: English
Published: London: Springer London, 01.04.2025 (Springer Nature B.V.)
ISSN: 0941-0643, 1433-3058
Description
Summary: Extreme learning machine (ELM), a single-hidden-layer feedforward neural network (SLFN), has attracted extensive attention because of its fast learning speed and high accuracy. However, the random selection of input weights and hidden biases is the main cause of the degraded generalization performance and stability of the ELM network. In this study, an improved ELM (IMOPSO-ELM) is proposed to enhance the generalization performance and convergence stability of the SLFN by using multi-objective particle swarm optimization (MOPSO) to determine the input parameters of the SLFN, namely its input weights and hidden biases. First, unlike traditional improved ELMs based on single-objective evolutionary algorithms, the proposed algorithm uses MOPSO to optimize the input weights and hidden biases of the SLFN with respect to two objectives: accuracy on the validation set and the 2-norm of the SLFN output weights. Second, to improve the diversity and convergence of the solution set obtained by MOPSO, an improved MOPSO (IMOPSO) is proposed. IMOPSO adopts a new global-best particle selection strategy: the population is randomly divided into several subpopulations, each subpopulation is guided by different particle information from the external archive, and the external archive serves as a platform for sharing information between the sub-swarms. Finally, experiments on four regression problems and four classification problems verify the effectiveness of the approach in improving the generalization performance and performance stability of ELM.
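To make the setting concrete, the sketch below shows the standard ELM pipeline the abstract builds on (random input weights and hidden biases, analytic least-squares output weights) and an evaluation of the two objectives the abstract says IMOPSO minimizes. This is an illustrative reconstruction from the abstract only, not the paper's code; all function names are hypothetical, and the MOPSO search itself is omitted.

```python
import numpy as np

def elm_fit(X, y, n_hidden=20, rng=None):
    """Basic ELM: random hidden layer, least-squares output weights.

    In the paper's IMOPSO-ELM, W and b would instead be candidate
    solutions evolved by the multi-objective PSO (hypothetical sketch).
    """
    rng = np.random.default_rng(rng)
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                              # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)        # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def objectives(X_val, y_val, W, b, beta):
    """The two objectives named in the abstract: error on a validation
    set and the 2-norm of the SLFN output weights."""
    err = np.mean((elm_predict(X_val, W, b, beta) - y_val) ** 2)
    return err, np.linalg.norm(beta)
```

A multi-objective optimizer such as MOPSO would score each candidate (W, b) with `objectives` and keep the non-dominated trade-offs between validation error and output-weight norm in an external archive.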
DOI:10.1007/s00521-024-10578-4