Evolutionary extreme learning machine based on an improved MOPSO algorithm

Bibliographic details
Published in: Neural Computing & Applications, Vol. 37, No. 12, pp. 7733–7750
Authors: Ling, Qinghua; Tan, Kaimin; Wang, Yuyan; Li, Zexu; Liu, Wenkai
Format: Journal Article
Language: English
Published: London: Springer London, 01.04.2025 (Springer Nature B.V.)
ISSN: 0941-0643, 1433-3058
Online access: Full text
Description
Abstract: Extreme learning machine (ELM), a single hidden layer feedforward neural network (SLFN), has attracted extensive attention because of its fast learning speed and high accuracy. However, the random selection of input weights and hidden biases is the main factor that degrades the generalization performance and stability of an ELM network. In this study, an improved ELM (IMOPSO-ELM) is proposed to enhance the generalization performance and convergence stability of the SLFN by using multi-objective particle swarm optimization (MOPSO) to determine the input parameters of the SLFN, namely the input weights and hidden biases. First, unlike traditional improved ELMs based on single-objective evolutionary algorithms, the proposed algorithm uses MOPSO to optimize the input weights and hidden biases of the SLFN with respect to two objectives: accuracy on the validation set and the 2-norm of the SLFN output weights. Second, to improve the diversity and convergence of the solution set obtained by MOPSO, an improved MOPSO (IMOPSO) is proposed. IMOPSO uses a new global best particle selection strategy: the population is randomly divided into several subpopulations, each subpopulation is guided in its update by different particle information from the external archive, and the external archive serves as a platform for sharing information between the sub-swarms. Finally, experiments on four regression problems and four classification problems verify the effectiveness of the approach in improving the generalization performance and performance stability of ELM.
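The two objectives that MOPSO evaluates for each candidate particle can be illustrated with a minimal numpy sketch: for fixed input weights and hidden biases, a standard ELM solves the output weights analytically by least squares, and the candidate is then scored by validation error and by the 2-norm of those output weights. The function names and the toy sine-regression data below are illustrative assumptions, not taken from the paper, and the full IMOPSO update loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, W, b):
    """Given fixed input weights W and hidden biases b, compute the
    hidden-layer output H and solve the output weights beta by least
    squares via the Moore-Penrose pseudoinverse, as in standard ELM."""
    H = np.tanh(X @ W + b)        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y  # analytic output weights
    return beta

def objectives(X_val, y_val, W, b, beta):
    """The two objectives optimized jointly by the multi-objective PSO:
    (1) error on a held-out validation set, (2) 2-norm of the output
    weights (a proxy for network complexity / generalization)."""
    pred = np.tanh(X_val @ W + b) @ beta
    rmse = float(np.sqrt(np.mean((pred - y_val) ** 2)))
    return rmse, float(np.linalg.norm(beta))

# Toy regression task (hypothetical): learn y = sin(x).
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0])
X_val = rng.uniform(-3, 3, size=(40, 1))
y_val = np.sin(X_val[:, 0])

n_hidden = 20
W = rng.normal(size=(1, n_hidden))  # one particle's candidate input weights
b = rng.normal(size=n_hidden)       # one particle's candidate hidden biases
beta = elm_fit(X, y, W, b)
rmse, beta_norm = objectives(X_val, y_val, W, b, beta)
```

In the paper's scheme, each particle encodes a (W, b) pair, and the swarm searches for candidates that are Pareto-optimal with respect to the `(rmse, beta_norm)` pair rather than minimizing a single weighted score.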
DOI: 10.1007/s00521-024-10578-4