Evolutionary extreme learning machine based on an improved MOPSO algorithm

Bibliographic Details
Published in: Neural Computing & Applications, Volume 37, Issue 12, pp. 7733-7750
Main Authors: Ling, Qinghua; Tan, Kaimin; Wang, Yuyan; Li, Zexu; Liu, Wenkai
Format: Journal Article
Language: English
Published: London: Springer London, 01.04.2025
Springer Nature B.V.
ISSN: 0941-0643, 1433-3058
Description
Summary: The extreme learning machine (ELM), a single hidden layer feedforward neural network (SLFN), has attracted extensive attention because of its fast learning speed and high accuracy. However, the random selection of input weights and hidden biases is the main cause of degraded generalization performance and stability in the ELM network. In this study, an improved ELM (IMOPSO-ELM) is proposed to enhance the generalization performance and convergence stability of the SLFN by using multi-objective particle swarm optimization (MOPSO) to determine the SLFN's input parameters, namely the input weights and hidden biases. First, unlike traditional improved ELMs based on single-objective evolutionary algorithms, the proposed algorithm uses MOPSO to optimize the input weights and hidden biases of the SLFN with respect to two objectives: accuracy on the validation set and the 2-norm of the SLFN output weights. Second, to improve the diversity and convergence of the solution set obtained by MOPSO, an improved MOPSO (IMOPSO) is proposed. IMOPSO adopts a new global-best particle selection strategy: the population is randomly divided into several subpopulations, each subpopulation is guided by different particle information drawn from the external archive, and the external archive serves as the platform for sharing information between the sub-swarms. Finally, experiments on four regression problems and four classification problems verify the effectiveness of the approach in improving ELM generalization performance and stability.
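The summary names two computational ingredients: the ELM output-weight solve for a fixed set of input weights and hidden biases, and the two objectives (validation accuracy and the 2-norm of the output weights) on which each MOPSO particle is evaluated. The Python sketch below illustrates those steps under stated assumptions; the function names, the sigmoid activation, and the RMSE objective are illustrative choices, not the authors' published implementation.

```python
import numpy as np

def elm_output_weights(X, T, W, b):
    """Solve for ELM output weights beta with fixed input weights W and biases b.

    X : (n_samples, n_features) training inputs
    T : (n_samples, n_outputs)  training targets
    W : (n_features, n_hidden)  input weights (encoded in a MOPSO particle)
    b : (n_hidden,)             hidden biases  (also encoded in the particle)
    """
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T             # Moore-Penrose least-squares solution
    return beta

def bi_objective_fitness(particle, X_tr, T_tr, X_val, T_val, n_features, n_hidden):
    """Score one particle on the two objectives described in the summary:
    (1) error on the validation set, (2) 2-norm of the output weights.
    The flat-vector particle encoding is an assumption for illustration."""
    W = particle[: n_features * n_hidden].reshape(n_features, n_hidden)
    b = particle[n_features * n_hidden :]
    beta = elm_output_weights(X_tr, T_tr, W, b)
    H_val = 1.0 / (1.0 + np.exp(-(X_val @ W + b)))
    val_rmse = np.sqrt(np.mean((H_val @ beta - T_val) ** 2))  # objective 1 (to minimize)
    norm_beta = np.linalg.norm(beta)                          # objective 2 (to minimize)
    return val_rmse, norm_beta
```

In this reading, MOPSO searches over the particle vector (input weights plus hidden biases), while the output weights are always obtained analytically; a smaller output-weight norm is used as a proxy for better generalization, consistent with the summary's motivation.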
DOI: 10.1007/s00521-024-10578-4