Solving Large-Scale Multiobjective Optimization Problems With Sparse Optimal Solutions via Unsupervised Neural Networks

Bibliographic Details
Published in: IEEE Transactions on Cybernetics, Vol. 51, No. 6, pp. 3115-3128
Authors: Tian, Ye; Lu, Chang; Zhang, Xingyi; Tan, Kay Chen; Jin, Yaochu
Format: Journal Article
Language: English
Published: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), United States, 01.06.2021
ISSN: 2168-2267 (print), 2168-2275 (electronic)
Online access: Full text
Description
Abstract: Due to the curse of dimensionality of the search space, it is extremely difficult for evolutionary algorithms to approximate the optimal solutions of large-scale multiobjective optimization problems (LMOPs) using a limited budget of evaluations. If the Pareto-optimal subspace is approximated during the evolutionary process, the search space can be reduced and the difficulty encountered by evolutionary algorithms can be greatly alleviated. Following this idea, this article proposes an evolutionary algorithm that solves sparse LMOPs by learning the Pareto-optimal subspace. The proposed algorithm uses two unsupervised neural networks, a restricted Boltzmann machine and a denoising autoencoder, to learn a sparse distribution and a compact representation of the decision variables, where the combination of the learnt sparse distribution and compact representation is regarded as an approximation of the Pareto-optimal subspace. The genetic operators are conducted in the learnt subspace, and the resultant offspring solutions can then be mapped back to the original search space by the two neural networks. According to the experimental results on eight benchmark problems and eight real-world problems, the proposed algorithm can effectively solve sparse LMOPs with 10,000 decision variables using only 100,000 evaluations.
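
To make the workflow described in the abstract concrete, below is a minimal NumPy sketch of the general scheme: a restricted Boltzmann machine models the binary masks of nonzero decision variables (the sparse distribution), a denoising autoencoder learns a low-dimensional code of the variable values (the compact representation), crossover and mutation act on these codes, and the two networks map the offspring codes back to the original search space. All names here (RBM, DAE, offspring), the network sizes, learning rates, and operator choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary-binary restricted Boltzmann machine trained with CD-1.
    It models the distribution of binary masks marking which decision
    variables are nonzero in good sparse solutions (an assumption of
    this sketch, mirroring the abstract's 'sparse distribution')."""

    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.bv = np.zeros(n_vis)
        self.bh = np.zeros(n_hid)
        self.lr = lr

    def encode(self, v):   # P(h = 1 | v)
        return sigmoid(v @ self.W + self.bh)

    def decode(self, h):   # P(v = 1 | h)
        return sigmoid(h @ self.W.T + self.bv)

    def train(self, V, epochs=100):
        for _ in range(epochs):
            ph = self.encode(V)
            h = (rng.random(ph.shape) < ph).astype(float)
            pv = self.decode(h)          # one Gibbs step (CD-1)
            ph2 = self.encode(pv)
            n = len(V)
            self.W += self.lr * (V.T @ ph - pv.T @ ph2) / n
            self.bv += self.lr * (V - pv).mean(axis=0)
            self.bh += self.lr * (ph - ph2).mean(axis=0)

class DAE:
    """Single-hidden-layer denoising autoencoder; its hidden layer is a
    compact real-valued code of the decision variables."""

    def __init__(self, n_in, n_hid, lr=0.05):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hid))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0.0, 0.1, (n_hid, n_in))
        self.b2 = np.zeros(n_in)
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W1 + self.b1)

    def decode(self, h):
        return h @ self.W2 + self.b2     # linear output for real values

    def train(self, X, epochs=200, noise=0.1):
        for _ in range(epochs):
            Xn = X + rng.normal(0.0, noise, X.shape)  # corrupt the input
            H = self.encode(Xn)
            e = self.decode(H) - X                    # reconstruction error
            n = len(X)
            dH = (e @ self.W2.T) * H * (1.0 - H)      # backprop through sigmoid
            self.W2 -= self.lr * H.T @ e / n
            self.b2 -= self.lr * e.mean(axis=0)
            self.W1 -= self.lr * Xn.T @ dH / n
            self.b1 -= self.lr * dH.mean(axis=0)

def offspring(x1, x2, rbm, dae, sigma=0.1):
    """Uniform crossover and Gaussian mutation applied to the codes in the
    learnt subspace; both networks then map the child back to the original
    search space."""
    m1, m2 = (x1 != 0).astype(float), (x2 != 0).astype(float)
    h1, h2 = rbm.encode(m1), rbm.encode(m2)   # mask codes
    g1, g2 = dae.encode(x1), dae.encode(x2)   # value codes
    h = np.where(rng.random(h1.shape) < 0.5, h1, h2)
    g = np.where(rng.random(g1.shape) < 0.5, g1, g2)
    h = np.clip(h + rng.normal(0.0, sigma, h.shape), 0.0, 1.0)
    g = np.clip(g + rng.normal(0.0, sigma, g.shape), 0.0, 1.0)
    mask = (rng.random(x1.shape) < rbm.decode(h)).astype(float)
    return dae.decode(g) * mask               # sparse child solution

# Toy demo: a population of sparse 1000-dimensional solutions.
D, N = 1000, 50
masks = (rng.random((N, D)) < 0.02).astype(float)    # about 2% nonzero
pop = masks * rng.uniform(-1.0, 1.0, (N, D))
rbm, dae = RBM(D, 10), DAE(D, 10)
rbm.train(masks)   # learn the sparse distribution
dae.train(pop)     # learn the compact representation
child = offspring(pop[0], pop[1], rbm, dae)
print("nonzero variables in child:", int((child != 0).sum()))
```

Note the division of labor in this sketch: only the mask decoded from the RBM decides which variables stay nonzero, so offspring inherit the sparsity pattern of the learnt distribution regardless of the real values the autoencoder produces, and the genetic operators work on 10-dimensional codes rather than 1000 decision variables.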
DOI: 10.1109/TCYB.2020.2979930