An incremental-learning model-based multiobjective estimation of distribution algorithm

Published in: Information Sciences, Vol. 569, pp. 430-449
Main authors: Liu, Tingrui; Li, Xin; Tan, Liguo; Song, Shenmin
Format: Journal Article
Language: English
Publisher: Elsevier Inc, 01.08.2021
ISSN: 0020-0255, 1872-6291
Description
Summary: Knowledge obtained from the properties of a Pareto-optimal set can guide an evolutionary search. Learning models for multiobjective estimation of distributions have led to improved search efficiency, but they incur a high computational cost owing to their use of a repetitive learning or iterative strategy. To overcome this drawback, we propose an algorithm for incremental-learning model-based multiobjective estimation of distributions. A learning mechanism based on an incremental Gaussian mixture model is embedded within the search procedure. In the proposed algorithm, all new solutions generated during the evolution are passed to a data stream, which is fed incrementally into the learning model to adaptively discover the structure of the Pareto-optimal set. The parameters of the model are updated continually as each newly generated datum is collected. Each datum is learned only once by the model, regardless of whether it is subsequently preserved or deleted. Moreover, a sampling strategy based on the learned model is designed to balance exploration and exploitation in the evolutionary search. The proposed algorithm is compared with six state-of-the-art algorithms on several benchmarks. The experimental results show that it significantly outperforms these representative algorithms.
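The core mechanism the abstract describes, each newly generated solution is folded into a Gaussian mixture model exactly once and then discarded, can be sketched as a one-pass online EM update. The sketch below is illustrative only: the component count, diagonal covariances, step-size rule, and sampling routine are assumptions for a minimal demonstration, not the authors' actual design.

```python
import numpy as np

class IncrementalGMM:
    """Minimal one-pass (online) Gaussian mixture sketch.

    Each datum is processed once and never revisited, mirroring the
    stream-based learning idea in the abstract. All hyperparameters
    here (n_components, diagonal covariances, count-based step size)
    are illustrative assumptions.
    """

    def __init__(self, n_components, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = np.full(n_components, 1.0 / n_components)
        self.means = rng.normal(size=(n_components, dim))
        self.vars = np.ones((n_components, dim))  # diagonal covariances
        self.counts = np.ones(n_components)       # soft counts (prior of 1)

    def _responsibilities(self, x):
        # Posterior probability of each component given x (log-domain
        # for numerical stability, diagonal-covariance Gaussians).
        diff = x - self.means
        log_p = -0.5 * np.sum(diff**2 / self.vars
                              + np.log(2 * np.pi * self.vars), axis=1)
        log_p += np.log(self.weights)
        log_p -= log_p.max()
        r = np.exp(log_p)
        return r / r.sum()

    def update(self, x):
        # Online EM step: fold one datum into the sufficient
        # statistics, then discard it.
        r = self._responsibilities(x)
        self.counts += r
        eta = (r / self.counts)[:, None]  # per-component step size
        diff = x - self.means
        self.means += eta * diff
        self.vars += eta * (diff**2 - self.vars)
        self.weights = self.counts / self.counts.sum()

    def sample(self, rng):
        # Draw a new candidate solution from the learned model
        # (the sampling role the abstract assigns to the model).
        k = rng.choice(len(self.weights), p=self.weights)
        return rng.normal(self.means[k], np.sqrt(self.vars[k]))
```

In an EDA loop, `update` would be called on every new solution as it enters the stream, and `sample` would generate offspring from the current model; how the actual algorithm balances model-based sampling against other exploration operators is not specified in the abstract.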
DOI:10.1016/j.ins.2021.04.011