An incremental-learning model-based multiobjective estimation of distribution algorithm


Full Description

Saved in:
Bibliographic Details
Published in: Information Sciences Vol. 569, pp. 430-449
Main Authors: Liu, Tingrui; Li, Xin; Tan, Liguo; Song, Shenmin
Format: Journal Article
Language: English
Published: Elsevier Inc, 01.08.2021
Subjects:
ISSN:0020-0255, 1872-6291
Online Access: Full Text
Description
Abstract: Knowledge obtained from the properties of a Pareto-optimal set can guide an evolutionary search. Learning models for multiobjective estimation of distributions have improved search efficiency, but they incur a high computational cost owing to their repetitive learning or iterative strategies. To overcome this drawback, we propose an incremental-learning model-based multiobjective estimation of distribution algorithm. A learning mechanism based on an incremental Gaussian mixture model is embedded within the search procedure. In the proposed algorithm, all new solutions generated during the evolution are passed to a data stream, which is fed incrementally into the learning model to adaptively discover the structure of the Pareto-optimal set. The model parameters are updated continually as each newly generated datum is collected, and each datum is learned by the model only once, regardless of whether it is later preserved or deleted. Moreover, a sampling strategy based on the learned model is designed to balance exploration and exploitation in the evolutionary search. The proposed algorithm is compared with six state-of-the-art algorithms on several benchmarks, and the experimental results show a significant improvement over these representative algorithms.
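
The abstract describes two mechanisms: an incremental Gaussian mixture model whose parameters are updated once per newly generated solution, and a model-based sampling step that produces new candidates. The sketch below is a minimal, hypothetical illustration of those two ideas only, not the authors' implementation; the class name IncrementalGMM, the novelty threshold novelty_tau, and the diagonal-covariance online update rules are assumptions made for the example.

```python
# Illustrative sketch (assumed design, not the paper's code): an incremental
# Gaussian mixture model that (1) learns each streamed solution exactly once and
# (2) samples new candidate solutions from the learned mixture.
import numpy as np


class IncrementalGMM:
    def __init__(self, dim, novelty_tau=4.0, init_var=0.05):
        self.dim = dim
        self.novelty_tau = novelty_tau   # Mahalanobis threshold for spawning a new component (assumed)
        self.init_var = init_var         # variance of a freshly spawned component (assumed)
        self.weights = []                # accumulated responsibilities per component
        self.means = []
        self.vars = []                   # diagonal covariances

    def _log_pdf(self, x):
        # Diagonal-Gaussian log densities of x under every component.
        logs = []
        for mu, var in zip(self.means, self.vars):
            diff = x - mu
            logs.append(-0.5 * np.sum(diff * diff / var + np.log(2 * np.pi * var)))
        return np.array(logs)

    def learn_one(self, x):
        """Update the model with a single datum; each datum is seen exactly once."""
        x = np.asarray(x, dtype=float)
        if not self.means:
            self._spawn(x)
            return
        # Distance to the closest component decides whether x is novel.
        d2 = [np.sum((x - mu) ** 2 / var) for mu, var in zip(self.means, self.vars)]
        if min(d2) > self.novelty_tau ** 2:
            self._spawn(x)
            return
        # Online EM-style update: soft-assign x, then shift the component statistics.
        log_p = self._log_pdf(x) + np.log(self.weights)
        resp = np.exp(log_p - log_p.max())
        resp /= resp.sum()
        for k, r in enumerate(resp):
            self.weights[k] += r
            eta = r / self.weights[k]                      # per-component learning rate
            diff = x - self.means[k]
            self.means[k] = self.means[k] + eta * diff
            self.vars[k] = (1 - eta) * self.vars[k] + eta * diff * diff
            self.vars[k] = np.maximum(self.vars[k], 1e-8)  # keep covariances positive

    def _spawn(self, x):
        self.weights.append(1.0)
        self.means.append(np.array(x, dtype=float))
        self.vars.append(np.full(self.dim, self.init_var))

    def sample(self, n, rng=None):
        """Draw n candidate solutions from the current mixture."""
        rng = rng or np.random.default_rng()
        w = np.array(self.weights) / np.sum(self.weights)
        ks = rng.choice(len(w), size=n, p=w)
        return np.array([rng.normal(self.means[k], np.sqrt(self.vars[k])) for k in ks])


# Usage sketch: stream solutions generated during evolution into the model,
# then sample offspring from it.
model = IncrementalGMM(dim=2)
for sol in np.random.default_rng(0).random((200, 2)):
    model.learn_one(sol)
offspring = model.sample(10)
```

In the algorithm outlined by the abstract, such model-based sampling would be combined with an exploration mechanism to balance exploration and exploitation; only the one-pass learning and sampling steps are sketched here.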
DOI:10.1016/j.ins.2021.04.011