Learning improvement representations to accelerate evolutionary large-scale multiobjective optimization


Bibliographic Details
Published in: Information Sciences, vol. 705, p. 121973
Main Authors: Liu, Songbai; Wang, Zeyi; Ma, Lijia; Chen, Jianyong; Zhou, Xun
Format: Journal Article
Language: English
Published: Elsevier Inc., 1 July 2025
ISSN: 0020-0255
Online Access: Full text
Description
Abstract: Large-scale multi-objective optimization problems present significant challenges to traditional evolutionary algorithms due to the exponentially increased search space and computational burden. To address these issues, we propose a novel framework that integrates improvement representation learning into the evolutionary optimization process. It employs neural models to capture performance-improvement patterns by learning from the transitions between suboptimal and superior solutions, which are then used to guide the generation of higher-quality offspring. These learnable evolutionary generators explore both the original search space and the learned representation space, enabling more effective navigation and accelerated convergence toward global optima. The proposed framework incorporates simulated binary crossover and differential evolution operators, ensuring adaptability to diverse problem landscapes. Comparative experiments on widely studied benchmark problems demonstrate that our approach incurs comparable computational cost while delivering superior convergence efficiency and solution quality compared to state-of-the-art algorithms. This performance gain is particularly notable for benchmarks with up to 10,000 decision variables, where traditional methods often struggle. These results highlight the potential of combining evolutionary algorithms with representation learning to address the critical challenges of large-scale optimization.

Highlights:
• Learnable generators combined with evolutionary algorithms to accelerate convergence.
• Learning diverse improvement-based representations to enhance efficiency.
• A comprehensive learnable evolutionary framework to enhance scalability.
DOI: 10.1016/j.ins.2025.121973
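
To make the idea described in the abstract concrete, the sketch below shows, under loose assumptions, how an "improvement" mapping might be learned from suboptimal-to-superior solution pairs and then used to guide offspring generation in the original search space. This is not the authors' implementation: the scalar surrogate objective, network size, training loop, pairing scheme, and the DE-style blending are all illustrative choices made here for exposition.

```python
# Minimal sketch (assumed names and hyperparameters, not the paper's code):
# train a small neural model on (suboptimal -> superior) solution pairs so it
# learns an improvement mapping, then blend its output with a DE-style
# perturbation to propose offspring.
import numpy as np
import torch
import torch.nn as nn

DIM = 100          # number of decision variables (assumed)
POP = 64           # population size (assumed)
rng = np.random.default_rng(0)

def objective(x):
    # Placeholder scalar surrogate of solution quality (sum of squares);
    # the paper targets multi-objective benchmark suites instead.
    return np.sum(x ** 2, axis=-1)

# Collect (suboptimal, superior) transition pairs from the current population.
pop = rng.uniform(-1.0, 1.0, size=(POP, DIM))
fitness = objective(pop)
order = np.argsort(fitness)
superior = pop[order[: POP // 2]]      # better half
suboptimal = pop[order[POP // 2:]]     # worse half, paired index-wise for simplicity

# A small MLP that maps a suboptimal solution toward a superior one.
model = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, DIM))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.tensor(suboptimal, dtype=torch.float32)
y = torch.tensor(superior, dtype=torch.float32)

for _ in range(200):                   # short training loop for illustration
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# Use the learned improvement mapping to guide offspring generation,
# blended with a DE/rand/1-style perturbation in the original space.
with torch.no_grad():
    guided = model(torch.tensor(pop, dtype=torch.float32)).numpy()

F = 0.5
r1, r2 = rng.permutation(POP), rng.permutation(POP)
offspring = pop + F * (guided - pop) + F * (pop[r1] - pop[r2])
offspring = np.clip(offspring, -1.0, 1.0)

print("mean fitness before:", fitness.mean(), "after:", objective(offspring).mean())
```

In a full multi-objective setting, the pairing of suboptimal and superior solutions would come from dominance or decomposition-based ranking rather than a single scalar objective, and the guided offspring would compete with those produced by simulated binary crossover, as the abstract indicates.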