Learning improvement representations to accelerate evolutionary large-scale multiobjective optimization

Bibliographic Details
Published in: Information Sciences, Vol. 705, p. 121973
Main Authors: Liu, Songbai; Wang, Zeyi; Ma, Lijia; Chen, Jianyong; Zhou, Xun
Medium: Journal Article
Language: English
Published: Elsevier Inc, 01.07.2025
ISSN: 0020-0255
Description
Summary: Large-scale multi-objective optimization problems present significant challenges to traditional evolutionary algorithms due to the exponentially increased search space and computational burden. To address these issues, we propose a novel framework that integrates improvement representation learning into the evolutionary optimization process. It employs neural models to capture performance improvement patterns by learning from the transitions between suboptimal and superior solutions, which are then used to guide the generation of higher-quality offspring. These learnable evolutionary generators explore both the original search space and the learned representation space, enabling more effective navigation and accelerated convergence toward global optima. The proposed framework incorporates simulated binary crossover and differential evolution operators, ensuring adaptability to diverse problem landscapes. Comparative experiments on widely studied benchmark problems demonstrate that our approach incurs comparable computational cost while delivering superior convergence efficiency and solution quality compared to state-of-the-art algorithms. This performance boost is particularly notable on benchmarks with up to 10,000 decision variables, where traditional methods often struggle. These results highlight the potential of combining evolutionary algorithms with representation learning to address the critical challenges of scaling up optimization.

Highlights:
• Learnable generators plus evolutionary algorithms to accelerate convergence.
• Learning diverse improvement-based representations to enhance efficiency.
• A comprehensive learnable evolutionary framework to enhance scalability.
DOI: 10.1016/j.ins.2025.121973
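
The summary describes learnable generators that learn an improvement representation from pairs of suboptimal and superior solutions and combine it with simulated binary crossover (SBX) and differential evolution (DE) operators. The sketch below is purely illustrative and is not the authors' implementation: it substitutes a toy least-squares map for the paper's neural improvement model, and all function names, parameter values, and the synthetic data are assumptions made for the example.

```python
# Illustrative sketch only (not the authors' code): a toy least-squares map
# stands in for the paper's neural improvement-representation model, combined
# with standard SBX and DE/rand/1 variation operators. All names, parameter
# values, and the synthetic data below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(seed=1)

def learn_improvement_map(suboptimal, superior):
    """Fit a linear map W so that x @ W approximates the observed improvement.

    suboptimal, superior: (n_pairs, n_vars) arrays of paired solutions, where
    each row of `superior` is assumed better than the matching row of
    `suboptimal`. The paper learns this mapping with neural models; a linear
    least-squares fit is used here only to keep the sketch self-contained.
    """
    improvement = superior - suboptimal
    W, *_ = np.linalg.lstsq(suboptimal, improvement, rcond=None)
    return W

def guided_offspring(parent, W, step=1.0, sigma=0.01):
    """Move a parent along its predicted improvement direction, plus noise."""
    return parent + step * (parent @ W) + sigma * rng.standard_normal(parent.shape)

def sbx_child(p1, p2, eta=15.0):
    """Simulated binary crossover (one child) with distribution index eta."""
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    return 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)

def de_rand_1(population, F=0.5):
    """DE/rand/1 mutation: random base vector plus a scaled difference vector."""
    r1, r2, r3 = rng.choice(len(population), size=3, replace=False)
    return population[r1] + F * (population[r2] - population[r3])

# Synthetic demonstration data: 50 decision variables and 64 solution pairs in
# which the "superior" solutions simply shrink every variable toward zero.
n_vars, n_pairs = 50, 64
suboptimal = rng.random((n_pairs, n_vars))
superior = 0.9 * suboptimal
W = learn_improvement_map(suboptimal, superior)

parent_a, parent_b = rng.random(n_vars), rng.random(n_vars)
children = {
    "guided": guided_offspring(parent_a, W),   # learned-representation step
    "sbx": sbx_child(parent_a, parent_b),      # original-space crossover
    "de": de_rand_1(suboptimal),               # original-space DE mutation
}
for name, child in children.items():
    print(name, child[:3])
```

In this toy setup the guided generator pushes solutions in the learned shrinking direction, while SBX and DE keep exploring the original search space, mirroring the abstract's point that offspring are generated in both spaces; how the two sources of offspring are balanced and selected is specific to the paper and not reproduced here.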