An offline-online learning framework combining meta-learning and reinforcement learning for evolutionary multi-objective optimization

Detailed bibliography
Published in: Swarm and Evolutionary Computation, Volume 97, p. 102037
Main authors: Li, Shuxiang; Pang, Yongsheng; Huang, Zhaorong; Chu, Xianghua
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.08.2025
ISSN: 2210-6502
Description
Summary:
• An offline-online learning framework combining meta-learning and reinforcement learning (O2-MRL) is proposed for the first time for evolutionary multi-objective optimization. O2-MRL can adaptively select and schedule the most appropriate MOEAs for diverse MOPs, thereby fully leveraging the complementary strengths of different MOEAs and providing a novel perspective for solving MOPs.
• O2-MRL overcomes the respective limitations of existing offline and online algorithm selection methods by integrating their advantages into a unified learning framework.
• Experiments are conducted on forty-seven benchmark MOPs and two real-world MOPs. The results demonstrate that O2-MRL consistently achieves superior and robust performance across diverse MOPs of varying dimensions, without increasing computational complexity.
• The proposed O2-MRL framework is flexible, applicable to various MOPs, and can be extended to MOPs across different application domains.

Many multi-objective evolutionary algorithms (MOEAs) have been proposed for addressing multi-objective optimization problems (MOPs). However, the performance of MOEAs varies significantly across MOPs, and no single MOEA performs well on all MOP instances. In addition, existing methods for adaptive MOEA selection still face limitations that restrict further optimization of MOPs. To fill these gaps and improve the efficiency of solving MOPs, this study proposes an offline-online learning framework combining meta-learning and reinforcement learning (O2-MRL). Instead of proposing a new MOEA or optimizing a single strategy, O2-MRL solves MOPs by taking full advantage of existing MOEAs and addresses the limitations of existing MOEA selection methods. O2-MRL can adaptively select appropriate MOEAs for various types of MOPs with different dimensions (offline) and automatically schedule the selected MOEAs during the optimization process (online), offering a new approach to optimizing MOPs. To evaluate the performance of the proposed O2-MRL, forty-seven benchmark MOPs are used as instances, and nine representative MOEAs are selected for comparison. Comprehensive experiments demonstrate the significant efficiency of O2-MRL: it achieves optimal solutions in 60.28% of the MOPs across different dimensions and improves the optimization results in 48.23% of them, with an average improvement of 8.72%. In addition to maintaining high optimization performance, O2-MRL also demonstrates superior convergence speed and stability across various types of MOPs. Two real-world MOPs are employed to evaluate the practicality of O2-MRL, and the experimental results indicate that it achieves optimal solutions in both cases.
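
The abstract describes a two-stage pipeline: an offline stage that selects suitable MOEAs from problem characteristics, and an online stage that schedules the selected MOEAs during the run. The following Python sketch illustrates only that general offline-online pattern; the class names (OfflinePortfolioSelector, OnlineScheduler, run_moea_segment), the nearest-neighbour portfolio selection, and the epsilon-greedy scheduler are illustrative assumptions, not the authors' meta-learning or reinforcement-learning models.

    # Illustrative sketch only: the selection and scheduling rules below are
    # simple stand-ins for the meta-learning and RL components described in
    # the abstract, not the O2-MRL implementation itself.
    import random
    from dataclasses import dataclass, field


    @dataclass
    class OfflinePortfolioSelector:
        """Offline stage: map a problem's meta-features to a small MOEA portfolio."""
        # (meta-feature vector, best-performing MOEA name) pairs gathered offline
        training_data: list[tuple[tuple[float, ...], str]] = field(default_factory=list)

        def select(self, features: tuple[float, ...], k: int = 3) -> list[str]:
            # Nearest-neighbour ranking as a stand-in for a learned meta-model.
            by_distance = sorted(
                self.training_data,
                key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], features)),
            )
            portfolio: list[str] = []
            for _, moea in by_distance:
                if moea not in portfolio:
                    portfolio.append(moea)
                if len(portfolio) == k:
                    break
            return portfolio


    class OnlineScheduler:
        """Online stage: epsilon-greedy scheduling of the selected MOEAs."""

        def __init__(self, portfolio: list[str], epsilon: float = 0.2):
            self.portfolio = portfolio
            self.epsilon = epsilon
            self.value = {m: 0.0 for m in portfolio}  # running reward estimates
            self.count = {m: 0 for m in portfolio}

        def pick(self) -> str:
            if random.random() < self.epsilon:
                return random.choice(self.portfolio)
            return max(self.portfolio, key=lambda m: self.value[m])

        def update(self, moea: str, reward: float) -> None:
            self.count[moea] += 1
            self.value[moea] += (reward - self.value[moea]) / self.count[moea]


    def run_moea_segment(moea: str, population, generations: int):
        """Placeholder for running one MOEA over a slice of the evolution budget.

        A real system would call an actual MOEA implementation and measure an
        indicator improvement (e.g. hypervolume gain) as the reward signal.
        """
        improvement = random.random()  # stand-in for the measured improvement
        return population, improvement


    if __name__ == "__main__":
        selector = OfflinePortfolioSelector(training_data=[
            ((2.0, 30.0), "NSGA-II"),
            ((3.0, 100.0), "MOEA/D"),
            ((2.0, 500.0), "SPEA2"),
        ])
        portfolio = selector.select(features=(2.0, 50.0))  # offline selection
        scheduler = OnlineScheduler(portfolio)
        population = None
        for segment in range(10):  # online scheduling over the run
            moea = scheduler.pick()
            population, reward = run_moea_segment(moea, population, generations=20)
            scheduler.update(moea, reward)

In this toy loop, the offline selector narrows the algorithm pool before the run starts, and the online scheduler reallocates the remaining budget toward whichever selected MOEA has recently yielded the largest improvement; the paper's framework plays an analogous two-stage role with learned models in place of these heuristics.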
DOI: 10.1016/j.swevo.2025.102037