Integrating Active Learning for Improved Preference Modeling in Tree-Based Interactive Evolutionary Multi-Objective Algorithms

Bibliographic Details
Published in: 2025 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8
Main Authors: Shavarani, Seyed Mahdi; Golabi, Mahmoud; Idoumghar, Lhassane
Format: Conference Paper
Language: English
Published: IEEE, 08 June 2025
Online Access: Full Text
Description
Abstract: Multi-objective optimization problems are characterized by conflicting objectives, making it impossible to identify a single optimal solution. Instead, solution methods aim to produce a diverse set of non-dominated solutions, also known as Pareto-optimal solutions, each offering different tradeoffs among the objectives. Evolutionary multi-objective algorithms (EMOAs) are commonly employed to generate these varied sets of solutions. However, the abundance of solutions presents a significant challenge for decision-makers (DMs) in identifying the most preferred solution. The problem becomes even more pronounced as the number of objectives increases, requiring exponentially more computational resources and more solutions to adequately represent the Pareto-optimal set. Interactive EMOAs (iEMOAs) mitigate this challenge by integrating DM preferences into the optimization process, limiting the search to regions of the Pareto front that are of interest to the DM. Despite their advantages, existing methods often struggle to effectively learn and utilize DM preferences. This study investigates tree-based learning methods for preference modeling in iEMOAs by conducting a systematic comparison of decision trees (DTs) and random forests (RFs). Additionally, it examines the impact of active learning as a solution selection strategy for improving preference elicitation. Experimental results demonstrate that RF achieves significantly higher accuracy than DT in learning DM preferences. Furthermore, integrating active learning enhances preference learning within RF, further improving its accuracy. These findings highlight the potential of active learning for enhancing preference-driven optimization, offering more effective strategies for interactive multi-objective decision-making.
DOI: 10.1109/CEC65147.2025.11042957
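
Illustrative sketch: the abstract describes learning DM preferences with a random forest and using active learning to choose which solutions to show the DM next. The loop below is a minimal approximation of that idea, assuming scikit-learn, a synthetic solution set, and a simulated DM; all names, weights, and data are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic candidate solutions: rows are solutions, columns are
# (minimized) objective values.
X = rng.random((200, 3))

# Hypothetical simulated DM: a solution is "preferred" (label 1) when its
# weighted sum of objectives falls below the population median.
w = np.array([0.5, 0.3, 0.2])
utility = X @ w
y = (utility < np.median(utility)).astype(int)

# Seed the preference model with a small set of DM-labeled solutions.
labeled = list(rng.choice(len(X), size=10, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = RandomForestClassifier(n_estimators=100, random_state=0)

# Active learning: at each interaction, refit the preference model and
# query the DM about the solutions it is least certain about
# (uncertainty sampling).
for _ in range(5):
    model.fit(X[labeled], y[labeled])
    uncertainty = 1.0 - model.predict_proba(X[unlabeled]).max(axis=1)
    query = [unlabeled[i] for i in np.argsort(-uncertainty)[:5]]
    labeled.extend(query)  # in a real iEMOA, the DM supplies these labels
    unlabeled = [i for i in unlabeled if i not in query]

model.fit(X[labeled], y[labeled])
print(f"accuracy on unqueried solutions: "
      f"{model.score(X[unlabeled], y[unlabeled]):.3f}")
```

Swapping RandomForestClassifier for sklearn.tree.DecisionTreeClassifier yields a decision-tree baseline analogous to the one the study compares against; per the abstract, the forest learns DM preferences significantly more accurately, and uncertainty-driven querying improves it further.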