A Large-Scale Experimental Evaluation of High-Performing Multi- and Many-Objective Evolutionary Algorithms

Detailed Bibliography
Title: A Large-Scale Experimental Evaluation of High-Performing Multi- and Many-Objective Evolutionary Algorithms
Authors: Bezerra, Leonardo, López-Ibáñez, Manuel, Stützle, Thomas
Source: Bezerra, L C T, Lopez-Ibanez, M & Stützle, T 2017, 'A Large-Scale Experimental Evaluation of High-Performing Multi-and Many-Objective Evolutionary Algorithms', Evolutionary Computation. https://doi.org/10.1162/evco_a_00217
Publisher information: MIT Press - Journals, 2018.
Year of publication: 2018
Subjects: Multi-objective optimization, Performance assessment, Automatic algorithm configuration, Evolutionary algorithms, 0211 other engineering and technologies, 0202 electrical engineering, electronic engineering, information engineering, 02 engineering and technology, Exact and natural sciences
Description: Research on multi-objective evolutionary algorithms (MOEAs) has produced over the past decades a large number of algorithms and a rich literature on performance assessment tools to evaluate and compare them. Yet, newly proposed MOEAs are typically compared against very few, often decade-older MOEAs. One reason for this apparent contradiction is the lack of a common baseline for comparison, with each subsequent study often devising its own experimental scenario, slightly different from other studies. As a result, the state of the art in MOEAs is a disputed topic. This article reports a systematic, comprehensive evaluation of a large number of MOEAs that covers a wide range of experimental scenarios. A novelty of this study is the separation between the higher-level algorithmic components related to multi-objective optimization (MO), which characterize each particular MOEA, and the underlying parameters—such as evolutionary operators, population size, etc.—whose configuration may be tuned for each scenario. Instead of relying on a common or “default” parameter configuration that may be low-performing for particular MOEAs or scenarios and unintentionally biased, we tune the parameters of each MOEA for each scenario using automatic algorithm configuration methods. Our results confirm some of the assumed knowledge in the field, while at the same time they provide new insights on the relative performance of MOEAs for many-objective problems. For example, under certain conditions, indicator-based MOEAs are more competitive for such problems than previously assumed. We also analyze problem-specific features affecting performance, the agreement between performance metrics, and the improvement of tuned configurations over the default configurations used in the literature. Finally, the data produced is made publicly available to motivate further analysis and serve as a baseline for future comparisons.
Document type: Article
File description: application/pdf; No full-text files
Language: English
ISSN: 1530-9304, 1063-6560
DOI: 10.1162/evco_a_00217
Access URLs: https://www.research.manchester.ac.uk/portal/files/61730519/BezLopStu2017assessment.pdf
https://pubmed.ncbi.nlm.nih.gov/29155605
https://research.manchester.ac.uk/en/publications/dd900608-9380-48fe-896a-7ec8180c4efa
https://doi.org/10.1162/evco_a_00217
https://difusion.ulb.ac.be/vufind/Record/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/283598/Details
https://www.mitpressjournals.org/doi/full/10.1162/evco_a_00217
https://core.ac.uk/display/132156078
https://www.ncbi.nlm.nih.gov/pubmed/29155605
https://dblp.uni-trier.de/db/journals/ec/ec26.html#BezerraLS18
https://pure.manchester.ac.uk/ws/files/61730519/BezLopStu2017assessment.pdf
Accession number: edsair.doi.dedup.....59dffbe88c6dfc54cdad1cfdea35f62c
Database: OpenAIRE