Benchmarking evolutionary multiobjective optimization algorithms

Bibliographic Details
Published in: IEEE Congress on Evolutionary Computation, pp. 1-8
Main Authors: Mersmann, O., Trautmann, H., Naujoks, B., Weihs, C.
Format: Conference Paper
Language: English
Published: IEEE, 01.07.2010
ISBN: 1424469090, 9781424469093
ISSN: 1089-778X
Online Access: Full Text
Description
Summary: Choosing and tuning an optimization procedure for a given class of nonlinear optimization problems is not an easy task. One way to proceed is to treat this as a tournament in which each procedure competes in different 'disciplines'. Here, disciplines could be either different functions we want to optimize or specific performance measures of the optimization procedure. We would then be interested in the algorithm that performs best in the majority of cases or whose average performance is maximal. We focus on evolutionary multiobjective optimization algorithms (EMOA) and present a novel approach to the design and analysis of evolutionary multiobjective benchmark experiments, based on similar work from the context of machine learning. We concentrate on deriving a consensus among several benchmarks over different test problems and illustrate the methodology by reanalyzing the results of the CEC 2007 EMOA competition.
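As an illustration of the tournament idea described in the summary, the sketch below ranks each algorithm per discipline (test problem or performance measure) and aggregates the rankings with a simple Borda count. This is only one possible consensus rule; the scores, algorithm names, and test problems are hypothetical and not taken from the paper, whose actual consensus method may differ.

from collections import defaultdict

# Hypothetical per-discipline scores (higher is better) for three EMOAs.
# Discipline and algorithm names are invented for illustration only.
scores = {
    "ZDT1":  {"NSGA-II": 0.92, "SPEA2": 0.88, "MOEA/D": 0.95},
    "DTLZ2": {"NSGA-II": 0.81, "SPEA2": 0.86, "MOEA/D": 0.79},
    "WFG4":  {"NSGA-II": 0.74, "SPEA2": 0.70, "MOEA/D": 0.77},
}

def borda_consensus(scores):
    """Aggregate per-discipline rankings into a consensus ranking.

    Each discipline awards k-1 points to its best algorithm, k-2 to the
    second best, and so on (Borda count); points are summed over disciplines.
    """
    points = defaultdict(int)
    for results in scores.values():
        ranked = sorted(results, key=results.get, reverse=True)
        for rank, algo in enumerate(ranked):
            points[algo] += len(ranked) - 1 - rank
    # Sort algorithms by total points, best first.
    return sorted(points.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for algo, pts in borda_consensus(scores):
        print(f"{algo}: {pts} Borda points")

Running the sketch prints MOEA/D as the consensus winner of these invented disciplines; substituting per-measure rankings (e.g., hypervolume or epsilon-indicator results) for the raw scores works the same way.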
DOI: 10.1109/CEC.2010.5586241