A probabilistic metric for comparing metaheuristic optimization algorithms

Bibliographic details
Published in: Structural Safety Vol. 70, pp. 59-70
Main authors: Gomes, Wellison J.S.; Beck, André T.; Lopez, Rafael H.; Miguel, Leandro F.F.
Format: Journal Article
Language: English
Published: Amsterdam, Elsevier Ltd, 01.01.2018
Publisher: Elsevier BV
ISSN: 0167-4730, 1879-3355
Online access: Full text
Description
Abstract:
Highlights:
• Metaheuristic optimization algorithm runs are based on random numbers.
• Several runs are required for comparing the performance of algorithms.
• A single probabilistic metric is proposed for ranking metaheuristic optimization algorithms.
• The metric yields the probability that a given algorithm is better than an alternative, in a single run.
• The metric also quantifies how much better an algorithm is.

The evolution of metaheuristic optimization algorithms towards identification of a global minimum is based on random numbers, making each run unique. Comparing the performance of different algorithms hence requires several runs and some statistical metric of the results. Metrics such as the mean, standard deviation, and best and worst values have been used for this purpose. In this paper, a single probabilistic metric is proposed for comparing metaheuristic optimization algorithms. It is based on the idea of population interference, and yields the probability that a given algorithm produces a smaller (global?) minimum than an alternative algorithm, in a single run. Three benchmark example problems and four optimization algorithms are employed to demonstrate that the proposed metric is better than the usual statistics, such as the mean, standard deviation, and best and worst values obtained over several runs. The proposed metric actually quantifies how much better a given algorithm is in comparison to an alternative algorithm. Statements about the superiority of an algorithm can also be made in consideration of the number of algorithm runs and the number of objective function evaluations allowed in each run.
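The abstract does not spell out the formula, but the quantity it describes corresponds to the interference (stress-strength) probability P(f_A < f_B) = ∫ F_A(x) f_B(x) dx, i.e. the chance that one run of algorithm A ends at a smaller minimum than one run of algorithm B. Below is a minimal Python sketch of an empirical estimator of this probability from independent samples of final objective values over repeated runs; the function name prob_superiority and the synthetic run data are illustrative assumptions, not the authors' implementation.

import numpy as np

def prob_superiority(results_a, results_b):
    """Empirically estimate P(f_A < f_B): the probability that a single
    run of algorithm A yields a smaller minimum than a single run of B.
    Compares every A-run against every B-run; exact ties count half."""
    a = np.asarray(results_a, dtype=float)
    b = np.asarray(results_b, dtype=float)
    wins = np.sum(a[:, None] < b[None, :])   # pairs in which A is strictly better
    ties = np.sum(a[:, None] == b[None, :])  # tied pairs, weighted 0.5
    return (wins + 0.5 * ties) / (a.size * b.size)

# Hypothetical example: 50 runs of each algorithm on the same benchmark.
rng = np.random.default_rng(0)
runs_a = rng.lognormal(mean=0.0, sigma=0.5, size=50)
runs_b = rng.lognormal(mean=0.3, sigma=0.5, size=50)
print(f"P(A better than B in a single run) ~ {prob_superiority(runs_a, runs_b):.3f}")

Under this reading, a value above 0.5 favours algorithm A, and its distance from 0.5 quantifies how much better A is, matching the abstract's claim that the metric measures the size of the advantage as well as its direction.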
DOI: 10.1016/j.strusafe.2017.10.006