A probabilistic metric for comparing metaheuristic optimization algorithms


Detailed bibliography
Published in: Structural Safety, Volume 70, pp. 59-70
Main authors: Gomes, Wellison J.S.; Beck, André T.; Lopez, Rafael H.; Miguel, Leandro F.F.
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier Ltd (Elsevier BV), 01.01.2018
ISSN: 0167-4730, 1879-3355
Description
Summary:

Highlights:
• Metaheuristic optimization algorithm runs are based on random numbers.
• Several runs are required for comparing the performance of algorithms.
• A single probabilistic metric is proposed for ranking metaheuristic optimization algorithms.
• The metric yields the probability that a given algorithm is better than an alternative, in a single run.
• The metric also quantifies how much better an algorithm is.

The evolution of metaheuristic optimization algorithms towards identification of a global minimum is based on random numbers, making each run unique. Comparing the performance of different algorithms hence requires several runs, and some statistical metric of the results. Metrics such as the mean, standard deviation, and best and worst values have been used for this purpose. In this paper, a single probabilistic metric is proposed for comparing metaheuristic optimization algorithms. It is based on the idea of population interference, and yields the probability that a given algorithm produces a smaller minimum than an alternative algorithm, in a single run. Three benchmark example problems and four optimization algorithms are employed to demonstrate that the proposed metric is better than the usual statistics, such as mean, standard deviation, and best and worst values obtained over several runs. The proposed metric actually quantifies how much better a given algorithm is, in comparison to an alternative algorithm. Statements about the superiority of an algorithm can also be made in consideration of the number of algorithm runs and the number of objective function evaluations allowed in each run.
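A minimal sketch of the general idea behind such a probabilistic comparison (this illustrates the concept only, not the paper's exact population-interference formulation): given the final objective values from repeated runs of two algorithms, the probability that a single run of algorithm A yields a smaller minimum than a single run of algorithm B can be estimated as the fraction of run pairs in which A's value is smaller. The function name `prob_better` and the tie-handling rule are assumptions for illustration.

```python
def prob_better(runs_a, runs_b):
    """Estimate P(a single run of A yields a smaller minimum than a
    single run of B) from samples of final objective values.

    Counts the fraction of all pairs (a, b) with a < b; ties count as
    one half, so comparing identical samples gives 0.5.
    """
    wins = sum((a < b) + 0.5 * (a == b) for a in runs_a for b in runs_b)
    return wins / (len(runs_a) * len(runs_b))
```

Unlike reporting the mean or best value alone, this estimate directly answers the question posed in the abstract: with what probability does one algorithm beat the other in a single run, and a value far from 0.5 quantifies how much better it is.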
DOI: 10.1016/j.strusafe.2017.10.006