A Large-Scale Experimental Evaluation of High-Performing Multi- and Many-Objective Evolutionary Algorithms

Bibliographic Details
Published in: Evolutionary Computation (Evol Comput), Vol. 26, No. 4, p. 621
Main Authors: Bezerra, Leonardo C. T.; López-Ibáñez, Manuel; Stützle, Thomas
Format: Journal Article
Language: English
Published: United States, 2018
ISSN: 1530-9304
EISSN: 1530-9304
Abstract: Research on multi-objective evolutionary algorithms (MOEAs) has produced, over the past decades, a large number of algorithms and a rich literature on performance assessment tools to evaluate and compare them. Yet, newly proposed MOEAs are typically compared against only a few, often decade-old MOEAs. One reason for this apparent contradiction is the lack of a common baseline for comparison, with each subsequent study often devising its own experimental scenario, slightly different from other studies. As a result, the state of the art in MOEAs is a disputed topic. This article reports a systematic, comprehensive evaluation of a large number of MOEAs that covers a wide range of experimental scenarios. A novelty of this study is the separation between the higher-level algorithmic components related to multi-objective optimization (MO), which characterize each particular MOEA, and the underlying parameters (such as evolutionary operators, population size, etc.) whose configuration may be tuned for each scenario. Instead of relying on a common or "default" parameter configuration that may be low-performing for particular MOEAs or scenarios and unintentionally biased, we tune the parameters of each MOEA for each scenario using automatic algorithm configuration methods. Our results confirm some of the assumed knowledge in the field, while at the same time providing new insights into the relative performance of MOEAs for many-objective problems. For example, under certain conditions, indicator-based MOEAs are more competitive for such problems than previously assumed. We also analyze problem-specific features affecting performance, the agreement between performance metrics, and the improvement of tuned configurations over the default configurations used in the literature. Finally, the data produced are made publicly available to motivate further analysis and to serve as a baseline for future comparisons.
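
A central methodological point of the study is that each MOEA's defining multi-objective components are kept fixed while its underlying parameters are tuned separately for each experimental scenario by an automatic algorithm configurator, with candidate configurations scored by a quality metric such as the hypervolume. The following Python sketch illustrates that tuning loop only: run_moea, the parameter space, and the instance names are hypothetical placeholders, and a plain random-search configurator stands in for the dedicated automatic configuration tools used in the paper.

import random

def hypervolume_2d(front, ref):
    # Hypervolume of a two-objective front (minimization) w.r.t. a
    # reference point: sweep the points by the first objective and sum
    # the area slabs contributed by the non-dominated staircase.
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def run_moea(params, instance, seed):
    # Hypothetical stand-in for running one configured MOEA on one
    # benchmark instance; returns the final approximation front as a
    # list of objective vectors. A real study would call the MOEA here.
    rng = random.Random(f"{instance}-{seed}")
    return [(rng.random(), rng.random()) for _ in range(params["pop_size"])]

def tune(param_space, instances, budget=100, ref=(1.1, 1.1)):
    # Minimal random-search configurator: sample parameter settings,
    # score each by mean hypervolume over the training instances, and
    # keep the best-scoring configuration for the scenario.
    best, best_score = None, float("-inf")
    for trial in range(budget):
        params = {k: random.choice(v) for k, v in param_space.items()}
        score = sum(hypervolume_2d(run_moea(params, inst, trial), ref)
                    for inst in instances) / len(instances)
        if score > best_score:
            best, best_score = params, score
    return best, best_score

# Hypothetical parameter space and training instances for one scenario.
space = {"pop_size": [20, 50, 100],
         "crossover_prob": [0.7, 0.9, 1.0],
         "mutation_prob": [0.01, 0.05, 0.1]}
config, score = tune(space, instances=["DTLZ2-3obj", "WFG4-3obj"])
print(config, round(score, 4))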
Authors
– Bezerra, Leonardo C. T.
  Email: leobezerra@imd.ufrn.br
  Organization: IMD, Universidade Federal do Rio Grande do Norte, Natal, RN, Brazil; DCC, CI, Universidade Federal da Paraíba, João Pessoa, PB, Brazil
– López-Ibáñez, Manuel
  Email: manuel.lopez-ibanez@manchester.ac.uk
  Organization: Alliance Manchester Business School, University of Manchester, UK
– Stützle, Thomas
  Email: stuetzle@ulb.ac.be
  Organization: IRIDIA, CoDE, Université Libre de Bruxelles, Belgium
DOI: 10.1162/evco_a_00217
Discipline: Engineering; Computer Science
Keywords: automatic algorithm configuration; performance assessment; multi-objective optimization; evolutionary algorithms
Open Access Link: https://www.research.manchester.ac.uk/portal/en/publications/a-largescale-experimental-evaluation-of-highperforming-multi-and-manyobjective-evolutionary-algorithms(dd900608-9380-48fe-896a-7ec8180c4efa).html
PMID: 29155605
URI: https://www.ncbi.nlm.nih.gov/pubmed/29155605
URI: https://www.proquest.com/docview/1966994346