Deep Reinforcement Learning Based Adaptive Operator Selection for Evolutionary Multi-Objective Optimization

Detailed bibliography
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence, Volume 7, Issue 4, pp. 1051-1064
Main authors: Tian, Ye; Li, Xiaopeng; Ma, Haiping; Zhang, Xingyi; Tan, Kay Chen; Jin, Yaochu
Medium: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2023
ISSN: 2471-285X
Abstract Evolutionary algorithms (EAs) have become one of the most effective techniques for multi-objective optimization, and a number of variation operators have been developed to handle problems with various difficulties. While most EAs use a single fixed operator throughout the search, determining the best EA for a new problem is a labor-intensive process. Hence, some recent studies have been dedicated to adaptively selecting the best operators during the search process. To address the exploration-versus-exploitation dilemma in operator selection, this paper proposes a novel operator selection method based on reinforcement learning. In the proposed method, the decision variables are regarded as states and the candidate operators are regarded as actions. By using deep neural networks to learn a policy that estimates the Q value of each action given a state, the proposed method can determine, for each parent, the operator that maximizes its cumulative improvement. An EA is developed based on the proposed method and is verified to be more effective than state-of-the-art algorithms on challenging multi-objective optimization problems.
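To make the mechanism described in the abstract concrete, the sketch below frames operator selection as deep Q-learning: a parent's decision variables serve as the state, each candidate variation operator is an action, and the reward can be taken as the offspring's fitness improvement over its parent. This is a minimal illustration of the general technique, not the authors' published implementation; the names (QNet, select_operator, q_update) and all hyperparameters are hypothetical.

import random
import torch
import torch.nn as nn

class QNet(nn.Module):
    # Maps a parent's decision variables (the state) to one Q value
    # per candidate variation operator (the actions).
    def __init__(self, n_vars, n_ops, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_vars, hidden), nn.ReLU(),
            nn.Linear(hidden, n_ops),
        )

    def forward(self, x):
        return self.net(x)

def select_operator(qnet, parent, n_ops, eps=0.1):
    # Epsilon-greedy choice: explore a random operator with probability eps,
    # otherwise exploit the operator with the highest estimated Q value.
    if random.random() < eps:
        return random.randrange(n_ops)
    with torch.no_grad():
        q = qnet(torch.as_tensor(parent, dtype=torch.float32))
    return int(q.argmax())

def q_update(qnet, optimizer, parent, op, reward, offspring, gamma=0.9):
    # One-step Q-learning update; using the offspring's improvement over its
    # parent as the reward means the learned Q value estimates the cumulative
    # improvement the abstract refers to.
    s = torch.as_tensor(parent, dtype=torch.float32)
    s_next = torch.as_tensor(offspring, dtype=torch.float32)
    with torch.no_grad():
        target = reward + gamma * qnet(s_next).max()
    loss = (qnet(s)[op] - target).pow(2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In a full EA loop, one would call select_operator for each parent, produce an offspring with the chosen operator, compute the reward from the resulting fitness change, and pass the transition to q_update (in practice typically through a replay buffer and a target network, as in standard deep Q-learning).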
Author Zhang, Xingyi
Tan, Kay Chen
Li, Xiaopeng
Ma, Haiping
Tian, Ye
Jin, Yaochu
Author_xml – sequence: 1
  givenname: Ye
  orcidid: 0000-0002-3487-5126
  surname: Tian
  fullname: Tian, Ye
  email: field910921@gmail.com
  organization: Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Institutes of Physical Science and Information Technology, Anhui University, Hefei, Anhui, China
– sequence: 2
  givenname: Xiaopeng
  orcidid: 0000-0003-4387-8107
  surname: Li
  fullname: Li, Xiaopeng
  email: lxp@stu.ahu.edu.cn
  organization: School of Computer Science and Technology, Anhui University, Hefei, China
– sequence: 3
  givenname: Haiping
  orcidid: 0000-0002-3115-6855
  surname: Ma
  fullname: Ma, Haiping
  email: hpma@ahu.edu.cn
  organization: Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Institutes of Physical Science and Information Technology, Anhui University, Hefei, Anhui, China
– sequence: 4
  givenname: Xingyi
  orcidid: 0000-0002-5052-000X
  surname: Zhang
  fullname: Zhang, Xingyi
  email: xyzhanghust@gmail.com
  organization: Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Artificial Intelligence, Anhui University, Hefei, Anhui, China
– sequence: 5
  givenname: Kay Chen
  orcidid: 0000-0002-6802-2463
  surname: Tan
  fullname: Tan, Kay Chen
  email: kctan@polyu.edu.hk
  organization: Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
– sequence: 6
  givenname: Yaochu
  orcidid: 0000-0003-1100-0631
  surname: Jin
  fullname: Jin, Yaochu
  email: jin@uni-bielefeld.de
  organization: Faculty of Technology, Bielefeld University, Bielefeld, North Rhine-Westphalia, Germany
CODEN ITETCU
CitedBy_id crossref_primary_10_3390_info15030168
crossref_primary_10_1016_j_eswa_2024_125555
crossref_primary_10_1007_s13042_024_02300_6
crossref_primary_10_1109_TASE_2023_3327792
crossref_primary_10_1007_s10489_023_05105_2
crossref_primary_10_1109_TAI_2024_3444736
crossref_primary_10_3390_jmse13061068
crossref_primary_10_1016_j_asoc_2025_113790
crossref_primary_10_1016_j_asoc_2025_113791
crossref_primary_10_1109_TETCI_2023_3309736
crossref_primary_10_3390_math13182909
crossref_primary_10_1109_TPDS_2023_3334519
crossref_primary_10_1016_j_swevo_2022_101198
crossref_primary_10_1016_j_mejo_2024_106244
crossref_primary_10_1016_j_neucom_2024_127491
crossref_primary_10_1093_jcde_qwaf023
crossref_primary_10_1016_j_eswa_2025_128913
crossref_primary_10_1016_j_swevo_2025_101892
crossref_primary_10_1109_TETCI_2024_3372441
crossref_primary_10_1016_j_eswa_2025_127227
crossref_primary_10_1007_s11227_024_06016_w
crossref_primary_10_1007_s11227_025_07527_w
crossref_primary_10_1109_TASE_2025_3536198
crossref_primary_10_1007_s10489_024_05906_z
crossref_primary_10_3390_rs15163932
crossref_primary_10_1109_TSMC_2023_3345928
crossref_primary_10_26599_TST_2024_9010185
crossref_primary_10_1007_s40747_025_01846_4
crossref_primary_10_1016_j_swevo_2023_101449
crossref_primary_10_1016_j_neucom_2024_127943
crossref_primary_10_1108_RIA_12_2024_0270
crossref_primary_10_1016_j_swevo_2024_101746
crossref_primary_10_1109_TETCI_2024_3359042
crossref_primary_10_1016_j_cie_2025_110863
crossref_primary_10_1016_j_swevo_2025_102139
crossref_primary_10_1093_jcde_qwaf014
crossref_primary_10_1007_s13042_024_02297_y
crossref_primary_10_1007_s11227_024_06258_8
crossref_primary_10_3390_separations12080203
crossref_primary_10_1109_TTE_2024_3400534
crossref_primary_10_1016_j_eswa_2025_129538
crossref_primary_10_26599_TST_2025_9010012
crossref_primary_10_1016_j_eswa_2024_124929
crossref_primary_10_1109_TEVC_2024_3376729
crossref_primary_10_61435_ijred_2024_59988
crossref_primary_10_1109_TSMC_2023_3305541
crossref_primary_10_1016_j_ins_2024_121397
crossref_primary_10_1016_j_swevo_2025_101935
crossref_primary_10_1016_j_ins_2024_120267
crossref_primary_10_1016_j_neucom_2025_130633
crossref_primary_10_3390_math12060913
crossref_primary_10_1016_j_eswa_2025_129458
crossref_primary_10_1109_TPAMI_2025_3554669
crossref_primary_10_1016_j_eswa_2024_123592
crossref_primary_10_1016_j_cie_2025_110856
crossref_primary_10_1016_j_engappai_2025_110447
crossref_primary_10_1109_TAI_2025_3545792
crossref_primary_10_1016_j_matcom_2025_01_007
crossref_primary_10_1016_j_energy_2024_133412
crossref_primary_10_1007_s41965_024_00174_9
crossref_primary_10_1109_TETCI_2022_3221940
crossref_primary_10_1109_TAES_2023_3312626
crossref_primary_10_1016_j_eswa_2024_125722
crossref_primary_10_1016_j_rico_2025_100606
crossref_primary_10_1016_j_rcim_2025_103140
crossref_primary_10_1016_j_cie_2024_109917
crossref_primary_10_1016_j_cja_2024_103351
crossref_primary_10_1007_s40747_025_01845_5
crossref_primary_10_1016_j_jii_2025_100829
crossref_primary_10_1016_j_swevo_2025_101949
crossref_primary_10_1016_j_asoc_2024_111697
crossref_primary_10_1016_j_swevo_2024_101644
crossref_primary_10_1016_j_oceaneng_2025_121241
crossref_primary_10_1109_TAI_2025_3528381
crossref_primary_10_1016_j_swevo_2024_101683
crossref_primary_10_3390_pr13010095
crossref_primary_10_1016_j_swevo_2025_102037
crossref_primary_10_1109_JAS_2023_123687
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
DOI 10.1109/TETCI.2022.3146882
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Electronics & Communications Abstracts
Technology Research Database
Advanced Technologies Database with Aerospace
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Statistics
EISSN 2471-285X
EndPage 1064
ExternalDocumentID 10_1109_TETCI_2022_3146882
9712324
Genre orig-research
GrantInformation_xml – fundername: Key Program of Natural Science Project of Educational Commission of Anhui Province
  grantid: KJ2020A0036
– fundername: National Natural Science Foundation of China
  grantid: 61822301; 61876123; 61906001; 62107001; 62136008
  funderid: 10.13039/501100001809
– fundername: Anhui Provincial Natural Science Foundation
  grantid: 2108085QF272
– fundername: Alexander Von Humboldt Professorship for Artificial Intelligence
– fundername: National Key R&D Program of China
  grantid: 2018AAA0100100
– fundername: Federal Ministry of Education and Research
– fundername: Research Grants Council of the Hong Kong Special Administrative Region, China
  grantid: PolyU11202418; PolyU11209219
– fundername: Collaborative Innovation Program of Universities in Anhui Province
  grantid: GXXT-2020-013; GXXT-2020-051
ISICitedReferencesCount 127
ISSN 2471-285X
IsPeerReviewed true
IsScholarly true
Issue 4
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
PQID 2840388997
PQPubID 4437216
PageCount 14
PublicationCentury 2000
PublicationDate 2023-08-01
PublicationDecade 2020
PublicationPlace Piscataway
PublicationTitle IEEE transactions on emerging topics in computational intelligence
PublicationTitleAbbrev TETCI
PublicationYear 2023
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 1051
SubjectTerms Artificial neural networks
Convergence
Deep learning
Evolutionary algorithm
Evolutionary algorithms
Machine learning
multi-objective optimization
Multiple objective analysis
Neural networks
operator selection
Operators
Optimization
Particle swarm optimization
Reinforcement learning
Search process
Sociology
Statistics
Title Deep Reinforcement Learning Based Adaptive Operator Selection for Evolutionary Multi-Objective Optimization
URI https://ieeexplore.ieee.org/document/9712324
https://www.proquest.com/docview/2840388997
Volume 7
WOSCitedRecordID wos000758183300001
hasFullText 1
inHoldings 1
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 2471-285X
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0002951354
  issn: 2471-285X
  databaseCode: RIE
  dateStart: 20170101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE