Multiagent Reinforcement Learning for Hyperparameter Optimization of Convolutional Neural Networks

Detailed bibliography
Published in: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Volume 41, Issue 4, pp. 1034-1047
Main authors: Iranfar, Arman; Zapater, Marina; Atienza, David
Medium: Journal Article
Language: English
Published: New York: IEEE, 01.04.2022
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 0278-0070, 1937-4151
Online access: Get full text
Abstract Nowadays, deep convolutional neural networks (DCNNs) play a significant role in many application domains, such as computer vision, medical imaging, and image processing. Nonetheless, designing a DCNN able to outperform the state of the art is a manual, challenging, and time-consuming task because of the extremely large design space that results from the large number of layers and their corresponding hyperparameters. In this work, we address the challenge of hyperparameter optimization of DCNNs through a novel multiagent reinforcement learning (MARL)-based approach that eliminates the human effort. In particular, we adapt Q-learning and define one learning agent per layer to split the design space into independent, smaller design subspaces, such that each agent fine-tunes the hyperparameters of its assigned layer with respect to a global reward. Moreover, we provide a novel formation of the Q-tables along with a new update rule that facilitates communication among agents. Our MARL-based approach is data driven and able to consider an arbitrary set of design objectives and constraints. We apply our MARL-based solution to different well-known DCNNs, including GoogLeNet, VGG, and U-Net, and to various datasets for image classification and semantic segmentation. Our results show that, compared with the original CNNs, the MARL-based approach can reduce the model size, training time, and inference time by up to 83×, 52%, and 54%, respectively, without any degradation in accuracy. Moreover, our approach is very competitive with state-of-the-art neural architecture search methods in terms of the designed CNN's accuracy and its number of parameters, while significantly reducing the optimization cost.
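To make the per-layer agent idea in the abstract concrete, the following is a minimal, hypothetical Python sketch of layer-wise Q-learning agents that each pick hyperparameters for one layer and share a single global reward. It is not the authors' implementation: the hyperparameter grid (kernel size and filter count), the stateless per-layer Q-tables, the epsilon-greedy schedule, and the placeholder reward function are all assumptions; the paper's actual Q-table formation and update rule are not reproduced here.

import random
from collections import defaultdict

# Assumed hyperparameter choices per layer: (kernel size, number of filters).
ACTIONS = [(k, f) for k in (3, 5, 7) for f in (16, 32, 64)]

class LayerAgent:
    """One agent per layer; keeps a small Q-table over its own hyperparameter choices."""
    def __init__(self, epsilon=0.2, alpha=0.5):
        self.q = defaultdict(float)          # action -> Q-value (stateless, bandit-style)
        self.epsilon, self.alpha = epsilon, alpha

    def act(self):
        if random.random() < self.epsilon:   # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])  # exploit current estimate

    def update(self, action, reward):
        # Incremental update toward the shared global reward.
        self.q[action] += self.alpha * (reward - self.q[action])

def global_reward(config):
    # Placeholder objective: in practice this would train and evaluate the candidate
    # CNN and combine accuracy with cost terms (model size, latency, etc.).
    params = sum(k * k * f for k, f in config)
    return -params / 10000.0

agents = [LayerAgent() for _ in range(4)]    # e.g., one agent per convolutional layer
for episode in range(200):
    config = [agent.act() for agent in agents]   # each agent tunes only its layer
    r = global_reward(config)                    # one shared scalar reward
    for agent, action in zip(agents, config):
        agent.update(action, r)

print([max(ACTIONS, key=lambda a: ag.q[a]) for ag in agents])

Because each agent only searches its own layer's subspace, the table each agent maintains stays small even though the joint design space is exponential in the number of layers; the shared reward is what couples the agents' decisions.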
Author Zapater, Marina
Iranfar, Arman
Atienza, David
Author_xml – sequence: 1
  givenname: Arman
  orcidid: 0000-0001-6803-589X
  surname: Iranfar
  fullname: Iranfar, Arman
  email: arman.iranfar@epfl.ch
  organization: Embedded Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
– sequence: 2
  givenname: Marina
  orcidid: 0000-0002-6971-1965
  surname: Zapater
  fullname: Zapater, Marina
  email: marina.zapater@heig-vd.ch, marina.zapater@epfl.ch
  organization: Embedded Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
– sequence: 3
  givenname: David
  orcidid: 0000-0001-9536-4947
  surname: Atienza
  fullname: Atienza, David
  email: david.atienza@epfl.ch
  organization: Embedded Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
CODEN ITCSDI
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/TCAD.2021.3077193
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Xplore
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Discipline Engineering
EISSN 1937-4151
EndPage 1047
ExternalDocumentID 10_1109_TCAD_2021_3077193
9420739
Genre orig-research
GrantInformation_xml – fundername: H2020 DeepHealth Project
  grantid: 825111
– fundername: Unrestricted Research Gift by the AI Hardware Infrastructure Unit of Facebook for ESL-EPFL
  funderid: 10.13039/100005801
– fundername: ERC Consolidator Grant COMPUSAPIEN
  grantid: 725657
ISICitedReferencesCount 15
ISSN 0278-0070
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 4
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0001-6803-589X
0000-0001-9536-4947
0000-0002-6971-1965
OpenAccessLink https://infoscience.epfl.ch/handle/20.500.14299/177662
PQID 2640428984
PQPubID 85470
PageCount 14
ParticipantIDs proquest_journals_2640428984
crossref_citationtrail_10_1109_TCAD_2021_3077193
crossref_primary_10_1109_TCAD_2021_3077193
ieee_primary_9420739
PublicationCentury 2000
PublicationDate 2022-04-01
PublicationDateYYYYMMDD 2022-04-01
PublicationDate_xml – month: 04
  year: 2022
  text: 2022-04-01
  day: 01
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on computer-aided design of integrated circuits and systems
PublicationTitleAbbrev TCAD
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SSID ssj0014529
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 1034
SubjectTerms Accuracy
Artificial neural networks
Computer architecture
Computer vision
Convolution
Convolutional neural network (CNN)
hyperparameter optimization
Image classification
Image processing
Image segmentation
Kernel
Learning
Medical imaging
Multiagent systems
neural architecture search (NAS)
Neural networks
Optimization
Reinforcement learning
reinforcement Learning (RL)
Search problems
Subspaces
Training
Title Multiagent Reinforcement Learning for Hyperparameter Optimization of Convolutional Neural Networks
URI https://ieeexplore.ieee.org/document/9420739
https://www.proquest.com/docview/2640428984
Volume 41
WOSCitedRecordID wos000770597100021
hasFullText 1
inHoldings 1
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Xplore
  customDbUrl:
  eissn: 1937-4151
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014529
  issn: 0278-0070
  databaseCode: RIE
  dateStart: 19820101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE