Towards a configurable and non-hierarchical search space for NAS

Bibliographic Details
Published in: Neural Networks, Vol. 180, Article 106700
Main Authors: Perrin, Mathieu; Guicquero, William; Paille, Bruno; Sicard, Gilles
Format: Journal Article
Language: English
Published: Elsevier Ltd, United States, 01.12.2024
ISSN: 0893-6080
EISSN: 1879-2782
Abstract: Neural Architecture Search (NAS) outperforms handcrafted Neural Network (NN) design. However, current NAS methods generally use hard-coded search spaces and predefined hierarchical architectures. As a consequence, adapting them to a new problem can be cumbersome, and it is hard to know whether the NAS algorithm or the predefined hierarchical structure impacts performance the most. To improve flexibility and be less reliant on expert knowledge, this paper proposes a NAS methodology in which the search space is easily customizable and allows for full network search. NAS is performed with Gaussian Process (GP)-based Bayesian Optimization (BO) in a continuous architecture embedding space. This embedding is built upon a Wasserstein Autoencoder, regularized by both a Maximum Mean Discrepancy (MMD) penalization and a Fully Input Convex Neural Network (FICNN) latent predictor, trained to infer the parameter count of architectures. This paper first assesses the embedding's suitability for optimization by solving two computationally inexpensive problems: minimizing the number of parameters, and maximizing a zero-shot accuracy proxy. Then, two variants of complexity-aware NAS are performed on CIFAR-10 and STL-10, based on two different search spaces, providing competitive NN architectures with limited model sizes.
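The MMD penalization mentioned in the abstract can be illustrated with a short numpy sketch: in a Wasserstein Autoencoder, a kernel-based MMD term is added to the reconstruction loss to pull the encoder's latent codes toward the prior. The kernel choice, bandwidth, sample sizes, and latent dimension below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel k(x_i, y_j) = exp(-gamma * ||x_i - y_j||^2)
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy between two samples;
    # a WAE adds this term to the loss to match encoder outputs to the prior
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
prior = rng.normal(size=(256, 8))               # samples from a N(0, I) latent prior
codes_good = rng.normal(size=(256, 8))          # encoder outputs already matching the prior
codes_bad = rng.normal(loc=3.0, size=(256, 8))  # encoder outputs far from the prior
print(mmd2(prior, codes_good), mmd2(prior, codes_bad))  # the mismatched codes score higher
```

Minimizing `mmd2` over the encoder's parameters (alongside reconstruction error) is what drives the aggregate latent distribution toward the prior, without forcing each individual code to be noisy as a VAE does.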
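The search procedure described in the abstract, GP-based Bayesian Optimization over a continuous embedding space, can be sketched generically in numpy. This is a minimal BO loop with an RBF-kernel GP and an expected-improvement acquisition, run over a toy 3-dimensional space with a toy quadratic objective standing in for a cheap proxy such as parameter count; the paper's actual embedding, kernel, and acquisition settings are not reproduced here.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=1.0):
    # RBF kernel matrix between two sets of row-stacked points
    return np.exp(-((a[:, None, :] - b[None, :, :]) ** 2).sum(-1) / (2 * ls ** 2))

def gp_posterior(X, y, Xq, noise=1e-6):
    # Exact GP regression (zero mean, unit-variance RBF kernel): mean/std at query points Xq
    K = rbf(X, X) + noise * np.eye(len(X))
    Kq = rbf(Xq, X)
    mu = Kq @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.einsum('ij,ji->i', Kq, np.linalg.solve(K, Kq.T)), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI acquisition for minimization: E[max(best - f, 0)] under the GP posterior
    z = (best - mu) / sigma
    Phi = np.array([0.5 * (1.0 + erf(t / sqrt(2))) for t in z])
    phi = np.exp(-z ** 2 / 2) / sqrt(2 * pi)
    return (best - mu) * Phi + sigma * phi

rng = np.random.default_rng(0)
objective = lambda Z: (Z ** 2).sum(axis=1)   # toy stand-in for a cheap complexity proxy
X = rng.uniform(-2, 2, size=(5, 3))          # initial points in the continuous embedding space
y = objective(X)
for _ in range(20):                          # BO loop: fit GP, evaluate the max-EI candidate
    candidates = rng.uniform(-2, 2, size=(128, 3))
    mu, sd = gp_posterior(X, y, candidates)
    x_next = candidates[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[None]))
print(y.min())  # best objective value found
```

In the paper's setting, each point would be decoded back into an architecture by the Wasserstein Autoencoder before evaluation; here the loop simply shows how the GP surrogate and acquisition trade off exploring uncertain regions against exploiting low predicted values.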
ArticleNumber 106700
Author_xml – sequence: 1
  givenname: Mathieu
  orcidid: 0009-0008-3997-0975
  surname: Perrin
  fullname: Perrin, Mathieu
  email: mat@mathieuperrin.com
  organization: ST Microelectronics, 12 Rue Jules Horowitz, Grenoble, 38019, France
– sequence: 2
  givenname: William
  orcidid: 0000-0001-8925-0441
  surname: Guicquero
  fullname: Guicquero, William
  email: william.guicquero@cea.fr
  organization: CEA-LETI, Université Grenoble Alpes, F-38000, 17 Avenue des Martyrs, Grenoble, 38054, France
– sequence: 3
  givenname: Bruno
  surname: Paille
  fullname: Paille, Bruno
  email: bruno.paille@st.com
  organization: ST Microelectronics, 12 Rue Jules Horowitz, Grenoble, 38019, France
– sequence: 4
  givenname: Gilles
  surname: Sicard
  fullname: Sicard, Gilles
  email: gilles.sicard@cea.fr
  organization: CEA-LETI, Université Grenoble Alpes, F-38000, 17 Avenue des Martyrs, Grenoble, 38054, France
BackLink https://www.ncbi.nlm.nih.gov/pubmed/39293175 (View this record in MEDLINE/PubMed)
CitedBy_id crossref_primary_10_1016_j_measurement_2025_118571
crossref_primary_10_1016_j_neunet_2025_107819
Copyright © 2024. Published by Elsevier Ltd.
DOI 10.1016/j.neunet.2024.106700
Discipline Computer Science
ExternalDocumentID 39293175
10_1016_j_neunet_2024_106700
S0893608024006245
IsPeerReviewed true
IsScholarly true
Keywords Wasserstein autoencoder; Neural architecture search; Bayesian optimization; Convolutional neural network
PMID 39293175
– reference: Karnin, Z. S., Koren, T., & Somekh, O. (2013). Almost Optimal Exploration in Multi-Armed Bandits. In
– reference: Kingma, D. P., & Welling, M. (2014). Auto-Encoding Variational Bayes. In
– year: 2020
  ident: b27
  article-title: Single path one-shot neural architecture search with uniform sampling
  publication-title: European conference on computer vision
– year: 2018
  ident: b96
  article-title: Learning transferable architectures for scalable image recognition
  publication-title: IEEE conference on computer vision and pattern recognition
– reference: Chen, W., Gong, X., & Wang, Z. (2021). Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective. In
– year: 2015
  ident: b70
  article-title: Going deeper with convolutions
  publication-title: IEEE conference on computer vision and pattern recognition
– reference: Thost, V., & Chen, J. (2021). Directed Acyclic Graph Neural Networks. In
– year: 2021
  ident: b4
  article-title: Learning in high dimension always amounts to extrapolation
– year: 2019
  ident: b31
  article-title: Searching for MobileNetV3
  publication-title: International conference on computer vision
– reference: Tolstikhin, I. O., Bousquet, O., Gelly, S., & Schölkopf, B. (2018). Wasserstein Auto-Encoders. In
– year: 2022
  ident: b61
  article-title: CR-LSO: Convex neural architecture optimization in the latent space of graph variational autoencoder with input convex neural networks
– year: 2019
  ident: b16
  article-title: ChamNet: Towards efficient network design through platform-aware model adaptation
  publication-title: IEEE conference on computer vision and pattern recognition
– reference: Lopes, V., Alirezazadeh, S., & Alexandre, L. A. (2021). EPE-NAS: Efficient Performance Estimation Without Training for Neural Architecture Search. In
– year: 2019
  ident: b64
  article-title: Closed-form expressions for maximum mean discrepancy with applications to Wasserstein auto-encoders
– reference: Bender, G., Kindermans, P.-J., Zoph, B., Vasudevan, V., & Le, Q. (2018). Understanding and Simplifying One-Shot Architecture Search. In
– start-page: 1
  year: 2023
  end-page: 15
  ident: b49
  article-title: Toward less constrained macro-neural architecture search
  publication-title: IEEE Transactions on Neural Networks and Learning Systems
– year: 2017
  ident: b59
  article-title: Neural discrete representation learning
  publication-title: Advances in neural information processing systems (NeurIPS)
– year: 2020
  ident: b77
  article-title: FBNetV2: Differentiable neural architecture search for spatial and channel dimensions
  publication-title: IEEE conference on computer vision and pattern recognition
– year: 2018
  ident: b55
  article-title: Neural architecture optimization
  publication-title: Advances in neural information processing systems (NeurIPS)
– reference: Cai, H., Chen, T., Zhang, W., Yu, Y., & Wang, J. (2018). Efficient Architecture Search by Network Transformation. In
– reference: Cai, H., Gan, C., Wang, T., Zhang, Z., & Han, S. (2020). Once-for-All: Train One Network and Specialize It for Efficient Deployment. In
– reference: Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. In
– year: 2019
  ident: b91
  article-title: D-VAE: A variational autoencoder for directed acyclic graphs
  publication-title: Advances in neural information processing systems (NeurIPS)
– volume: 31
  start-page: 1242
  year: 2020
  end-page: 1254
  ident: b69
  article-title: Completely automated CNN architecture design based on blocks
  publication-title: IEEE Transactions on Neural Networks and Learning Systems
– year: 2022
  ident: b76
  article-title: Patches are all you need?
– start-page: 1
  year: 2022
  end-page: 15
  ident: b78
  article-title: NPENAS: Neural predictor guided evolution for neural architecture search
  publication-title: IEEE Transactions on Neural Networks and Learning Systems
– year: 2014
  ident: 10.1016/j.neunet.2024.106700_b25
– year: 2017
  ident: 10.1016/j.neunet.2024.106700_b18
– year: 2022
  ident: 10.1016/j.neunet.2024.106700_b61
– ident: 10.1016/j.neunet.2024.106700_b50
  doi: 10.1007/978-3-030-86383-8_44
– ident: 10.1016/j.neunet.2024.106700_b95
– year: 2019
  ident: 10.1016/j.neunet.2024.106700_b16
  article-title: ChamNet: Towards efficient network design through platform-aware model adaptation
– ident: 10.1016/j.neunet.2024.106700_b66
– ident: 10.1016/j.neunet.2024.106700_b8
– ident: 10.1016/j.neunet.2024.106700_b47
– volume: 486
  start-page: 189
  year: 2022
  ident: 10.1016/j.neunet.2024.106700_b38
  article-title: A neural architecture generator for efficient search space
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2021.10.118
– year: 2019
  ident: 10.1016/j.neunet.2024.106700_b34
– year: 2021
  ident: 10.1016/j.neunet.2024.106700_b46
  article-title: Zen-NAS: A zero-shot NAS for high-performance image recognition
– ident: 10.1016/j.neunet.2024.106700_b5
– volume: 13
  start-page: 723
  year: 2012
  ident: 10.1016/j.neunet.2024.106700_b26
  article-title: A kernel two-sample test
  publication-title: Journal of Machine Learning Research
– ident: 10.1016/j.neunet.2024.106700_b1
– volume: 31
  start-page: 1242
  issue: 4
  year: 2020
  ident: 10.1016/j.neunet.2024.106700_b69
  article-title: Completely automated CNN architecture design based on blocks
  publication-title: IEEE Transactions on Neural Networks and Learning Systems
  doi: 10.1109/TNNLS.2019.2919608
– ident: 10.1016/j.neunet.2024.106700_b19
– year: 2021
  ident: 10.1016/j.neunet.2024.106700_b4
– year: 2021
  ident: 10.1016/j.neunet.2024.106700_b82
  article-title: How powerful are performance predictors in neural architecture search?
– ident: 10.1016/j.neunet.2024.106700_b57
– ident: 10.1016/j.neunet.2024.106700_b72
– ident: 10.1016/j.neunet.2024.106700_b30
– year: 2020
  ident: 10.1016/j.neunet.2024.106700_b43
  article-title: Neural graph embedding for neural architecture search
– year: 2019
  ident: 10.1016/j.neunet.2024.106700_b17
  article-title: BERT: Pre-training of deep bidirectional transformers for language understanding
– ident: 10.1016/j.neunet.2024.106700_b75
– year: 2021
  ident: 10.1016/j.neunet.2024.106700_b15
  article-title: FBNetV3: Joint architecture-recipe search using predictor pretraining
– year: 2020
  ident: 10.1016/j.neunet.2024.106700_b77
  article-title: FBNetV2: Differentiable neural architecture search for spatial and channel dimensions
– year: 2018
  ident: 10.1016/j.neunet.2024.106700_b39
  article-title: Neural architecture search with Bayesian optimisation and optimal transport
– ident: 10.1016/j.neunet.2024.106700_b79
– year: 2020
  ident: 10.1016/j.neunet.2024.106700_b51
  article-title: MUXConv: Information multiplexing in convolutional neural networks
– year: 2019
  ident: 10.1016/j.neunet.2024.106700_b64
– year: 2015
  ident: 10.1016/j.neunet.2024.106700_b70
  article-title: Going deeper with convolutions
– start-page: 1
  year: 2022
  ident: 10.1016/j.neunet.2024.106700_b78
  article-title: NPENAS: Neural predictor guided evolution for neural architecture search
  publication-title: IEEE Transactions on Neural Networks and Learning Systems
– start-page: 1
  year: 2023
  ident: 10.1016/j.neunet.2024.106700_b84
  article-title: PP-NAS: Searching for plug-and-play blocks on convolutional neural networks
  publication-title: IEEE Transactions on Neural Networks and Learning Systems
  doi: 10.1109/TNNLS.2023.3344294
– ident: 10.1016/j.neunet.2024.106700_b23
– ident: 10.1016/j.neunet.2024.106700_b60
– year: 2019
  ident: 10.1016/j.neunet.2024.106700_b62
  article-title: Regularized evolution for image classifier architecture search
– ident: 10.1016/j.neunet.2024.106700_b40
– ident: 10.1016/j.neunet.2024.106700_b12
– ident: 10.1016/j.neunet.2024.106700_b85
– ident: 10.1016/j.neunet.2024.106700_b90
  doi: 10.5244/C.30.87
– ident: 10.1016/j.neunet.2024.106700_b89
– volume: 323
  start-page: 533
  issue: 6088
  year: 1986
  ident: 10.1016/j.neunet.2024.106700_b63
  article-title: Learning representations by back-propagating errors
  publication-title: Nature
  doi: 10.1038/323533a0
– year: 2020
  ident: 10.1016/j.neunet.2024.106700_b93
  article-title: Fast hardware-aware neural architecture search
– year: 2021
  ident: 10.1016/j.neunet.2024.106700_b10
– ident: 10.1016/j.neunet.2024.106700_b92
– year: 2016
  ident: 10.1016/j.neunet.2024.106700_b73
– year: 2022
  ident: 10.1016/j.neunet.2024.106700_b67
  article-title: Grouped pointwise convolutions reduce parameters in convolutional neural networks
– year: 2020
  ident: 10.1016/j.neunet.2024.106700_b13
– year: 2020
  ident: 10.1016/j.neunet.2024.106700_b27
  article-title: Single path one-shot neural architecture search with uniform sampling
– year: 2018
  ident: 10.1016/j.neunet.2024.106700_b32
  article-title: Squeeze-and-excitation networks
– ident: 10.1016/j.neunet.2024.106700_b74
– start-page: 1
  year: 2023
  ident: 10.1016/j.neunet.2024.106700_b49
  article-title: Toward less constrained macro-neural architecture search
– year: 2018
  ident: 10.1016/j.neunet.2024.106700_b65
  article-title: MobileNetV2: Inverted residuals and linear bottlenecks
– volume: 153
  start-page: 235
  issue: 1
  year: 2007
  ident: 10.1016/j.neunet.2024.106700_b14
  article-title: An overview of bilevel optimization
  publication-title: Annals of Operations Research
  doi: 10.1007/s10479-007-0176-2
– volume: 18
  start-page: 185:1
  year: 2017
  ident: 10.1016/j.neunet.2024.106700_b44
  article-title: Hyperband: A novel bandit-based approach to hyperparameter optimization
  publication-title: Journal of Machine Learning Research (JMLR)
– ident: 10.1016/j.neunet.2024.106700_b68
– year: 2018
  ident: 10.1016/j.neunet.2024.106700_b55
  article-title: Neural architecture optimization
– year: 2020
  ident: 10.1016/j.neunet.2024.106700_b28
  article-title: Attention based pruning for shift networks
– start-page: 3114
  year: 2022
  ident: 10.1016/j.neunet.2024.106700_b37
  article-title: Graph masked autoencoder enhanced predictor for neural architecture search
– ident: 10.1016/j.neunet.2024.106700_b41
– year: 2020
  ident: 10.1016/j.neunet.2024.106700_b87
  article-title: Does unsupervised architecture representation learning help neural architecture search?
– year: 2018
  ident: 10.1016/j.neunet.2024.106700_b96
  article-title: Learning transferable architectures for scalable image recognition
– volume: 109
  start-page: 1925
  issue: 9–10
  year: 2020
  ident: 10.1016/j.neunet.2024.106700_b58
  article-title: High-dimensional Bayesian optimization using low-dimensional feature spaces
  publication-title: Machine Learning
  doi: 10.1007/s10994-020-05899-z
– ident: 10.1016/j.neunet.2024.106700_b3
– year: 2019
  ident: 10.1016/j.neunet.2024.106700_b83
  article-title: FBNet: Hardware-aware efficient ConvNet design via differentiable neural architecture search
– ident: 10.1016/j.neunet.2024.106700_b22
  doi: 10.1007/978-3-030-05318-5_3
– ident: 10.1016/j.neunet.2024.106700_b88
– year: 2023
  ident: 10.1016/j.neunet.2024.106700_b81
– year: 2019
  ident: 10.1016/j.neunet.2024.106700_b71
  article-title: MnasNet: Platform-aware neural architecture search for mobile
– year: 2019
  ident: 10.1016/j.neunet.2024.106700_b91
  article-title: D-VAE: A variational autoencoder for directed acyclic graphs
– year: 2021
  ident: 10.1016/j.neunet.2024.106700_b54
  article-title: Smooth variational graph embeddings for efficient neural architecture search
– year: 2016
  ident: 10.1016/j.neunet.2024.106700_b29
  article-title: Deep residual learning for image recognition
– ident: 10.1016/j.neunet.2024.106700_b36
– volume: 34
  start-page: 3832
  issue: 8
  year: 2023
  ident: 10.1016/j.neunet.2024.106700_b45
  article-title: Automatic design of convolutional neural network architectures under resource constraints
  publication-title: IEEE Transactions on Neural Networks and Learning Systems
  doi: 10.1109/TNNLS.2021.3123105
– ident: 10.1016/j.neunet.2024.106700_b80
  doi: 10.1609/aaai.v35i12.17233
– volume: 34
  start-page: 1
  year: 1992
  ident: 10.1016/j.neunet.2024.106700_b2
  article-title: Hierarchical optimization: an introduction
  publication-title: Annals of Operations Research
  doi: 10.1007/BF02098169
– year: 2021
  ident: 10.1016/j.neunet.2024.106700_b11
  article-title: Graph-based neural architecture search with operation embeddings
– volume: 25
  start-page: 277
  issue: 2
  year: 2021
  ident: 10.1016/j.neunet.2024.106700_b53
  article-title: Multiobjective evolutionary design of deep convolutional neural networks for image classification
  publication-title: IEEE Transactions on Evolutionary Computation
  doi: 10.1109/TEVC.2020.3024708
– ident: 10.1016/j.neunet.2024.106700_b21
  doi: 10.1007/978-3-030-05318-5_3
– year: 2011
  ident: 10.1016/j.neunet.2024.106700_b35
  article-title: Sequential model-based optimization for general algorithm configuration
– year: 2022
  ident: 10.1016/j.neunet.2024.106700_b20
  article-title: PACE: A parallelizable computation encoder for directed acyclic graphs
– ident: 10.1016/j.neunet.2024.106700_b9
– year: 2017
  ident: 10.1016/j.neunet.2024.106700_b33
  article-title: Densely connected convolutional networks
– ident: 10.1016/j.neunet.2024.106700_b7
  doi: 10.1609/aaai.v32i1.11709
– ident: 10.1016/j.neunet.2024.106700_b42
– ident: 10.1016/j.neunet.2024.106700_b6
– year: 2019
  ident: 10.1016/j.neunet.2024.106700_b31
  article-title: Searching for MobileNetV3
– volume: 43
  start-page: 2971
  issue: 9
  year: 2021
  ident: 10.1016/j.neunet.2024.106700_b52
  article-title: Neural architecture transfer
  publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
  doi: 10.1109/TPAMI.2021.3052758
– year: 2022
  ident: 10.1016/j.neunet.2024.106700_b76
– ident: 10.1016/j.neunet.2024.106700_b56
– year: 2017
  ident: 10.1016/j.neunet.2024.106700_b59
  article-title: Neural discrete representation learning
– year: 2018
  ident: 10.1016/j.neunet.2024.106700_b48
  article-title: Progressive neural architecture search
– year: 2018
  ident: 10.1016/j.neunet.2024.106700_b24
– year: 2018
  ident: 10.1016/j.neunet.2024.106700_b94
  article-title: ShuffleNet: An extremely efficient convolutional neural network for mobile devices
– volume: vol. 139
  start-page: 11670
  year: 2021
  ident: 10.1016/j.neunet.2024.106700_b86
  article-title: CATE: computation-aware neural architecture encoding with transformers
Snippet Neural Architecture Search (NAS) outperforms handcrafted Neural Network (NN) design. However, current NAS methods generally use hard-coded search spaces, and...
SourceID proquest
pubmed
crossref
elsevier
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 106700
SubjectTerms Algorithms
Bayes Theorem
Bayesian optimization
Convolutional neural network
Humans
Neural architecture search
Neural Networks, Computer
Normal Distribution
Wasserstein autoencoder
Title Towards a configurable and non-hierarchical search space for NAS
URI https://dx.doi.org/10.1016/j.neunet.2024.106700
https://www.ncbi.nlm.nih.gov/pubmed/39293175
https://www.proquest.com/docview/3106732334
Volume 180
WOSCitedRecordID wos001318038500001
hasFullText 1
inHoldings 1
journalDatabaseRights – providerCode: PRVESC
  databaseName: Elsevier SD Freedom Collection Journals 2021
  customDbUrl:
  eissn: 1879-2782
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0006843
  issn: 0893-6080
  databaseCode: AIEXJ
  dateStart: 19950101
  isFulltext: true
  titleUrlDefault: https://www.sciencedirect.com
  providerName: Elsevier
linkProvider Elsevier