Towards a configurable and non-hierarchical search space for NAS

Bibliographic Details
Published in: Neural Networks, Vol. 180, p. 106700
Main Authors: Perrin, Mathieu; Guicquero, William; Paille, Bruno; Sicard, Gilles
Format: Journal Article
Language: English
Published: United States: Elsevier Ltd, 01.12.2024
ISSN: 0893-6080, 1879-2782
Description
Summary: Neural Architecture Search (NAS) outperforms handcrafted Neural Network (NN) design. However, current NAS methods generally use hard-coded search spaces and predefined hierarchical architectures. As a consequence, adapting them to a new problem can be cumbersome, and it is hard to know whether the NAS algorithm or the predefined hierarchical structure has the greater impact on performance. To improve flexibility and rely less on expert knowledge, this paper proposes a NAS methodology in which the search space is easily customizable and allows for full network search. NAS is performed with Gaussian Process (GP)-based Bayesian Optimization (BO) in a continuous architecture embedding space. This embedding is built upon a Wasserstein Autoencoder, regularized by both a Maximum Mean Discrepancy (MMD) penalty and a Fully Input Convex Neural Network (FICNN) latent predictor trained to infer the parameter count of architectures. The paper first assesses the embedding’s suitability for optimization by solving two computationally inexpensive problems: minimizing the number of parameters, and maximizing a zero-shot accuracy proxy. Then, two variants of complexity-aware NAS are performed on CIFAR-10 and STL-10, based on two different search spaces, providing competitive NN architectures with limited model sizes.
DOI: 10.1016/j.neunet.2024.106700
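
Illustrative sketch: the summary above mentions a Wasserstein Autoencoder regularized by a Maximum Mean Discrepancy (MMD) penalty. Below is a minimal sketch of how such a penalty is commonly computed between encoded latent codes and samples from the latent prior; the RBF kernel, bandwidth, and variable names are assumptions for illustration, not the authors' exact implementation.

import torch

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between two batches of latent codes.
    sq_dists = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def mmd_penalty(z_encoded, z_prior, sigma=1.0):
    # Biased estimate of squared MMD between the aggregated encoder
    # distribution and the latent prior; minimizing it pushes the
    # architecture embedding to match the prior.
    k_xx = rbf_kernel(z_encoded, z_encoded, sigma).mean()
    k_yy = rbf_kernel(z_prior, z_prior, sigma).mean()
    k_xy = rbf_kernel(z_encoded, z_prior, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

Example use (hypothetical names): with z_encoded = encoder(architecture_batch) and z_prior = torch.randn_like(z_encoded), the autoencoder training loss would combine a reconstruction term with lambda_mmd * mmd_penalty(z_encoded, z_prior).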