PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search


Bibliographic details
Published in: 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1-6
Authors: Ahmad, Afzal; Xie, Zhiyao; Zhang, Wei
Format: Conference proceeding
Language: English
Published: IEEE, July 9, 2023
Online access: Full text
Abstract Differentiable Neural Architecture Search (NAS) relies on aggressive weight-sharing to reduce its search cost. This leads to GPU-memory bottlenecks that hamper the algorithm's scalability. To resolve these bottlenecks, we propose a perturbations-based evolutionary approach that significantly reduces the memory cost while largely maintaining the efficiency benefits of weight-sharing. Our approach makes minute changes to compact neural architectures and measures their impact on performance. In this way, it extracts high-quality motifs from the search space. We utilize these perturbations to perform NAS in compact models evolving over time to traverse the search space. Our method disentangles GPU-memory consumption from search space size, offering exceptional scalability to large search spaces. Results show competitive accuracy on multiple benchmarks, including CIFAR10, ImageNet2012, and NASBench-301. Specifically, our approach improves accuracy on ImageNet and NASBench-301 by 0.3% and 0.87%, respectively. Furthermore, the memory consumption of search is reduced by roughly 80% against state-of-the-art weight-shared differentiable NAS works while achieving a search time of only 6 GPU hours.
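The evolutionary loop described in the abstract, perturb a compact architecture slightly, measure the impact, and keep changes that help, can be illustrated with a deliberately minimal sketch. Everything here is a hypothetical stand-in: the `OPS` vocabulary, the single-operation mutation in `perturb`, and the toy `fitness` proxy are placeholders, not the paper's actual search space, perturbation scheme, or accuracy measurement.

```python
import random

# Hypothetical operation vocabulary; a stand-in for a real NAS search space.
OPS = ["conv3x3", "conv5x5", "maxpool", "skip"]

def fitness(arch):
    # Toy proxy for validation accuracy of a compact model; NOT the
    # paper's measurement, just something monotone enough to optimize.
    return sum(len(op) for op in arch) + random.random()

def perturb(arch):
    # A "minute change": swap the operation at one randomly chosen position.
    child = list(arch)
    i = random.randrange(len(child))
    child[i] = random.choice([op for op in OPS if op != child[i]])
    return child

def evolve(arch, steps=50):
    # Greedy evolutionary search: keep only perturbations that improve fitness.
    best, best_fit = arch, fitness(arch)
    for _ in range(steps):
        cand = perturb(best)
        cand_fit = fitness(cand)
        if cand_fit > best_fit:
            best, best_fit = cand, cand_fit
    return best

seed = ["conv3x3"] * 4
result = evolve(seed)
```

Because only one compact candidate model exists at a time, memory use in this loop is independent of the search-space size, which is the scalability property the abstract claims for the method.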
Authors:
1. Afzal Ahmad (afzal.ahmad@connect.ust.hk), The Hong Kong University of Science and Technology
2. Zhiyao Xie (eezhiyao@ust.hk), The Hong Kong University of Science and Technology
3. Wei Zhang (eeweiz@ust.hk), The Hong Kong University of Science and Technology
ContentType Conference Proceeding
DOI 10.1109/DAC56929.2023.10247756
EISBN 9798350323481
EndPage 6
ExternalDocumentID 10247756
Genre orig-research
Language English
PageCount 6
PublicationDate 2023-July-9
PublicationTitle 2023 60th ACM/IEEE Design Automation Conference (DAC)
PublicationTitleAbbrev DAC
PublicationYear 2023
Publisher IEEE
StartPage 1
SubjectTerms Costs
Design automation
Memory architecture
Memory management
Microprocessors
Perturbation methods
Scalability
Title PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search
URI https://ieeexplore.ieee.org/document/10247756