PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search

Bibliographic Details
Published in: 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1-6
Main authors: Ahmad, Afzal; Xie, Zhiyao; Zhang, Wei
Format: Conference Proceeding
Language: English
Published: IEEE, July 9, 2023
Subjects:
Online access: Full text
Abstract Differentiable Neural Architecture Search (NAS) relies on aggressive weight-sharing to reduce its search cost. This leads to GPU-memory bottlenecks that hamper the algorithm's scalability. To resolve these bottlenecks, we propose a perturbations-based evolutionary approach that significantly reduces the memory cost while largely maintaining the efficiency benefits of weight-sharing. Our approach makes minute changes to compact neural architectures and measures their impact on performance. In this way, it extracts high-quality motifs from the search space. We utilize these perturbations to perform NAS in compact models evolving over time to traverse the search space. Our method disentangles GPU-memory consumption from search space size, offering exceptional scalability to large search spaces. Results show competitive accuracy on multiple benchmarks, including CIFAR10, ImageNet2012, and NASBench-301. Specifically, our approach improves accuracy on ImageNet and NASBench-301 by 0.3% and 0.87%, respectively. Furthermore, the memory consumption of search is reduced by roughly 80% against state-of-the-art weight-shared differentiable NAS works while achieving a search time of only 6 GPU hours.
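The abstract describes a loop of making minute changes to a compact architecture, measuring their impact on performance, and keeping the changes that help. As a rough illustration only (this is not the authors' actual PertNAS algorithm, and all names here — `OPS`, `fitness`, `perturb`, `evolve` — are hypothetical), that perturbation-driven evolutionary search can be sketched in Python with a toy fitness stand-in:

```python
import random

# Hypothetical search space: an architecture is a fixed-length list of
# operation choices (a stand-in for the cell-based spaces used in NAS).
OPS = ["conv3x3", "conv5x5", "skip", "maxpool"]

def fitness(arch):
    # Toy proxy that simply counts conv3x3 ops. A real system would
    # train/evaluate the compact model to measure accuracy instead.
    return sum(1.0 for op in arch if op == "conv3x3")

def perturb(arch, rng):
    # Minute change: replace the operation at one random position.
    pos = rng.randrange(len(arch))
    new_op = rng.choice([op for op in OPS if op != arch[pos]])
    child = list(arch)
    child[pos] = new_op
    return child

def evolve(length=8, steps=200, seed=0):
    # Evolve a single compact model over time: propose a perturbation,
    # measure its impact, and keep it only if it does not hurt.
    rng = random.Random(seed)
    current = [rng.choice(OPS) for _ in range(length)]
    best_score = fitness(current)
    for _ in range(steps):
        candidate = perturb(current, rng)
        score = fitness(candidate)
        if score >= best_score:
            current, best_score = candidate, score
    return current, best_score
```

Because only one compact model is held in memory at a time, a loop of this shape keeps GPU-memory cost independent of the size of the search space, which is the scalability property the abstract emphasizes.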
Author Zhang, Wei
Ahmad, Afzal
Xie, Zhiyao
Author_xml – sequence: 1
  givenname: Afzal
  surname: Ahmad
  fullname: Ahmad, Afzal
  email: afzal.ahmad@connect.ust.hk
  organization: The Hong Kong University of Science and Technology
– sequence: 2
  givenname: Zhiyao
  surname: Xie
  fullname: Xie, Zhiyao
  email: eezhiyao@ust.hk
  organization: The Hong Kong University of Science and Technology
– sequence: 3
  givenname: Wei
  surname: Zhang
  fullname: Zhang, Wei
  email: eeweiz@ust.hk
  organization: The Hong Kong University of Science and Technology
ContentType Conference Proceeding
DBID 6IE
6IH
CBEJK
RIE
RIO
DOI 10.1109/DAC56929.2023.10247756
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan (POP) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP) 1998-present
DatabaseTitleList
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
EISBN 9798350323481
EndPage 6
ExternalDocumentID 10247756
Genre orig-research
GroupedDBID 6IE
6IH
ACM
ALMA_UNASSIGNED_HOLDINGS
CBEJK
RIE
RIO
IEDL.DBID RIE
ISICitedReferencesCount 0
IngestDate Wed Aug 27 02:47:47 EDT 2025
IsPeerReviewed false
IsScholarly true
Language English
LinkModel DirectLink
PageCount 6
ParticipantIDs ieee_primary_10247756
PublicationCentury 2000
PublicationDate 2023-July-9
PublicationDateYYYYMMDD 2023-07-09
PublicationDate_xml – month: 07
  year: 2023
  text: 2023-July-9
  day: 09
PublicationDecade 2020
PublicationTitle 2023 60th ACM/IEEE Design Automation Conference (DAC)
PublicationTitleAbbrev DAC
PublicationYear 2023
Publisher IEEE
Publisher_xml – name: IEEE
SSID ssib060584064
SourceID ieee
SourceType Publisher
StartPage 1
SubjectTerms Costs
Design automation
Memory architecture
Memory management
Microprocessors
Perturbation methods
Scalability
Title PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search
URI https://ieeexplore.ieee.org/document/10247756
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider IEEE