PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search


Published in: 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1-6
Main authors: Ahmad, Afzal; Xie, Zhiyao; Zhang, Wei
Format: Conference paper
Language: English
Published: IEEE, 09.07.2023
Abstract Differentiable Neural Architecture Search (NAS) relies on aggressive weight-sharing to reduce its search cost. This leads to GPU-memory bottlenecks that hamper the algorithm's scalability. To resolve these bottlenecks, we propose a perturbations-based evolutionary approach that significantly reduces the memory cost while largely maintaining the efficiency benefits of weight-sharing. Our approach makes minute changes to compact neural architectures and measures their impact on performance. In this way, it extracts high-quality motifs from the search space. We utilize these perturbations to perform NAS in compact models evolving over time to traverse the search space. Our method disentangles GPU-memory consumption from search space size, offering exceptional scalability to large search spaces. Results show competitive accuracy on multiple benchmarks, including CIFAR10, ImageNet2012, and NASBench-301. Specifically, our approach improves accuracy on ImageNet and NASBench-301 by 0.3% and 0.87%, respectively. Furthermore, the memory consumption of search is reduced by roughly 80% against state-of-the-art weight-shared differentiable NAS works while achieving a search time of only 6 GPU hours.
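The search procedure the abstract describes (make a minute change to a compact architecture, measure its impact, keep it if it helps, and let the model evolve over time) can be sketched as a simple hill-climbing loop. This is a toy illustration, not the authors' implementation: the operation set and `proxy_score` below are placeholders for the weight-sharing evaluation PertNAS actually performs, and the point is only that a single compact model is held in memory at a time, independent of search-space size.

```python
import random

# Toy operation set (assumption; real NAS cells use a richer set).
OPS = ["conv3x3", "conv5x5", "skip", "maxpool"]

def proxy_score(arch):
    # Stand-in for the measured performance impact of an architecture.
    # In PertNAS this would come from evaluating the perturbed compact
    # model; here it is a synthetic preference for conv ops, purely
    # so the loop has something to optimize.
    return sum(1.0 if op.startswith("conv") else 0.5 for op in arch)

def perturb(arch, rng):
    # Minute change: swap the operation on one randomly chosen edge.
    child = list(arch)
    edge = rng.randrange(len(child))
    child[edge] = rng.choice([op for op in OPS if op != child[edge]])
    return child

def evolve(num_edges=8, generations=50, seed=0):
    rng = random.Random(seed)
    # Only this one compact model lives in memory, unlike a weight-shared
    # supernet whose size grows with the search space.
    arch = [rng.choice(OPS) for _ in range(num_edges)]
    best_score, best_arch = proxy_score(arch), arch
    for _ in range(generations):
        child = perturb(arch, rng)
        # Keep the perturbation when it does not hurt the score, so
        # high-quality motifs accumulate in the evolving model.
        if proxy_score(child) >= proxy_score(arch):
            arch = child
        if proxy_score(arch) > best_score:
            best_score, best_arch = proxy_score(arch), arch
    return best_score, best_arch

score, arch = evolve()
```

Because each step evaluates one perturbed compact model rather than a supernet containing every candidate, memory cost stays constant as the search space grows, which is the scalability property the abstract claims.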
Author Zhang, Wei
Ahmad, Afzal
Xie, Zhiyao
Author_xml – sequence: 1
  givenname: Afzal
  surname: Ahmad
  fullname: Ahmad, Afzal
  email: afzal.ahmad@connect.ust.hk
  organization: The Hong Kong University of Science and Technology
– sequence: 2
  givenname: Zhiyao
  surname: Xie
  fullname: Xie, Zhiyao
  email: eezhiyao@ust.hk
  organization: The Hong Kong University of Science and Technology
– sequence: 3
  givenname: Wei
  surname: Zhang
  fullname: Zhang, Wei
  email: eeweiz@ust.hk
  organization: The Hong Kong University of Science and Technology
ContentType Conference Proceeding
DOI 10.1109/DAC56929.2023.10247756
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan (POP) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP) 1998-present
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
EISBN 9798350323481
EndPage 6
ExternalDocumentID 10247756
Genre orig-research
IngestDate Wed Aug 27 02:47:47 EDT 2025
IsPeerReviewed false
IsScholarly true
Language English
LinkModel DirectLink
PageCount 6
ParticipantIDs ieee_primary_10247756
PublicationCentury 2000
PublicationDate 2023-July-9
PublicationDateYYYYMMDD 2023-07-09
PublicationDate_xml – month: 07
  year: 2023
  text: 2023-July-9
  day: 09
PublicationDecade 2020
PublicationTitle 2023 60th ACM/IEEE Design Automation Conference (DAC)
PublicationTitleAbbrev DAC
PublicationYear 2023
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 1
SubjectTerms Costs
Design automation
Memory architecture
Memory management
Microprocessors
Perturbation methods
Scalability
Title PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search
URI https://ieeexplore.ieee.org/document/10247756