PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search

Detailed bibliography
Published in: 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1-6
Main authors: Ahmad, Afzal; Xie, Zhiyao; Zhang, Wei
Format: Conference paper
Language: English
Publication details: IEEE, 9 July 2023
Subjects: Costs; Design automation; Memory architecture; Memory management; Microprocessors; Perturbation methods; Scalability
Online access: Get full text
Abstract Differentiable Neural Architecture Search (NAS) relies on aggressive weight-sharing to reduce its search cost. This leads to GPU-memory bottlenecks that hamper the algorithm's scalability. To resolve these bottlenecks, we propose a perturbations-based evolutionary approach that significantly reduces the memory cost while largely maintaining the efficiency benefits of weight-sharing. Our approach makes minute changes to compact neural architectures and measures their impact on performance. In this way, it extracts high-quality motifs from the search space. We utilize these perturbations to perform NAS in compact models evolving over time to traverse the search space. Our method disentangles GPU-memory consumption from search space size, offering exceptional scalability to large search spaces. Results show competitive accuracy on multiple benchmarks, including CIFAR10, ImageNet2012, and NASBench-301. Specifically, our approach improves accuracy on ImageNet and NASBench-301 by 0.3% and 0.87%, respectively. Furthermore, the memory consumption of search is reduced by roughly 80% against state-of-the-art weight-shared differentiable NAS works while achieving a search time of only 6 GPU hours.
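The abstract describes the search at a high level: start from a compact architecture, apply minute perturbations, measure their impact on performance, and keep the changes (motifs) that help, so that only one compact model is ever instantiated rather than a full weight-sharing supernet. The following minimal Python sketch illustrates that general idea only; the operation set, the single-edge mutation, and the evaluate_architecture proxy are hypothetical placeholders and not the authors' implementation.

```python
# Hedged sketch of a perturbation-based evolutionary NAS loop.
# All names below (OPS, NUM_EDGES, evaluate_architecture) are illustrative
# assumptions, not the PertNAS code.
import random

# Candidate operations per cell edge (a typical DARTS-like operation set).
OPS = ["skip_connect", "sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3",
       "max_pool_3x3", "avg_pool_3x3", "none"]
NUM_EDGES = 14  # number of edges in a DARTS-style cell; assumption for illustration


def random_architecture():
    """Sample a compact architecture: one operation per edge."""
    return [random.choice(OPS) for _ in range(NUM_EDGES)]


def perturb(arch):
    """Apply a minute change: swap the operation on one randomly chosen edge."""
    child = list(arch)
    edge = random.randrange(NUM_EDGES)
    child[edge] = random.choice([op for op in OPS if op != child[edge]])
    return child


def evaluate_architecture(arch):
    """Placeholder proxy evaluation. In practice this would briefly train the
    compact model and return validation accuracy; here it returns a dummy
    score so the sketch runs standalone."""
    return sum(hash(op) % 100 for op in arch) / (100.0 * NUM_EDGES)


def perturbation_search(iterations=100):
    """Evolve a single compact model by accepting perturbations that help."""
    best = random_architecture()
    best_score = evaluate_architecture(best)
    for _ in range(iterations):
        child = perturb(best)                 # minute architectural change
        score = evaluate_architecture(child)  # measure its impact on performance
        if score > best_score:                # keep the motif if it improves the score
            best, best_score = child, score
    return best, best_score


if __name__ == "__main__":
    arch, score = perturbation_search()
    print("Best architecture:", arch)
    print("Proxy score:", round(score, 4))
```

Because the loop only ever holds one compact candidate in memory, GPU-memory consumption stays bounded regardless of how large the search space is, which is the scalability property the abstract claims.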
Author Zhang, Wei
Ahmad, Afzal
Xie, Zhiyao
Author_xml – sequence: 1
  givenname: Afzal
  surname: Ahmad
  fullname: Ahmad, Afzal
  email: afzal.ahmad@connect.ust.hk
  organization: The Hong Kong University of Science and Technology
– sequence: 2
  givenname: Zhiyao
  surname: Xie
  fullname: Xie, Zhiyao
  email: eezhiyao@ust.hk
  organization: The Hong Kong University of Science and Technology
– sequence: 3
  givenname: Wei
  surname: Zhang
  fullname: Zhang, Wei
  email: eeweiz@ust.hk
  organization: The Hong Kong University of Science and Technology
ContentType Conference Proceeding
DBID 6IE
6IH
CBEJK
RIE
RIO
DOI 10.1109/DAC56929.2023.10247756
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan (POP) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP) 1998-present
DatabaseTitleList
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
EISBN 9798350323481
EndPage 6
ExternalDocumentID 10247756
Genre orig-research
GroupedDBID 6IE
6IH
ACM
ALMA_UNASSIGNED_HOLDINGS
CBEJK
RIE
RIO
IEDL.DBID RIE
ISICitedReferencesCount 0
IngestDate Wed Aug 27 02:47:47 EDT 2025
IsPeerReviewed false
IsScholarly true
Language English
LinkModel DirectLink
PageCount 6
ParticipantIDs ieee_primary_10247756
PublicationCentury 2000
PublicationDate 2023-July-9
PublicationDateYYYYMMDD 2023-07-09
PublicationDate_xml – month: 07
  year: 2023
  text: 2023-July-9
  day: 09
PublicationDecade 2020
PublicationTitle 2023 60th ACM/IEEE Design Automation Conference (DAC)
PublicationTitleAbbrev DAC
PublicationYear 2023
Publisher IEEE
Publisher_xml – name: IEEE
SSID ssib060584064
Score 2.2252448
SourceID ieee
SourceType Publisher
StartPage 1
SubjectTerms Costs
Design automation
Memory architecture
Memory management
Microprocessors
Perturbation methods
Scalability
Title PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search
URI https://ieeexplore.ieee.org/document/10247756
WOSCitedRecordID wos001073487300085
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider IEEE