PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search

Published in: 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1-6
Main authors: Ahmad, Afzal; Xie, Zhiyao; Zhang, Wei
Format: Conference paper
Language: English
Published: IEEE, 9 July 2023
Subjects: Costs; Design automation; Memory architecture; Memory management; Microprocessors; Perturbation methods; Scalability
Online access: https://ieeexplore.ieee.org/document/10247756
Abstract Differentiable Neural Architecture Search (NAS) relies on aggressive weight-sharing to reduce its search cost. This leads to GPU-memory bottlenecks that hamper the algorithm's scalability. To resolve these bottlenecks, we propose a perturbations-based evolutionary approach that significantly reduces the memory cost while largely maintaining the efficiency benefits of weight-sharing. Our approach makes minute changes to compact neural architectures and measures their impact on performance. In this way, it extracts high-quality motifs from the search space. We utilize these perturbations to perform NAS in compact models evolving over time to traverse the search space. Our method disentangles GPU-memory consumption from search space size, offering exceptional scalability to large search spaces. Results show competitive accuracy on multiple benchmarks, including CIFAR10, ImageNet2012, and NASBench-301. Specifically, our approach improves accuracy on ImageNet and NASBench-301 by 0.3% and 0.87%, respectively. Furthermore, the memory consumption of search is reduced by roughly 80% against state-of-the-art weight-shared differentiable NAS works while achieving a search time of only 6 GPU hours.
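The abstract only sketches the search procedure, so below is a minimal, hypothetical Python illustration of perturbation-based operation scoring on a compact, DARTS-style cell. The edge/operation encoding, the OPS list, and the evaluate() stub are assumptions made for exposition; they are not the paper's implementation, which measures how small changes to compact architectures affect actual model performance.

```python
# Hypothetical sketch of perturbation-based scoring and greedy evolution,
# loosely following the abstract's description. Names (OPS, evaluate, evolve)
# and the toy evaluate() stub are assumptions, not the authors' code.
import random

# A compact architecture is modeled as a mapping: edge name -> chosen operation.
OPS = ["skip_connect", "sep_conv_3x3", "sep_conv_5x5", "max_pool_3x3"]


def evaluate(arch):
    """Placeholder for the validation accuracy of a trained compact model.
    In the real method this would come from training and evaluating the
    compact architecture, not from a random stub."""
    random.seed(hash(frozenset(arch.items())) & 0xFFFF)
    return random.random()


def perturbation_scores(arch):
    """Score each (edge, op -> alt) swap by how much it changes performance.

    A large performance drop when an operation is replaced suggests that
    operation is a high-quality motif worth keeping; a negative drop means
    the swap actually helps."""
    base = evaluate(arch)
    scores = {}
    for edge, op in arch.items():
        for alt in OPS:
            if alt == op:
                continue
            perturbed = {**arch, edge: alt}
            scores[(edge, op, alt)] = base - evaluate(perturbed)
    return scores


def evolve(arch, generations=10):
    """Greedy evolution: repeatedly accept the single swap that helps most."""
    for _ in range(generations):
        scores = perturbation_scores(arch)
        (edge, _op, alt), delta = min(scores.items(), key=lambda kv: kv[1])
        if delta < 0:  # the perturbed architecture beat the current one
            arch = {**arch, edge: alt}
    return arch


if __name__ == "__main__":
    random.seed(0)
    seed_arch = {f"edge_{i}": random.choice(OPS) for i in range(4)}
    print(evolve(seed_arch))
```

Because each step of such a scheme evaluates only small compact models rather than a full weight-shared supernetwork, its memory use would depend on the compact model size rather than the search space size, which matches the scalability claim in the abstract.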
Author Ahmad, Afzal; Xie, Zhiyao; Zhang, Wei
Author_xml – sequence: 1; fullname: Ahmad, Afzal; email: afzal.ahmad@connect.ust.hk; organization: The Hong Kong University of Science and Technology
– sequence: 2; fullname: Xie, Zhiyao; email: eezhiyao@ust.hk; organization: The Hong Kong University of Science and Technology
– sequence: 3; fullname: Zhang, Wei; email: eeweiz@ust.hk; organization: The Hong Kong University of Science and Technology
ContentType Conference Proceeding
DOI 10.1109/DAC56929.2023.10247756
EISBN 9798350323481
EndPage 6
ExternalDocumentID 10247756
Genre orig-research
Language English
PageCount 6
PublicationDate 2023-July-9
PublicationTitle 2023 60th ACM/IEEE Design Automation Conference (DAC)
PublicationTitleAbbrev DAC
PublicationYear 2023
Publisher IEEE
StartPage 1
SubjectTerms Costs
Design automation
Memory architecture
Memory management
Microprocessors
Perturbation methods
Scalability
Title PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search
URI https://ieeexplore.ieee.org/document/10247756