Quantum Neural Network Compression

Published in: 2022 IEEE/ACM International Conference On Computer Aided Design (ICCAD), pp. 1-9
Main authors: Hu, Zhirui; Dong, Peiyan; Wang, Zhepeng; Lin, Youzuo; Wang, Yanzhi; Jiang, Weiwen
Medium: Conference paper
Language: English
Publisher: ACM, 29 Oct. 2022
ISSN: 1558-2434
Online access: Get full text
Abstract Model compression, such as pruning and quantization, has been widely applied to optimize neural networks on resource-limited classical devices. Recently, there has been growing interest in variational quantum circuits (VQC), a type of neural network on quantum computers (a.k.a. quantum neural networks). It is well known that near-term quantum devices have high noise and limited resources (i.e., quantum bits, qubits); yet, how to compress quantum neural networks has not been thoroughly studied. One might think it is straightforward to apply classical compression techniques to the quantum scenario. However, this paper reveals that there are differences between the compression of quantum and classical neural networks. Based on our observations, we claim that compilation/transpilation has to be involved in the compression process. On top of this, we propose the first systematic framework, namely CompVQC, to compress quantum neural networks (QNNs). The key component of CompVQC is a novel compression algorithm based on the alternating direction method of multipliers (ADMM) approach. Experiments demonstrate the advantage of CompVQC, reducing circuit depth (almost 2.5×) with a negligible accuracy drop (<1%), outperforming other competitors. Moreover, CompVQC improves the robustness of QNNs on near-term noisy quantum devices.
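The abstract's key algorithmic ingredient, ADMM, alternates a gradient step on the training loss with a projection onto a constraint set (for pruning, the set of k-sparse parameter vectors) plus a dual update. The toy sketch below shows that generic loop for a quadratic loss; it is not the authors' CompVQC code, and the hyperparameters (rho, lr, k) and the target vector are assumptions for illustration only.

```python
# Illustrative ADMM loop for magnitude pruning of a parameter vector.
# A toy sketch, NOT the CompVQC implementation from the paper: the
# quadratic loss, rho, lr, and sparsity level k are all assumptions.

target = [0.9, -0.1, 0.05, 1.3, -0.02, 0.7, 0.01, -1.1]  # toy optimum

def keep_top_k(v, k):
    """Project onto the set of k-sparse vectors (keep largest magnitudes)."""
    idx = set(sorted(range(len(v)), key=lambda i: abs(v[i]))[-k:])
    return [v[i] if i in idx else 0.0 for i in range(len(v))]

n = len(target)
w = [0.0] * n                 # trainable parameters
z = [0.0] * n                 # auxiliary sparse copy of w
u = [0.0] * n                 # scaled dual variable
rho, lr, k = 1.0, 0.1, 3

for _ in range(300):
    # primal step: gradient descent on 0.5*||w - target||^2 + rho/2*||w - z + u||^2
    w = [wi - lr * ((wi - ti) + rho * (wi - zi + ui))
         for wi, ti, zi, ui in zip(w, target, z, u)]
    z = keep_top_k([wi + ui for wi, ui in zip(w, u)], k)   # projection step
    u = [ui + wi - zi for ui, wi, zi in zip(u, w, z)]      # dual update

print(sum(1 for zi in z if zi != 0.0))   # k-sparse result: 3 nonzeros
```

In the paper's quantum setting, the projection step would also account for how pruned or quantized gate parameters transpile to the device's native gate set, which is the compilation-aware twist the abstract argues classical compression lacks.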
Authors and affiliations:
1. Hu, Zhirui (zhu2@gmu.edu), George Mason University, Electrical and Computer Engineering Department, Fairfax, Virginia, United States, 22030
2. Dong, Peiyan, Northeastern University, Department of Electrical and Computer Engineering, Boston, MA, United States, 02115
3. Wang, Zhepeng, George Mason University, Electrical and Computer Engineering Department, Fairfax, Virginia, United States, 22030
4. Lin, Youzuo, Los Alamos National Laboratory, Earth and Environmental Sciences Division, NM, United States, 87545
5. Wang, Yanzhi, Northeastern University, Department of Electrical and Computer Engineering, Boston, MA, United States, 02115
6. Jiang, Weiwen (wjiang8@gmu.edu), George Mason University, Electrical and Computer Engineering Department, Fairfax, Virginia, United States, 22030
ContentType Conference Proceeding
DOI 10.1145/3508352.3549382
Discipline Engineering
EISBN 9781450392174
1450392172
EISSN 1558-2434
EndPage 9
ExternalDocumentID 10069658
Genre orig-research
ISICitedReferencesCount 14
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed false
IsScholarly true
Language English
OpenAccessLink https://dl.acm.org/doi/pdf/10.1145/3508352.3549382
PageCount 9
PublicationDate 2022-Oct.-29
PublicationTitle 2022 IEEE/ACM International Conference On Computer Aided Design (ICCAD)
PublicationTitleAbbrev ICCAD
PublicationYear 2022
Publisher ACM
StartPage 1
SubjectTerms Machine learning
Neural network compression
Neural networks
Noise measurement
Quantization (signal)
Qubit
Robustness
URI https://ieeexplore.ieee.org/document/10069658