DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
| Published in: | IEEE journal of selected topics in signal processing, Volume 14, Issue 4, pp. 700–714 |
|---|---|
| Main authors: | Wiedemann, Simon; Kirchhoffer, Heiner; Matlage, Stefan; Haase, Paul; Marban, Arturo; Marinc, Talmaj; Neumann, David; Nguyen, Tung; Schwarz, Heiko; Wiegand, Thomas; Marpe, Detlev; Samek, Wojciech |
| Format: | Journal Article |
| Language: | English |
| Publication details: | New York: IEEE, 01.05.2020 (The Institute of Electrical and Electronics Engineers, Inc.) |
| ISSN: | 1932-4553 (print); 1941-0484 (online) |
| Online access: | Get full text |
| Abstract | In the past decade, deep neural networks (DNNs) have shown state-of-the-art performance on a wide range of complex machine learning tasks. Many of these results have been achieved while growing the size of DNNs, creating a demand for efficient compression and transmission of these networks. In this work we present DeepCABAC, a universal compression algorithm for DNNs that is based on applying the Context-based Adaptive Binary Arithmetic Coder (CABAC) to the DNN parameters. CABAC was originally designed for the H.264/AVC video coding standard and became the state of the art for the lossless entropy-coding stage of video compression. DeepCABAC applies a novel quantization scheme that minimizes a rate-distortion function while simultaneously taking the impact of quantization on the DNN's performance into account. Experimental results show that DeepCABAC consistently attains higher compression rates than previously proposed coding techniques for DNN compression. For instance, it compresses the VGG16 ImageNet model by a factor of 63.6 with no loss of accuracy, representing the entire network with merely 9 MB. The source code for encoding and decoding can be found at https://github.com/fraunhoferhhi/DeepCABAC . |
|---|---|
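The key technical idea in the abstract — quantizing weights by minimizing a rate-distortion cost rather than by plain nearest-level rounding — can be illustrated with a small sketch. This is a hypothetical toy, not the paper's actual algorithm: it uses a uniform quantization grid, squared error as the distortion proxy for accuracy impact, and an ideal-codelength rate estimate from a first-pass histogram in place of CABAC's adaptive context models. The function `rd_quantize` and its parameters are invented here for illustration.

```python
import numpy as np

def rd_quantize(weights, step, lam):
    """Assign each weight to the quantization level minimizing
    distortion + lam * rate. Rate per level is estimated as the ideal
    codelength -log2(p) from a first-pass nearest-level histogram
    (a crude stand-in for an adaptive arithmetic coder's bit cost)."""
    # Uniform grid guaranteed to cover the weight range.
    levels = np.arange(weights.min() // step, weights.max() // step + 2) * step
    # First pass: nearest-neighbor assignment to estimate level probabilities.
    nearest = np.argmin(np.abs(weights[:, None] - levels[None, :]), axis=1)
    counts = np.bincount(nearest, minlength=len(levels)) + 1  # Laplace smoothing
    bits = -np.log2(counts / counts.sum())  # ideal codelength per level
    # Second pass: rate-distortion-optimal assignment per weight.
    cost = (weights[:, None] - levels[None, :]) ** 2 + lam * bits[None, :]
    return levels[np.argmin(cost, axis=1)]

# Larger lam trades distortion for rate: fewer, cheaper levels get used.
w = np.random.default_rng(0).normal(0.0, 0.1, 1000)
for lam in (0.0, 1e-3, 1e-2):
    q = rd_quantize(w, step=0.02, lam=lam)
    print(lam, len(np.unique(q)))
```

With `lam = 0` this reduces to nearest-level rounding; as `lam` grows, weights migrate toward frequently used (cheaply codable) levels, shrinking the effective alphabet and hence the bitstream, at a controlled cost in distortion.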
| Author | Marban, Arturo; Neumann, David; Nguyen, Tung; Haase, Paul; Wiedemann, Simon; Matlage, Stefan; Marpe, Detlev; Samek, Wojciech; Schwarz, Heiko; Kirchhoffer, Heiner; Marinc, Talmaj; Wiegand, Thomas |
| Affiliation | Fraunhofer Heinrich Hertz Institute, Berlin, Germany (all twelve authors) |
| CODEN | IJSTGY |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
| DOI | 10.1109/JSTSP.2020.2969554 |
| Discipline | Engineering |
| Genre | Original research |
| Funding | Berlin Center for Machine Learning (grant 01IS18037I); German Ministry for Education through the Berlin Big Data Center (grant 01IS14013A) |
| Web of Science citation count | 75 |
| Status | Open access; peer reviewed; scholarly |
| License | Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/legalcode) |
| ORCID | 0000-0002-7136-0041 0000-0001-5144-3758 0000-0002-7429-4932 0000-0002-0273-4564 0000-0002-6283-3265 0000-0002-5391-3247 |
| OpenAccessLink | https://ieeexplore.ieee.org/document/8970294 |
| PublicationTitleAbbrev | JSTSP |
| SubjectTerms | Algorithms; arithmetic coding; Artificial neural networks; Coding; Coding standards; Cognitive tasks; Compression algorithms; Decoding; Deep learning; efficient representation; Machine learning; Measurement; neural network compression; Neural networks; Quantization (signal); rate-distortion quantization; Source code; Source coding; Task complexity; Video compression |
| Title | DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks |
| URI | https://ieeexplore.ieee.org/document/8970294 https://www.proquest.com/docview/2434127157 |
| Volume | 14 |