Tuning DNN Model Compression to Resource and Data Availability in Cooperative Training
| Published in: | IEEE/ACM Transactions on Networking, Volume 32, Issue 2, pp. 1-16 |
|---|---|
| Main Authors: | Francesco Malandrino, Giuseppe di Giacomo, Armin Karamzade, Marco Levorato, Carla Fabiana Chiasserini |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE, 1 April 2024 |
| Subjects: | Adaptation models; availability; complexity; computational modeling; costs; data compression; data models; distributed learning; dynamic programming; energy consumption; energy costs; graph representations; machine learning; model pruning; network latency; network support to machine learning; optimization; polynomials; task analysis; training |
| ISSN: | 1063-6692 (print); 1558-2566 (electronic) |
| DOI: | 10.1109/TNET.2023.3323023 |
| Online Access: | Open access: https://ieeexplore.ieee.org/document/10286454 |
| License: | CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0) |
| Abstract: | Model compression is a fundamental tool to execute machine learning (ML) tasks on the diverse set of devices populating current- and next-generation networks, thereby exploiting their resources and data. At the same time, how much and when to compress ML models are complex decisions, as they must jointly account for aspects such as the model being used, the resources (e.g., computational) and local datasets available at each node, and network latencies. In this work, we address the multi-dimensional problem of adapting the model compression, data selection, and node allocation decisions to each other: our objective is to perform the DNN training at the minimum energy cost, subject to learning quality and time constraints. To this end, we propose an algorithmic framework called PACT, combining a time-expanded graph representation of the training process, a dynamic programming solution strategy, and a data-driven approach to estimating the loss evolution. We prove that PACT's complexity is polynomial, and its decisions can get arbitrarily close to the optimum. Through our numerical evaluation, we further show that PACT consistently outperforms state-of-the-art alternatives and closely matches the optimal energy consumption. |
|---|---|
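The abstract describes PACT as a dynamic program over a time-expanded graph of the training process, choosing compression, data, and node allocations to minimize energy under time and learning-quality constraints. Only the abstract is available in this record, so the following is a minimal, hypothetical sketch of that idea: the state encoding, the toy cost/latency/loss models (`energy`, `latency`, `loss_drop`), and all numbers are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of a dynamic program over a time-expanded
# graph, in the spirit of PACT as summarized in the abstract above.
# State encoding, cost models, and all numbers are illustrative assumptions.
from itertools import product

EPOCHS = 3                        # layers of the time-expanded graph
COMPRESSION = [1.0, 0.5, 0.25]    # candidate model-compression factors
NODES = ["edge", "server"]        # candidate training nodes per epoch
DEADLINE, TARGET_DROP = 9.0, 0.5  # time budget and required loss improvement

def energy(node, factor):         # toy per-epoch energy cost
    return {"edge": 1.0, "server": 3.0}[node] * factor

def latency(node, factor):        # toy per-epoch wall-clock time
    return {"edge": 4.0, "server": 1.0}[node] * factor

def loss_drop(node, factor):      # toy data-driven loss-improvement estimate
    return 0.3 * factor           # stronger compression learns more slowly

# DP state: (epoch, time used, loss drop so far) -> (min energy, decisions).
best = {(0, 0.0, 0.0): (0.0, [])}
for epoch in range(EPOCHS):
    frontier = {k: v for k, v in best.items() if k[0] == epoch}
    for (e, t, d), (cost, plan) in frontier.items():
        for node, factor in product(NODES, COMPRESSION):
            t2 = t + latency(node, factor)
            if t2 > DEADLINE:
                continue          # prune states violating the time constraint
            key = (epoch + 1, round(t2, 2),
                   round(d + loss_drop(node, factor), 3))
            cand = (cost + energy(node, factor), plan + [(node, factor)])
            if key not in best or cand[0] < best[key][0]:
                best[key] = cand

# Among final states meeting the learning-quality target, pick the cheapest.
finals = [(c, d, p) for (e, _, d), (c, p) in best.items()
          if e == EPOCHS and d >= TARGET_DROP]
cost, drop, plan = min(finals, key=lambda x: x[0])
print(f"energy={cost:.2f}  loss drop={drop:.2f}  plan={plan}")
```

The layered structure mirrors a time-expanded graph: each DP transition is an edge corresponding to running one training epoch on a given node at a given compression level, and pruning infeasible arrival times keeps the discretized state space polynomial, consistent with the polynomial-complexity claim in the abstract.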
| Authors: | Francesco Malandrino (CNR-IEIIT and CNIT, Turin, Italy); Giuseppe di Giacomo (Politecnico di Torino, Turin, Italy; ORCID 0000-0002-9990-512X); Armin Karamzade (Computer Science Department, University of California at Irvine, Irvine, CA, USA); Marco Levorato (Computer Science Department, University of California at Irvine, Irvine, CA, USA; ORCID 0000-0002-6920-4189); Carla Fabiana Chiasserini (CNR-IEIIT and CNIT, Turin, Italy; ORCID 0000-0003-1410-660X) |
|---|---|
| Copyright: | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
|---|---|
| Funding: | European Union's NextGenerationEU Instrument; enlarged partnership "Telecommunications of the Future" (PE00000001), Program "RESTART"; Horizon Europe Project CENTRIC (101096379); NSF (CNS 2134567, CNS 2003237) |
|---|---|
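As a footnote to the abstract's mention of a "data-driven approach to the estimation of the loss evolution": one plausible realization, assumed here rather than confirmed by this record, is fitting a parametric curve to observed loss samples and extrapolating it inside the optimizer. The power-law form and the data below are hypothetical.

```python
# Hypothetical sketch: fit a power-law loss curve L(t) = a * t^(-b) + c to
# observed per-epoch losses, then extrapolate. Form and data are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def loss_curve(t, a, b, c):
    return a * np.power(t, -b) + c

epochs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
observed = np.array([2.10, 1.52, 1.27, 1.12, 1.03])  # toy loss measurements

(a, b, c), _ = curve_fit(loss_curve, epochs, observed, p0=(2.0, 0.5, 0.5))
print(f"predicted loss at epoch 20: {loss_curve(20.0, a, b, c):.3f}")
```

A fitted curve of this kind would supply the per-edge loss-improvement estimates (the `loss_drop` values) consumed by a dynamic program like the sketch after the abstract.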