Slashing Communication Traffic in Federated Learning by Transmitting Clustered Model Updates

Detailed Bibliography
Published in: IEEE Journal on Selected Areas in Communications, Volume 39, Issue 8, pp. 2572–2589
Main authors: Cui, Laizhong; Su, Xiaoxin; Zhou, Yipeng; Pan, Yi
Format: Journal Article
Language: English
Publication details: New York: IEEE, 01.08.2021
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 0733-8716, 1558-0008
Abstract Federated Learning (FL) is an emerging decentralized learning framework through which multiple clients can collaboratively train a learning model. However, a major obstacle that impedes the wide deployment of FL lies in massive communication traffic. To train high-dimensional machine learning models (such as CNN models), heavy communication traffic can be incurred by exchanging model updates via the Internet between clients and the parameter server (PS), implying that network resources can easily be exhausted. Compressing model updates is an effective way to reduce the traffic volume. However, a flexible unbiased compression algorithm applicable to both uplink and downlink compression in FL is still absent from existing works. In this work, we devise the Model Update Compression by Soft Clustering (MUCSC) algorithm to compress model updates transmitted between clients and the PS. In MUCSC, it is only necessary to transmit the cluster centroids and the cluster ID of each model update. Moreover, we prove that: 1) the compressed model updates are unbiased estimates of their original values, so the convergence rate achieved by transmitting compressed model updates is unchanged; 2) MUCSC guarantees that the influence of the compression error on the model accuracy is minimized. We then propose the boosted MUCSC (B-MUCSC) algorithm, a biased compression algorithm that can achieve an extremely high compression rate by grouping insignificant model updates into a super cluster. B-MUCSC is suitable for scenarios with very scarce network resources. Finally, we conduct extensive experiments with the CIFAR-10 and FEMNIST datasets to demonstrate that our algorithms not only substantially reduce the volume of communication traffic in FL, but also improve training efficiency in practical networks.
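
To make the scheme in the abstract concrete, below is a minimal Python/NumPy sketch of clustered model-update compression. It is an illustration under assumptions, not the paper's exact MUCSC algorithm: the function names, the quantile-based centroid initialization, and the choice of k are all invented here. What it does reproduce is the property the abstract states: only k centroids plus one small cluster ID per update are transmitted, and each value is assigned stochastically to one of its two nearest centroids with probabilities chosen so that the expected reconstruction equals the original value (unbiasedness).

```python
import numpy as np

def compress_updates(updates, k=8, iters=10, rng=None):
    """Compress a flat vector of model updates into (centroids, cluster IDs).

    Sketch of soft-clustering compression: only the k float centroids and
    one small integer ID per update need to be sent over the network.
    """
    rng = np.random.default_rng() if rng is None else rng
    # 1-D k-means over the update values; quantile initialization is an
    # assumption, the paper's centroid computation may differ.
    centroids = np.quantile(updates, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        hard = np.abs(updates[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(hard == j):
                centroids[j] = updates[hard == j].mean()
    centroids = np.sort(centroids)
    # Stochastic ("soft") assignment between the two enclosing centroids:
    # for c_lo <= v <= c_hi, pick c_hi with probability (v - c_lo)/(c_hi - c_lo),
    # so E[reconstruction] = v. Values outside the centroid range are clamped.
    hi = np.clip(np.searchsorted(centroids, updates), 1, k - 1)
    lo = hi - 1
    c_lo, c_hi = centroids[lo], centroids[hi]
    gap = np.maximum(c_hi - c_lo, 1e-12)
    p_hi = (np.clip(updates, c_lo, c_hi) - c_lo) / gap
    ids = np.where(rng.random(updates.shape) < p_hi, hi, lo)
    return centroids, ids.astype(np.uint8)  # k <= 256 -> 1 byte per ID

def decompress_updates(centroids, ids):
    """Reconstruct the (unbiased) compressed updates from centroids and IDs."""
    return centroids[ids]

# Round-trip example: 10k simulated gradient entries, 8 clusters.
g = (0.01 * np.random.default_rng(0).standard_normal(10_000)).astype(np.float32)
c, ids = compress_updates(g, k=8)
g_hat = decompress_updates(c, ids)
```

With k clusters, each update costs only ceil(log2 k) bits for its ID (e.g., 3 bits for k = 8) instead of 32 bits for a float, plus a negligible one-off cost for the k centroids. The biased B-MUCSC variant described in the abstract would go further by collapsing the many near-zero ("insignificant") updates into a single super cluster, so most entries encode even more cheaply at the cost of some bias.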
Author_xml – sequence: 1
  givenname: Laizhong
  orcidid: 0000-0003-1991-290X
  surname: Cui
  fullname: Cui, Laizhong
  email: cuilz@szu.edu.cn
  organization: College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
– sequence: 2
  givenname: Xiaoxin
  orcidid: 0000-0001-9514-1102
  surname: Su
  fullname: Su, Xiaoxin
  email: suxiaoxin2016@163.com
  organization: College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
– sequence: 3
  givenname: Yipeng
  orcidid: 0000-0003-1533-0865
  surname: Zhou
  fullname: Zhou, Yipeng
  email: yipeng.zhou@mq.edu.au
  organization: Department of Computing, Faculty of Science and Engineering, Macquarie University, Sydney, NSW, Australia
– sequence: 4
  givenname: Yi
  orcidid: 0000-0002-2766-3096
  surname: Pan
  fullname: Pan, Yi
  email: yi.pan@siat.ac.cn
  organization: Faculty of Computer Science and Control Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
CODEN ISACEM
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DOI 10.1109/JSAC.2021.3087262
Discipline Engineering
EISSN 1558-0008
EndPage 2589
Genre orig-research
GrantInformation_xml – fundername: Shenzhen Science and Technology Program
  grantid: RCYX20200714114645048; JCYJ20190808142207420; GJHZ20190822095416463
– fundername: National Key Research and Development Plan of China
  grantid: 2018YFB1800302; 2018YFB1800805
  funderid: 10.13039/501100012166
– fundername: Project of "FANet: PCL Future Greater-Bay Area Network Facilities for Large-scale Experiments and Applications"
  grantid: LZC0019
– fundername: Australian Research Council
  grantid: DE180100950
  funderid: 10.13039/501100000923
– fundername: National Natural Science Foundation of China
  grantid: 61772345
  funderid: 10.13039/501100001809
– fundername: Pearl River Young Scholars funding of Shenzhen University
  funderid: 10.13039/501100009019
ISSN 0733-8716
IsPeerReviewed true
IsScholarly true
Issue 8
Language English
PageCount 18
PublicationDate 2021-08-01
PublicationPlace New York
PublicationTitle IEEE journal on selected areas in communications
PublicationTitleAbbrev J-SAC
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 2572
SubjectTerms Adaptation models
Algorithms
Centroids
Clients
Clustering
Communication
Communications traffic
Compressing
Compression algorithms
Computational modeling
Convergence
convergence rate
Data models
Federated learning
Machine learning
Model accuracy
model update compression
Traffic models
Training
Transmission
Uplink
Title Slashing Communication Traffic in Federated Learning by Transmitting Clustered Model Updates
URI https://ieeexplore.ieee.org/document/9448151
https://www.proquest.com/docview/2552160056
Volume 39