CMFL: Mitigating Communication Overhead for Federated Learning

Detailed bibliography
Published in: Proceedings of the International Conference on Distributed Computing Systems (ICDCS), pp. 954-964 (11 pages)
Main authors: Luping WANG, Wei WANG, Bo LI (all Hong Kong University of Science and Technology)
Format: Conference paper
Language: English
Published: IEEE, 01.07.2019
Subjects: Communication efficiency; Computational modeling; Convergence; Data models; Federated Learning; Network; Optimization; Servers; Training; Training data
Discipline: Computer Science
EISSN: 2575-8411
EISBN: 9781728125190
CODEN: IEEPAD
DOI: 10.1109/ICDCS.2019.00099
Online access: https://ieeexplore.ieee.org/document/8885054

Abstract: Federated Learning enables mobile users to collaboratively learn a global prediction model by aggregating their individual updates without sharing their privacy-sensitive data. As mobile devices usually have limited data plans and slow network connections to the central server where the global model is maintained, mitigating the communication overhead is of paramount importance. While existing works mainly focus on reducing the total bits transferred in each update via data compression, we study an orthogonal approach that identifies irrelevant updates made by clients and precludes them from being uploaded, reducing the network footprint. Following this idea, we propose Communication-Mitigated Federated Learning (CMFL). CMFL provides clients with feedback on the global tendency of model updating. Each client checks whether its update aligns with this global tendency and is relevant enough to model improvement. By not uploading irrelevant updates to the server, CMFL substantially reduces the communication overhead while still guaranteeing learning convergence. CMFL achieves a general improvement in communication efficiency for almost all existing federated learning schemes. We evaluate CMFL through extensive simulations and EC2 emulations. Compared with vanilla Federated Learning, CMFL yields a 13.97x gain in communication efficiency, measured as the reduction in network footprint. When applied to Federated Multi-Task Learning, CMFL improves communication efficiency by 5.7x with 4% higher prediction accuracy.
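
The client-side relevance check described in the abstract can be sketched in a few lines. Below is a minimal illustration, assuming alignment with the global tendency is measured as the fraction of parameters whose update sign matches the global update fed back by the server; the threshold value, function names, and NumPy representation are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

# Assumed threshold: an update must agree with the global tendency on at
# least this fraction of parameters to be considered relevant and uploaded.
RELEVANCE_THRESHOLD = 0.8  # illustrative value, tuned empirically in practice

def sign_agreement(local_update: np.ndarray, global_update: np.ndarray) -> float:
    """Fraction of parameters whose update direction matches the
    global tendency the server fed back to the client."""
    return float(np.mean(np.sign(local_update) == np.sign(global_update)))

def should_upload(local_update: np.ndarray, global_update: np.ndarray) -> bool:
    # An update that largely disagrees with the global tendency is deemed
    # irrelevant and withheld, saving uplink traffic while the remaining
    # (relevant) updates still drive the global model toward convergence.
    return sign_agreement(local_update, global_update) >= RELEVANCE_THRESHOLD
```

In a round, each selected client would compute its local update as usual, call should_upload against the cached global update from the previous round, and skip the transmission entirely when the check fails, which is where the reported reduction in network footprint comes from.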