Optimal Power Control for Over-the-Air Federated Learning with Gradient Compression


Detailed Description

Saved in:
Bibliographic Details
Published in: Proceedings - International Conference on Parallel and Distributed Systems, pp. 326-333
Main Authors: Ruan, Mengzhe; Li, Yunhe; Zhang, Weizhou; Song, Linqi; Xu, Weitao
Format: Conference Proceeding
Language: English
Published: IEEE, 10.10.2024
Subjects:
ISSN: 2690-5965
Online Access: Full text
Abstract Federated Learning (FL) has emerged as a transformative approach in distributed machine learning, enabling the collaborative training of models using decentralized datasets from diverse sources such as mobile edge devices. This paradigm not only enhances data privacy but also significantly reduces the communication burden typically associated with centralized data aggregation. In wireless networks, Over-the-air Federated Learning (OTA-FL) has been developed as a communication-efficient solution, allowing for the simultaneous transmission and aggregation of model updates from numerous edge devices across the available bandwidth. Gradient compression techniques are necessary to further enhance the communication efficiency of FL, particularly in bandwidth-constrained wireless environments. Despite these advancements, OTA-FL with gradient compression encounters substantial challenges, including learning performance degradation due to compression errors, non-uniform channel fading, and noise interference. Existing power control strategies have yet to fully address these issues, leaving a significant gap in optimizing OTA-FL performance under gradient compression. This paper introduces a novel power control strategy that jointly integrates gradient compression to optimize OTA-FL performance by minimizing the impact of channel fading and noise. Our approach employs linear approximations to complex terms, ensuring the stability and effectiveness of each gradient descent iteration. Numerical results demonstrate that our strategy significantly enhances convergence rates compared to traditional methods like channel inversion and uniform power transmission. This research advances the OTA-FL field and opens new avenues for performance tuning in communication-efficient federated learning systems.
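The setting the abstract describes — compressed gradients aggregated over a fading multiple-access channel under per-device power control — can be sketched in a minimal simulation. This is an illustrative assumption, not the paper's proposed strategy: the top-k compressor, the truncated channel-inversion power rule (which the abstract names only as a baseline the proposed method outperforms), the function names `top_k`/`ota_round`, and all numeric parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k(grad, k):
    """Top-k sparsification: keep only the k largest-magnitude entries.
    A common gradient-compression scheme; the paper's exact compressor
    is not specified in this record."""
    out = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    out[idx] = grad[idx]
    return out

def ota_round(grads, h, sigma, p_max, g_th=0.2):
    """One over-the-air aggregation round with truncated channel inversion
    (a baseline power-control rule, not the paper's optimized strategy).
    Devices whose channel gain falls below g_th stay silent; the rest
    invert their channel, capped by the power budget p_max."""
    d = grads.shape[1]
    rx = np.zeros(d)
    active = 0
    for g, hi in zip(grads, h):
        if abs(hi) < g_th:
            continue  # truncation: deep-fade device skips this round
        b = min(1.0 / hi, np.sqrt(p_max))  # inversion coefficient, power-limited
        rx += hi * b * g                   # signals superpose on the air
        active += 1
    rx += sigma * rng.standard_normal(d)   # additive receiver noise
    return rx / max(active, 1)             # server's estimate of the mean gradient

# Toy example: 10 devices, 100-dim gradients, 10% sparsification.
grads = rng.standard_normal((10, 100))
compressed = np.array([top_k(g, 10) for g in grads])
h = np.abs(rng.standard_normal(10)) + 0.1  # real-valued fading gains (assumption)
est = ota_round(compressed, h, sigma=0.05, p_max=4.0)
```

The cap on the inversion coefficient is what makes plain channel inversion lossy: deeply faded devices either drop out or transmit a biased contribution, which is the non-uniform-fading degradation the paper's power-control strategy is designed to mitigate.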
Author Li, Yunhe
Zhang, Weizhou
Song, Linqi
Ruan, Mengzhe
Xu, Weitao
Author_xml – sequence: 1
  givenname: Mengzhe
  surname: Ruan
  fullname: Ruan, Mengzhe
  email: cs.mzr@my.cityu.edu.hk
  organization: City University of Hong Kong Shenzhen Research Institute
– sequence: 2
  givenname: Yunhe
  surname: Li
  fullname: Li, Yunhe
  email: yunheli4-c@my.cityu.edu.hk
  organization: City University of Hong Kong Shenzhen Research Institute
– sequence: 3
  givenname: Weizhou
  surname: Zhang
  fullname: Zhang, Weizhou
  email: weizhouz@u.nus.edu
  organization: Institute of Operations Research and Analytics, National University of Singapore
– sequence: 4
  givenname: Linqi
  surname: Song
  fullname: Song, Linqi
  email: linqi.song@cityu.edu.hk
  organization: City University of Hong Kong Shenzhen Research Institute
– sequence: 5
  givenname: Weitao
  surname: Xu
  fullname: Xu, Weitao
  email: weitaoxu@cityu.edu.hk
  organization: City University of Hong Kong Shenzhen Research Institute
BookMark eNotj11LwzAYRqMouM39A5H8gc43SfN1Waqbg8IG0-vRNW9dZEtLGhz--xX06nluzoEzJXehC0jIM4MFY2Bf1uW2eN0pISQsOPB8AQASbsjcamuEYJJJq9QtmXBlIRu_fCDTYfgG4DAyE7Lb9Mmf6xPddheMtOxCit2Jtl2kmx-MWTpiVvhIl-gw1gkdrbCOwYcvevHpSFexdh5DGslzH3EYfBceyX1bnwac_--MfC7fPsr3rNqs1mVRZZ5plTJUaB2iaG0LKIHlSiuXC2VMw407NGOUbUXDhGSqcfYAmhkttDNa5-zgQMzI05_XI-K-j2NH_N0z0EpozsUV4ptSzA
CODEN IEEPAD
ContentType Conference Proceeding
DOI 10.1109/ICPADS63350.2024.00050
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan All Online (POP All Online) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP All) 1998-Present
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISBN 9798331515966
EISSN 2690-5965
EndPage 333
ExternalDocumentID 10763722
Genre orig-research
GrantInformation_xml – fundername: Innovation and Technology Commission
  funderid: 10.13039/501100003452
– fundername: National Natural Science Foundation of China
  funderid: 10.13039/501100001809
ISICitedReferencesCount 1
IsPeerReviewed false
IsScholarly true
Language English
LinkModel DirectLink
PageCount 8
ParticipantIDs ieee_primary_10763722
PublicationCentury 2000
PublicationDate 2024-Oct.-10
PublicationDateYYYYMMDD 2024-10-10
PublicationDate_xml – month: 10
  year: 2024
  text: 2024-Oct.-10
  day: 10
PublicationDecade 2020
PublicationTitle Proceedings - International Conference on Parallel and Distributed Systems
PublicationTitleAbbrev ICPADS
PublicationYear 2024
Publisher IEEE
Publisher_xml – name: IEEE
SSID ssj0020350
Snippet Federated Learning (FL) has emerged as a transformative approach in distributed machine learning, enabling the collaborative training of models using...
SourceID ieee
SourceType Publisher
StartPage 326
SubjectTerms Fading channels
Federated learning
Gradient Compression
Linear approximation
Noise
Numerical stability
Over-the-Air Computation
Power control
Power transmission
Stability analysis
Tuning
Wireless networks
Title Optimal Power Control for Over-the-Air Federated Learning with Gradient Compression
URI https://ieeexplore.ieee.org/document/10763722