FedPD: A Federated Learning Framework With Adaptivity to Non-IID Data


Bibliographic Details
Published in: IEEE Transactions on Signal Processing, Vol. 69, pp. 6055-6070
Main Authors: Zhang, Xinwei, Hong, Mingyi, Dhople, Sairaj, Yin, Wotao, Liu, Yang
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
ISSN: 1053-587X (print), 1941-0476 (online)
Abstract: Federated Learning (FL) is popular for communication-efficient learning from distributed data. To utilize data at different clients without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a computation then aggregation model, in which multiple local updates are performed using local data before aggregation. These algorithms fail to work when faced with practical challenges, e.g., the local data being non-identically independently distributed. In this paper, we first characterize the behavior of the FedAvg algorithm, and show that without strong and unrealistic assumptions on the problem structure, it can behave erratically. Aiming at designing FL algorithms that are provably fast and require as few assumptions as possible, we propose a new algorithm design strategy from the primal-dual optimization perspective. Our strategy yields algorithms that can deal with non-convex objective functions, achieves the best possible optimization and communication complexity (in a well-defined sense), and accommodates full-batch and mini-batch local computation models. Importantly, the proposed algorithms are communication efficient, in that the communication effort can be reduced when the level of heterogeneity among the local data also reduces. In the extreme case where the local data becomes homogeneous, only $\mathcal{O}(1)$ communication is required among the agents. To the best of our knowledge, this is the first algorithmic framework for FL that achieves all the above properties.
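The computation-then-aggregation pattern the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's experiments: the quadratic client losses, step sizes, and round counts are all made up for the sketch.

```python
import numpy as np

def local_updates(x, grad_fn, steps=5, lr=0.1):
    """Client-side computation: several gradient steps on local data."""
    for _ in range(steps):
        x = x - lr * grad_fn(x)
    return x

def fedavg_round(x_global, client_grads):
    """One FedAvg round: local updates at every client, then server averaging."""
    client_models = [local_updates(x_global.copy(), g) for g in client_grads]
    return np.mean(client_models, axis=0)

# Toy non-IID setup: two clients with quadratic losses 0.5*(x - c_i)^2
# whose local minimizers c_i = +1 and c_i = -1 disagree.
client_grads = [lambda x: x - 1.0, lambda x: x + 1.0]
x = np.array([5.0])
for _ in range(50):
    x = fedavg_round(x, client_grads)
# For these symmetric quadratics the averaged model contracts toward the
# midpoint 0 of the two local optima each round.
```

In less symmetric settings (different curvatures or step counts per client), the fixed point of this averaged map can drift away from the true joint optimum, which is the non-IID failure mode the paper analyzes.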
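For contrast, a primal-dual round in the spirit of the strategy the abstract mentions keeps a dual variable per client. The following is a generic augmented-Lagrangian sketch, not the paper's exact algorithm: the update order, the penalty parameter eta, the inexact inner solver, and the toy quadratic losses are all assumptions made for illustration.

```python
import numpy as np

def primal_dual_round(x0, client_grads, duals, eta=0.5, inner_steps=50, lr=0.1):
    """One primal-dual round: each client approximately minimizes its
    augmented Lagrangian f_i(x) + <lam_i, x - x0> + ||x - x0||^2 / (2*eta),
    takes a dual ascent step, and reports x_i + eta*lam_i for averaging."""
    reports = []
    for grad_fn, lam in zip(client_grads, duals):
        x = x0.copy()
        for _ in range(inner_steps):              # inexact local solve
            x = x - lr * (grad_fn(x) + lam + (x - x0) / eta)
        lam += (x - x0) / eta                     # dual ascent (in-place update)
        reports.append(x + eta * lam)
    return np.mean(reports, axis=0)

# Toy non-IID clients: quadratic losses 0.5*(x - c_i)^2 with minimizers +1, -1.
client_grads = [lambda x: x - 1.0, lambda x: x + 1.0]
duals = [np.zeros(1), np.zeros(1)]
x0 = np.array([5.0])
for _ in range(200):
    x0 = primal_dual_round(x0, client_grads, duals)
# The duals absorb the disagreement between clients, so the global model
# settles at the consensus optimum 0 despite heterogeneous local data.
```

The design point: the per-client dual variable tracks the stationary gap between the local loss and the global model, which is what lets such methods tolerate heterogeneity that breaks plain averaging.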
Authors:
1. Zhang, Xinwei (zhan6234@umn.edu), University of Minnesota, Minneapolis, MN, USA
2. Hong, Mingyi (mhong@umn.edu; ORCID 0000-0003-1263-9365), University of Minnesota, Minneapolis, MN, USA
3. Dhople, Sairaj (sdhople@umn.edu; ORCID 0000-0002-1180-1415), University of Minnesota, Minneapolis, MN, USA
4. Yin, Wotao (wotaoyin@math.ucla.edu; ORCID 0000-0001-6697-9731), University of California, Los Angeles, CA, USA
5. Liu, Yang (liuy03@air.tsinghua.edu.cn; ORCID 0000-0003-3800-3533), Institute for AI Industry Research, Tsinghua University, Beijing, China
CODEN: ITPRED
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
DOI: 10.1109/TSP.2021.3115952
Funding:
1. Air Force Office of Scientific Research (AFOSR), grant 19RT0424 (funder ID 10.13039/100000181)
2. ARO, grant W911NF-19-1-0247
Cited by 110 records in Web of Science.
Peer reviewed: yes
Scholarly journal: yes
License:
https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
Subject terms: Agglomeration; Algorithms; Communication; Complexity theory; Computational modeling; convergence analysis; data heterogeneity; Data models; Design optimization; Distributed algorithms; Distributed databases; Federated learning; Heterogeneity; machine learning algorithms; Servers; Signal processing algorithms
Online access:
https://ieeexplore.ieee.org/document/9556559
https://www.proquest.com/docview/2596780540