2D-THA-ADMM: communication efficient distributed ADMM algorithm framework based on two-dimensional torus hierarchical AllReduce


Bibliographic details
Published in: International journal of machine learning and cybernetics, Vol. 15, No. 2, pp. 207-226
Main authors: Wang, Guozheng; Lei, Yongmei; Zhang, Zeyu; Peng, Cunlu
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.02.2024
Subjects:
ISSN: 1868-8071, 1868-808X
Online access: Full text
Abstract Model synchronization refers to the communication process involved in large-scale distributed machine learning tasks. As the cluster scales up, the synchronization of model parameters becomes a challenging task that has to be coordinated among thousands of workers. Firstly, this study proposes a hierarchical AllReduce algorithm structured on a two-dimensional torus (2D-THA), which utilizes a hierarchical structure to synchronize model parameters and maximize bandwidth utilization. Secondly, this study introduces a distributed consensus algorithm called 2D-THA-ADMM, which combines the 2D-THA synchronization algorithm with the alternating direction method of multipliers (ADMM). Thirdly, we evaluate the model parameter synchronization performance of 2D-THA and the scalability of 2D-THA-ADMM on the Tianhe-2 supercomputing platform using real public datasets. Our experiments demonstrate that 2D-THA significantly reduces synchronization time by 63.447% compared to MPI_Allreduce. Furthermore, the proposed 2D-THA-ADMM algorithm exhibits excellent scalability, with a training speed increase of over 3× compared to the state-of-the-art methods, while maintaining high accuracy and computational efficiency.
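The abstract describes synchronizing ADMM model parameters with a hierarchical AllReduce over a two-dimensional process grid. The following is a minimal sketch of that idea, not the authors' implementation: it assumes mpi4py and NumPy are available, and the function name torus_allreduce, the rows parameter, and the final averaging step are illustrative choices. Each worker reduces within its row of the grid and then down its column, so no single collective spans the whole cluster.

```python
# Minimal sketch (assumed, not the authors' code) of a two-stage AllReduce
# over a 2D process grid, in the spirit of 2D-THA: reduce within each row
# first, then across columns. Assumes mpi4py and NumPy.
import numpy as np
from mpi4py import MPI


def torus_allreduce(local_params: np.ndarray, rows: int) -> np.ndarray:
    world = MPI.COMM_WORLD
    rank, size = world.Get_rank(), world.Get_size()
    cols = size // rows  # assumes size == rows * cols

    # Split the workers into row and column sub-communicators of the grid.
    row_comm = world.Split(color=rank // cols, key=rank)
    col_comm = world.Split(color=rank % cols, key=rank)

    # Stage 1: AllReduce inside each row (small, high-bandwidth groups).
    row_sum = np.empty_like(local_params)
    row_comm.Allreduce(local_params, row_sum, op=MPI.SUM)

    # Stage 2: AllReduce down each column; every column now holds a copy of
    # each row sum, so this yields the global sum on every rank.
    global_sum = np.empty_like(local_params)
    col_comm.Allreduce(row_sum, global_sum, op=MPI.SUM)

    # Average to form the consensus variable used by an ADMM-style update.
    return global_sum / size


if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    x_local = np.full(4, float(comm.Get_rank()))  # toy local ADMM iterate
    z = torus_allreduce(x_local, rows=2)          # e.g. mpiexec -n 4
    print(comm.Get_rank(), z)
```

Running with mpiexec -n 4 gives a 2 x 2 grid. The paper's 2D-THA presumably organizes communication along both torus dimensions more carefully to maximize bandwidth utilization; this two-stage sketch only approximates that structure.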
Authors
– Wang, Guozheng (ORCID: 0000-0002-5260-3458), School of Computer Engineering and Science, Shanghai University
– Lei, Yongmei (Lei@shu.edu.cn), School of Computer Engineering and Science, Shanghai University
– Zhang, Zeyu, School of Computer Engineering and Science, Shanghai University
– Peng, Cunlu, School of Computer Engineering and Science, Shanghai University
ContentType Journal Article
Copyright The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
DOI 10.1007/s13042-023-01903-9
Discipline Engineering
Sciences (General)
EISSN 1868-808X
EndPage 226
ISSN 1868-8071
IsPeerReviewed true
IsScholarly true
Issue 2
Keywords Hierarchical AllReduce
Two-dimensional torus
Synchronization algorithm
ADMM
Language English
ORCID 0000-0002-5260-3458
PageCount 20
PublicationDate 2024-02-01
PublicationPlace Berlin/Heidelberg
PublicationTitle International journal of machine learning and cybernetics
PublicationTitleAbbrev Int. J. Mach. Learn. & Cyber.
PublicationYear 2024
Publisher Springer Berlin Heidelberg
StartPage 207
SubjectTerms Artificial Intelligence
Complex Systems
Computational Intelligence
Control
Engineering
Mechatronics
Original Article
Pattern Recognition
Robotics
Systems Biology
Title 2D-THA-ADMM: communication efficient distributed ADMM algorithm framework based on two-dimensional torus hierarchical AllReduce
URI https://link.springer.com/article/10.1007/s13042-023-01903-9
Volume 15