Distributed Graph Computation Meets Machine Learning

Published in: IEEE Transactions on Parallel and Distributed Systems, Volume 31, Issue 7, pp. 1588-1604
Main authors: Xiao, Wencong; Xue, Jilong; Miao, Youshan; Li, Zhen; Chen, Cheng; Wu, Ming; Li, Wei; Zhou, Lidong
Medium: Journal Article
Language: English
Publication details: New York: IEEE, 01.07.2020
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 1045-9219, 1558-2183
Abstract TuX2 is a new distributed graph engine that bridges graph computation and distributed machine learning. TuX2 inherits the benefits of an elegant graph computation model, an efficient graph layout, and balanced parallelism to scale to billion-edge graphs, while being extended and optimized for distributed machine learning: it supports heterogeneity in the data model, Stale Synchronous Parallel (SSP) scheduling, and a new Mini-batch, Exchange, GlobalSync, and Apply (MEGA) programming model. TuX2 further introduces a hybrid vertex-cut graph optimization and supports various consistency models in fault tolerance for machine learning. We have developed a set of representative distributed machine learning algorithms in TuX2, covering both supervised and unsupervised learning. Compared to their implementations on distributed machine learning platforms, writing these algorithms in TuX2 takes only about 25 percent of the code: our graph computation model hides the detailed management of data layout, partitioning, and parallelism from developers. An extensive evaluation of TuX2, using large datasets with up to 64 billion edges, shows that TuX2 outperforms PowerGraph and PowerLyra, the state-of-the-art distributed graph engines, by an order of magnitude, while beating two state-of-the-art distributed machine learning systems by at least 60 percent.
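The MEGA model named in the abstract organizes computation into Mini-batch, Exchange, GlobalSync, and Apply stages. The sketch below is a minimal, hypothetical Python rendering of that stage structure, driving a toy one-factor SGD over edge ratings; the engine loop, stage signatures, and all names are illustrative assumptions, not TuX2's actual API.

# Hypothetical MEGA-style engine loop: edges are processed in mini-batches,
# each batch running Exchange (edge-level), Apply (vertex-level), and
# GlobalSync (shared-state) stages. Illustrative only; not TuX2's API.
import random

def run_mega(vertices, edges, exchange, apply_fn, global_sync,
             batch_size=2, epochs=3):
    shared = {"err": 0.0}                       # globally shared state
    for _ in range(epochs):
        random.shuffle(edges)
        for i in range(0, len(edges), batch_size):
            batch = edges[i:i + batch_size]
            for u, v, rating in batch:          # Exchange stage
                exchange(vertices[u], vertices[v], rating, shared)
            for vid in {x for u, v, _ in batch for x in (u, v)}:
                apply_fn(vertices[vid])         # Apply stage
            global_sync(shared)                 # GlobalSync stage

LR = 0.05  # learning rate for the toy example below

def exchange(pu, pv, rating, shared):
    # One-dimensional matrix-factorization-style SGD on an edge rating.
    err = rating - pu["f"] * pv["f"]
    shared["err"] += err * err
    pu["grad"] += LR * err * pv["f"]
    pv["grad"] += LR * err * pu["f"]

def apply_fn(p):
    p["f"] += p["grad"]        # fold accumulated gradient into the factor
    p["grad"] = 0.0

def global_sync(shared):
    print("batch squared error: %.4f" % shared["err"])
    shared["err"] = 0.0

vertices = {i: {"f": 0.5, "grad": 0.0} for i in range(4)}
edges = [(0, 2, 1.0), (0, 3, 0.5), (1, 2, 0.2), (1, 3, 0.9)]
run_mega(vertices, edges, exchange, apply_fn, global_sync)

Here Exchange does edge-level work (computing the prediction error and accumulating gradients on both endpoints), Apply folds each touched vertex's accumulated gradient into its state, and GlobalSync reconciles the globally shared error once per mini-batch.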
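The abstract also highlights Stale Synchronous Parallel (SSP) scheduling. The usual SSP condition lets a worker run ahead of the slowest worker by at most a bounded staleness; the minimal check below illustrates that condition under assumed per-worker clock bookkeeping, not TuX2's actual scheduler.

# Minimal sketch of the Stale Synchronous Parallel (SSP) condition: a worker
# may advance only while it is at most `staleness` clocks ahead of the
# slowest worker. Illustrative bookkeeping, not TuX2's scheduler.
def can_proceed(worker_clocks, worker_id, staleness):
    return worker_clocks[worker_id] - min(worker_clocks.values()) <= staleness

clocks = {"w0": 5, "w1": 3, "w2": 4}
print(can_proceed(clocks, "w0", staleness=2))   # True: 5 - 3 = 2 <= 2
print(can_proceed(clocks, "w0", staleness=1))   # False: 5 - 3 = 2 > 1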
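Finally, the hybrid vertex-cut optimization builds on vertex-cut partitioning, in which edges are assigned to partitions and any vertex spanning several partitions is replicated (one master plus mirrors). The greedy heuristic below is a generic sketch of that layout family, not TuX2's actual placement policy.

# Generic greedy vertex-cut: assign each edge to a partition, preferring
# partitions that already hold its endpoints, and track how many replicas
# each vertex ends up with. Illustrative only; not TuX2's placement policy.
from collections import defaultdict

def vertex_cut(edges, num_parts):
    placed = defaultdict(set)      # vertex -> partitions holding a replica
    parts = defaultdict(list)      # partition -> edges assigned to it
    for u, v in edges:
        both = placed[u] & placed[v]
        either = placed[u] | placed[v]
        candidates = both or either or set(range(num_parts))
        p = min(candidates, key=lambda q: len(parts[q]))  # least loaded
        parts[p].append((u, v))
        placed[u].add(p)
        placed[v].add(p)
    replication = sum(len(s) for s in placed.values()) / len(placed)
    return dict(parts), replication

parts, rep = vertex_cut([(0, 1), (0, 2), (0, 3), (2, 3), (1, 3)], num_parts=2)
print(parts, "avg replicas per vertex:", rep)

Lower average replication means less synchronization traffic between a vertex's replicas, which is why vertex-cut quality matters at billion-edge scale.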
Authors:
1. Wencong Xiao, Alibaba Group, Hangzhou, China (wencong.xwc@alibaba-inc.com; ORCID 0000-0002-3043-522X)
2. Jilong Xue, Microsoft Research, Beijing, China (jxue@microsoft.com)
3. Youshan Miao, Microsoft Research, Beijing, China (yomia@microsoft.com)
4. Zhen Li, Google, Mountain View, CA, USA (lizhenpi@gmail.com)
5. Cheng Chen, ByteDance, Beijing, China (chencheng.kit@bytedance.com)
6. Ming Wu, Conflux Foundation, Singapore (ming@conflux-chain.org)
7. Wei Li, State Key Laboratory of Software Development Environment, Beihang University, Beijing, China (liwei@nlsde.buaa.edu.cn)
8. Lidong Zhou, Microsoft Research, Beijing, China (lidongz@microsoft.com; ORCID 0000-0002-7258-3116)
CODEN ITDSEO
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
DOI 10.1109/TPDS.2020.2970047
Discipline Engineering
Computer Science
EISSN 1558-2183
EndPage 1604
Genre orig-research
GrantInformation National Natural Science Foundation of China (NSFC), grant 61472009, funder ID 10.13039/501100001809
ISSN 1045-9219
IsPeerReviewed true
IsScholarly true
Issue 7
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
PageCount 17
PublicationDate 2020-07-01
PublicationPlace New York
PublicationTitle IEEE transactions on parallel and distributed systems
PublicationTitleAbbrev TPDS
PublicationYear 2020
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 1588
SubjectTerms Algorithms
distributed machine learning
Fault tolerance
Graph computing
Graph theory
heterogeneity
Layouts
Machine learning
MEGA model
Optimization
stale synchronous parallel
Title Distributed Graph Computation Meets Machine Learning
URI https://ieeexplore.ieee.org/document/8974443
https://www.proquest.com/docview/2368186499
Volume 31