Falcon: Addressing Stragglers in Heterogeneous Parameter Server Via Multiple Parallelism
| Published in: | IEEE Transactions on Computers, Vol. 70, Issue 1, pp. 139-155 |
|---|---|
| Main Authors: | Qihua Zhou; Song Guo; Haodong Lu; Li Li; Minyi Guo; Yanfei Sun; Kun Wang |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.01.2021 |
| Subjects: | Distributed deep learning; parameter server; straggler; heterogeneous environment |
| ISSN: | 0018-9340, 1557-9956 |
| Online Access: | Full text |
| Abstract | The parameter server architecture has shown promising performance advantages when handling deep learning (DL) applications. One crucial issue in this regard is the presence of stragglers, which significantly retards DL training progress. Previous solutions for mitigating stragglers may not fully exploit the computation resources of the cluster, as evidenced by our experiments, especially in heterogeneous environments. This motivates us to design a heterogeneity-aware parameter server paradigm that addresses stragglers and accelerates DL training from the perspective of computation parallelism. We introduce a novel methodology named straggler projection to give a comprehensive inspection of stragglers and reveal practical guidelines for solving this problem in two aspects: (1) controlling each worker's training speed via elastic training parallelism control and (2) transferring blocked tasks from stragglers to pioneers to fully utilize the computation resources. Following these guidelines, we propose the abstraction of parallelism as an infrastructure and design the Elastic-Parallelism Synchronous Parallel (EPSP) algorithm to handle distributed training and parameter synchronization, supporting both enforced- and slack-synchronization schemes. The whole idea has been implemented in a prototype called Falcon, which effectively accelerates DL training in the presence of stragglers. Evaluation under various benchmarks with baseline comparison demonstrates the superiority of our system. Specifically, Falcon reduces training convergence time by up to 61.83, 55.19, 38.92, and 23.68 percent relative to FlexRR, Sync-opt, ConSGD, and DynSGD, respectively. (An illustrative sketch of the EPSP idea follows this record.) |
|---|---|
| Author | Qihua Zhou (School of Automation and School of Artificial Intelligence, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China; ORCID 0000-0002-2052-638X); Song Guo (Department of Computing, The Hong Kong Polytechnic University, Hong Kong; ORCID 0000-0001-9831-2202); Haodong Lu (School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China); Li Li (School of Software, Shanghai Jiao Tong University, Shanghai, China); Minyi Guo (Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China); Yanfei Sun (School of Automation and School of Artificial Intelligence, Nanjing University of Posts and Telecommunications, Nanjing, China; ORCID 0000-0003-0085-1545); Kun Wang (Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA; ORCID 0000-0002-9099-2781) |
| CODEN | ITCOB4 |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
| DOI | 10.1109/TC.2020.2974461 |
| Discipline | Engineering; Computer Science |
| EISSN | 1557-9956 |
| EndPage | 155 |
| Genre | Original research |
| GrantInformation | National Key Research and Development Program of China (2018YFB1003500); Natural Science Foundation of Jiangsu Province (BK20191381); General Research Fund of the Research Grants Council of Hong Kong (PolyU 152221/19E); Jiangsu Key Research and Development Program (BE2019742); National Natural Science Foundation of China (61872310, 61772286, 61872195, 61872240, 61832006, 61572262, 61802208) |
| ISSN | 0018-9340 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 1 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
| PageCount | 17 |
| PublicationDate | 2021-01-01 |
| PublicationPlace | New York |
| PublicationTitle | IEEE transactions on computers |
| PublicationTitleAbbrev | TC |
| PublicationYear | 2021 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 139 |
| SubjectTerms | Algorithms; Computation; Computational modeling; Computer architecture; Convergence; Design parameters; Distributed deep learning; Guidelines; Heterogeneity; heterogeneous environment; Inspection; Machine learning; Parallel processing; parameter server; Servers; straggler; Synchronism; Task analysis; Training |
| Title | Falcon: Addressing Stragglers in Heterogeneous Parameter Server Via Multiple Parallelism |
| URI | https://ieeexplore.ieee.org/document/9000921; https://www.proquest.com/docview/2469476351 |
| Volume | 70 |
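To make the mechanism in the abstract concrete, here is a minimal, self-contained Python sketch of the two guidelines it names: elastic parallelism control and straggler-to-pioneer task transfer under a synchronization barrier. It is written from the abstract alone; every name and number (`round_time`, `elastic_assign`, `may_proceed`, the worker speeds) is hypothetical and does not reflect Falcon's actual implementation.

```python
# Toy sketch of the EPSP ideas described in the abstract. Hypothetical code,
# not Falcon's implementation: all names and numbers are illustrative.

def round_time(parallelism, speeds):
    # Enforced synchronization: every worker waits at the barrier, so an
    # iteration lasts as long as its slowest member (the straggler).
    return max(p / s for p, s in zip(parallelism, speeds))

def elastic_assign(total_tasks, speeds):
    # Elastic parallelism control: assign each worker a share of the global
    # workload proportional to its measured speed, which in effect moves
    # blocked tasks from stragglers to pioneers.
    total_speed = sum(speeds)
    return [max(1, round(total_tasks * s / total_speed)) for s in speeds]

def may_proceed(worker_clock, clocks, slack):
    # Slack synchronization: a pioneer may start its next iteration only if
    # it is at most `slack` iterations ahead of the slowest worker.
    # slack == 0 recovers the enforced (fully synchronous) scheme.
    return worker_clock - min(clocks) <= slack

speeds = [4.0, 2.0, 1.0]      # tasks/sec; worker 2 is the straggler
uniform = [4, 4, 4]           # heterogeneity-oblivious static split
elastic = elastic_assign(sum(uniform), speeds)

print("uniform split:", uniform, "-> round time", round_time(uniform, speeds))
print("elastic split:", elastic, "-> round time", round_time(elastic, speeds))
print("pioneer may run ahead:", may_proceed(5, [3, 4, 5], slack=2))
```

With these toy numbers, the speed-proportional split halves the barrier-dominated round time (from 4.0 to 2.0 time units), which mirrors the abstract's argument: once parallelism is elastic, the straggler no longer dictates the iteration length, and the slack parameter trades a bounded amount of staleness for even less waiting.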