Convergence of Distributed Stochastic Variance Reduced Methods Without Sampling Extra Data



Bibliographic details
Published in: IEEE transactions on signal processing, Vol. 68; pp. 3976-3989
Main authors: Cen, Shicong; Zhang, Huishuai; Chi, Yuejie; Chen, Wei; Liu, Tie-Yan
Format: Journal Article
Language: English
Published: New York: IEEE, 2020
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subject terms:
ISSN:1053-587X, 1941-0476
Online access: Full text
Abstract Stochastic variance reduced methods have recently gained a lot of interest for empirical risk minimization due to their appealing runtime complexity. When the data size is large and disjointly stored on different machines, it becomes imperative to distribute the implementation of such variance reduced methods. In this paper, we consider a general framework that directly distributes popular stochastic variance reduced methods in the master/slave model, by assigning outer loops to the parameter server and inner loops to worker machines. This framework is natural and easy to implement, but its theoretical convergence is not well understood. We obtain a comprehensive understanding of algorithmic convergence with respect to data homogeneity by measuring the smoothness of the discrepancy between the local and global loss functions. We establish the linear convergence of distributed versions of a family of stochastic variance reduced algorithms, including those using accelerated and recursive gradient updates, for minimizing strongly convex losses. Our theory captures how the convergence of distributed algorithms behaves as the number of machines and the size of local data vary. Furthermore, we show that when the data are less balanced, regularization can be used to ensure convergence at a slower rate. We also demonstrate that our analysis can be further extended to handle nonconvex loss functions.
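To pin down the abstract's homogeneity measure, here is one plausible formalization in LaTeX, consistent with the abstract's wording but not a verbatim quote of the paper's definitions: with K machines, local empirical losses f_k, and the global loss taken as their average, convergence is governed by a smoothness constant of the discrepancy f_k - f.

% Hedged formalization: the constant c below is illustrative notation,
% quantifying the "smoothness of the discrepancy between the local and
% global loss functions" mentioned in the abstract.
f(\mathbf{x}) = \frac{1}{K}\sum_{k=1}^{K} f_k(\mathbf{x}), \qquad
\left\|\nabla (f_k - f)(\mathbf{x}) - \nabla (f_k - f)(\mathbf{y})\right\|_2
\le c\,\left\|\mathbf{x} - \mathbf{y}\right\|_2 \quad \text{for all } \mathbf{x},\mathbf{y}.

A small c means the local shards are statistically alike, so a worker's inner loop makes progress on the global objective; when the data are less balanced (c large), the regularization mentioned in the abstract trades convergence speed for guaranteed convergence.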
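The master/slave framework itself is easy to sketch. Below is a minimal single-process Python simulation of directly distributed SVRG on a least-squares objective; the names (local_grad, distributed_svrg, shards), the objective, and the averaging aggregation step are illustrative assumptions, not the authors' implementation. The parameter server runs the outer loop and aggregates full gradients; each worker runs an SVRG inner loop on its local shard only, i.e., without sampling extra data.

import numpy as np

def local_grad(A, b, x):
    # Full gradient of the local least-squares loss f_k(x) = ||Ax - b||^2 / (2 n_k).
    return A.T @ (A @ x - b) / len(b)

def distributed_svrg(shards, x0, eta=0.1, outer_iters=30, inner_iters=50, seed=0):
    # shards: list of (A_k, b_k) pairs, one disjoint local data shard per worker.
    rng = np.random.default_rng(seed)
    K, x = len(shards), x0.copy()
    for _ in range(outer_iters):           # outer loop: parameter server
        snapshot = x.copy()
        # Server aggregates the workers' full local gradients at the snapshot.
        full_grad = sum(local_grad(A, b, snapshot) for A, b in shards) / K
        updates = []
        for A, b in shards:                # inner loops: one per worker
            xk = snapshot.copy()
            for _ in range(inner_iters):
                i = rng.integers(len(b))   # sample from the local shard only
                g_cur = A[i] * (A[i] @ xk - b[i])
                g_snap = A[i] * (A[i] @ snapshot - b[i])
                xk -= eta * (g_cur - g_snap + full_grad)  # variance-reduced step
            updates.append(xk)
        x = sum(updates) / K               # server averages the worker iterates
    return x

In a real deployment the inner loops run in parallel on the workers, and only the snapshot, the aggregated full gradient, and the averaged iterate cross the network once per outer round, which is what makes this direct distribution communication-friendly.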
Author Cen, Shicong (shicongc@andrew.cmu.edu), Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
Zhang, Huishuai (huzhang@microsoft.com), Microsoft Research Asia, Beijing, China
Chi, Yuejie (yuejiechi@cmu.edu, ORCID 0000-0002-6766-5459), Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
Chen, Wei (wche@microsoft.com), Microsoft Research Asia, Beijing, China
Liu, Tie-Yan (tyliu@microsoft.com), Microsoft Research Asia, Beijing, China
CODEN ITPRED
CitedBy (Crossref DOIs) 10.1109/JIOT.2024.3464239
10.1109/TSP.2023.3316588
10.1007/s41060-024-00625-7
10.1109/TSP.2024.3351469
10.1137/20M1361158
10.1109/TSP.2024.3368751
10.1016/j.sigpro.2021.108020
10.1109/TSP.2020.3029461
10.1016/j.iot.2025.101726
10.1109/TVT.2021.3089431
10.1109/TSP.2024.3356257
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
DOI 10.1109/TSP.2020.3005291
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Discipline Engineering
EISSN 1941-0476
EndPage 3989
ExternalDocumentID 10_1109_TSP_2020_3005291
9127115
Genre orig-research
GrantInformation_xml – fundername: Office of Naval Research
  grantid: N00014-18-1-2142; N00014-19-1-2404
  funderid: 10.13039/100000006
– fundername: National Science Foundation
  grantid: CCF-1806154; CCF-1901199; CCF-2007911
  funderid: 10.13039/100000001
– fundername: Army Research Office
  grantid: W911NF-18-1-0303
  funderid: 10.13039/100000183
ISICitedReferencesCount 14
ISSN 1053-587X
IsPeerReviewed true
IsScholarly true
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-6766-5459
PQID 2425600489
PQPubID 85478
PageCount 14
PublicationCentury 2000
PublicationDate 2020
PublicationDecade 2020
PublicationPlace New York
PublicationTitle IEEE transactions on signal processing
PublicationTitleAbbrev TSP
PublicationYear 2020
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 3976
SubjectTerms Algorithms
Convergence
Distributed databases
Distributed optimization
Empirical analysis
Homogeneity
master/slave model
Optimization
Regularization
Risk management
Servers
Signal processing algorithms
Smoothness
stochastic optimization
Stochastic processes
variance reduction
Title Convergence of Distributed Stochastic Variance Reduced Methods Without Sampling Extra Data
URI https://ieeexplore.ieee.org/document/9127115
https://www.proquest.com/docview/2425600489
Volume 68