Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization

Bibliographic Details
Published in: EURO Journal on Computational Optimization, Vol. 10, Art. 100045
Main Authors: Dvurechensky, Pavel; Kamzolov, Dmitry; Lukashevich, Aleksandr; Lee, Soomin; Ordentlich, Erik; Uribe, César A.; Gasnikov, Alexander
Format: Journal Article
Language: English
Published: Elsevier Ltd, 2022
Subjects: Empirical risk minimization; Distributed optimization; Tensor optimization methods; Statistical preconditioning
ISSN:2192-4406
Online Access: Full text
Abstract Statistical preconditioning enables fast methods for distributed large-scale empirical risk minimization problems. In this approach, multiple worker nodes compute gradients in parallel, which are then used by the central node to update the parameter by solving an auxiliary (preconditioned) smaller-scale optimization problem. The recently proposed Statistically Preconditioned Accelerated Gradient (SPAG) method [1] has complexity bounds superior to those of other such algorithms, but requires an exact solution of a computationally intensive auxiliary optimization problem at every iteration. In this paper, we propose an Inexact SPAG (InSPAG) method and explicitly characterize the accuracy to which the corresponding auxiliary subproblem needs to be solved in order to guarantee the same convergence rate as the exact method. We build our results by first developing an inexact adaptive accelerated Bregman proximal gradient method for general optimization problems under relative smoothness and strong convexity assumptions, which may be of independent interest. Moreover, we explore the properties of the auxiliary problem in the InSPAG algorithm under the assumptions of Lipschitz third-order derivatives and strong convexity. For this problem class, we develop a linearly convergent Hyperfast second-order method and estimate the total complexity of the InSPAG method with the Hyperfast auxiliary problem solver. Finally, we illustrate the proposed method's practical efficiency by performing large-scale numerical experiments on logistic regression models. To the best of our knowledge, these are the first empirical results on implementing high-order methods on large-scale problems: we work with data whose dimension is of the order of 3 million and whose number of samples is 700 million.
Highlights:
• Inexact Statistically Preconditioned Accelerated Gradient method for large-scale distributed convex empirical risk minimization problems.
• Hyperfast second-order algorithm for minimizing strongly convex functions with Lipschitz third-order derivatives.
• Inexact adaptive accelerated Bregman proximal gradient method for minimization under relative smoothness and strong convexity assumptions.
• Empirical evidence for the efficiency of tensor optimization methods for large-scale ERM problems.
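The update the abstract describes can be made concrete with a short sketch. Below is a minimal, illustrative Python sketch of one statistically preconditioned step with an explicit inexactness criterion, in the spirit of InSPAG: the central node minimizes the Bregman model V(x) = <grad F(y), x> + L_rel * D_phi(x, y) only up to a gradient-norm tolerance instead of exactly. All names (bregman_grad, inexact_spag_step, inner_solve, L_rel) are assumptions introduced here for illustration, not the authors' implementation; the acceleration (extrapolation) steps, the adaptive choice of the relative-smoothness constant, and the paper's Hyperfast second-order inner solver are omitted.

import numpy as np

def bregman_grad(x, y, grad_F_y, grad_phi, L_rel):
    # Gradient of the auxiliary model
    #   V(x) = <grad F(y), x> + L_rel * D_phi(x, y),
    # where D_phi is the Bregman divergence of the preconditioner phi
    # (e.g., a regularized local empirical loss held by the central node):
    #   grad V(x) = grad F(y) + L_rel * (grad phi(x) - grad phi(y)).
    return grad_F_y + L_rel * (grad_phi(x) - grad_phi(y))

def inexact_spag_step(y, grad_F_y, grad_phi, L_rel, tol, inner_solve):
    # One preconditioned step: minimize V(x) only approximately,
    # stopping once ||grad V(x)|| <= tol. An explicit tolerance of this
    # kind is what distinguishes the inexact method from exact SPAG.
    grad_V = lambda x: bregman_grad(x, y, grad_F_y, grad_phi, L_rel)
    return inner_solve(grad_V, x0=y, tol=tol)

def gradient_descent(grad_V, x0, tol, step=1e-2, max_iter=10_000):
    # Placeholder inner solver used only to make the sketch runnable;
    # the paper instead uses a linearly convergent Hyperfast
    # second-order method for this strongly convex subproblem.
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_V(x)
        if np.linalg.norm(g) <= tol:
            break
        x -= step * g
    return x

For example, with grad_F_y aggregated from the workers and grad_phi the gradient of the central node's reference function, x_next = inexact_spag_step(y, grad_F_y, grad_phi, L_rel=1.0, tol=1e-6, inner_solve=gradient_descent) performs one inexact preconditioned update; tightening tol recovers the exact SPAG step.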
ArticleNumber 100045
Author Dvurechensky, Pavel
Lukashevich, Aleksandr
Lee, Soomin
Ordentlich, Erik
Uribe, César A.
Kamzolov, Dmitry
Gasnikov, Alexander
Author_xml – sequence: 1
  givenname: Pavel
  orcidid: 0000-0003-1201-2343
  surname: Dvurechensky
  fullname: Dvurechensky, Pavel
  email: pavel.dvurechensky@wias-berlin.de
  organization: Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany
– sequence: 2
  givenname: Dmitry
  orcidid: 0000-0001-8488-9692
  surname: Kamzolov
  fullname: Kamzolov, Dmitry
  email: kamzolov.dmitry@phystech.edu
  organization: Moscow Institute of Physics and Technology, Dolgoprudny, Russia
– sequence: 3
  givenname: Aleksandr
  orcidid: 0000-0002-4986-9941
  surname: Lukashevich
  fullname: Lukashevich, Aleksandr
  email: aleksandr.lukashevich@skoltech.ru
  organization: Center for Energy Science and Technology, Skolkovo Institute of Science and Technology, Moscow, Russia
– sequence: 4
  givenname: Soomin
  surname: Lee
  fullname: Lee, Soomin
  email: soominl@yahooinc.com
  organization: Yahoo! Research, Sunnyvale, CA, United States of America
– sequence: 5
  givenname: Erik
  surname: Ordentlich
  fullname: Ordentlich, Erik
  email: eord@yahooinc.com
  organization: Yahoo! Research, Sunnyvale, CA, United States of America
– sequence: 6
  givenname: César A.
  orcidid: 0000-0002-7080-9724
  surname: Uribe
  fullname: Uribe, César A.
  email: cauribe@rice.edu
  organization: Rice University, Houston, TX, United States of America
– sequence: 7
  givenname: Alexander
  surname: Gasnikov
  fullname: Gasnikov, Alexander
  email: gasnikov.av@mipt.ru
  organization: Moscow Institute of Physics and Technology, Dolgoprudny, Russia
CitedBy_id 10.1080/10556788.2025.2456118
10.1109/TSP.2022.3185885
10.1007/s11590-021-01834-w
10.1016/j.ejco.2024.100098
Cites_doi 10.1007/s10957-016-0999-6
10.1007/s10107-016-1065-8
10.1007/s10107-013-0677-5
10.1137/20M134842X
10.1137/110833786
10.1145/1327452.1327492
10.1134/S0965542518010050
10.1137/20M134705X
10.1007/s10957-021-01930-y
10.1137/16M1099546
10.1137/19M130769X
10.1287/moor.2016.0817
10.1007/s10107-021-01618-1
10.1137/19M1259973
10.1007/s10107-012-0629-5
10.1007/s10589-021-00273-8
10.1007/s10107-019-01406-y
10.1007/s10107-004-0552-5
10.1016/j.ejco.2021.100015
10.1137/080716542
10.1080/10556788.2021.1924714
10.1137/16M1106316
ContentType Journal Article
Copyright 2022 The Author(s)
DOI 10.1016/j.ejco.2022.100045
Discipline Engineering
ExternalDocumentID oai_doaj_org_article_fb8d7144784445eeb3c5783a2c4b2d3f
10_1016_j_ejco_2022_100045
S2192440622000211
ISSN 2192-4406
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Empirical risk minimization
Distributed optimization
Tensor optimization methods
Statistical preconditioning
Language English
License This is an open access article under the CC BY-NC-ND license.
ORCID 0000-0002-7080-9724
0000-0001-8488-9692
0000-0002-4986-9941
0000-0003-1201-2343
OpenAccessLink https://doaj.org/article/fb8d7144784445eeb3c5783a2c4b2d3f
PublicationDate 2022
PublicationTitle EURO journal on computational optimization
PublicationYear 2022
Publisher Elsevier Ltd
References
[1] Hendrikx, Xiao, Bubeck, Bach, Massoulié, Statistically preconditioned accelerated gradient method for distributed optimization, in: Proceedings of the 37th International Conference on Machine Learning, PMLR, vol. 119, 2020, pp. 4203–4227.
[2] Shamir, Srebro, Zhang, Communication-efficient distributed optimization using an approximate Newton-type method, in: Proceedings of the 31st International Conference on Machine Learning, PMLR, vol. 32, 2014, pp. 1000–1008.
[3] Yuan, Li, On convergence of distributed approximate Newton methods: globalization, sharper bounds and beyond, J. Mach. Learn. Res. 21 (206) (2020) 1–51.
[4] Wang, Roosta, Xu, Mahoney, GIANT: globally improved approximate Newton method for distributed optimization, in: Advances in Neural Information Processing Systems, 2018, pp. 2332–2342.
[5] Hendrikx, Bach, Massoulié, An optimal algorithm for decentralized finite-sum optimization, SIAM J. Optim. 31 (4) (2021) 2753–2783, doi:10.1137/20M134842X.
[6] Yang, Trading computation for communication: distributed stochastic dual coordinate ascent, in: Advances in Neural Information Processing Systems, 2013, pp. 629–637.
[7] Li, Andersen, Park, Smola, Ahmed, Josifovski, Long, Shekita, Su, Scaling distributed machine learning with the parameter server, in: 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), 2014, pp. 583–598.
[8] Dean, Ghemawat, MapReduce: simplified data processing on large clusters, Commun. ACM 51 (1) (2008) 107–113, doi:10.1145/1327452.1327492.
[9] Lan, Lee, Zhou, Communication-efficient algorithms for decentralized and stochastic optimization, Math. Program. (2018) 1–48.
[10] Nesterov, Lectures on Convex Optimization, vol. 137, 2018.
[11] Reddi, Konečný, Richtárik, Póczós, Smola, AIDE: fast and communication efficient distributed optimization.
[12] Zhang, Lin, DiSCO: distributed optimization for self-concordant empirical loss, in: Proceedings of the 32nd International Conference on Machine Learning, PMLR, vol. 37, 2015, pp. 362–370.
[13] Lin, Mairal, Harchaoui, A universal catalyst for first-order optimization, in: Proceedings of the 28th International Conference on Neural Information Processing Systems, vol. 2, 2015, pp. 3384–3392.
[14] Dragomir, Taylor, d'Aspremont, Bolte, Optimal complexity and certification of Bregman first-order methods, Math. Program. 194 (1) (2022) 41–83, doi:10.1007/s10107-021-01618-1.
[15] Arjevani, Shamir, Communication complexity of distributed convex learning and optimization, in: Advances in Neural Information Processing Systems, vol. 28, 2015, pp. 1756–1764.
[16] Sun, Scutari, Daneshmand, Distributed optimization based on gradient tracking revisited: enhancing convergence rate via surrogation, SIAM J. Optim. 32 (2) (2022) 354–385, doi:10.1137/19M1259973.
[17] Bullins, Highly smooth minimization of non-smooth problems, in: Proceedings of the Thirty Third Conference on Learning Theory, PMLR, vol. 125, 2020, pp. 988–1030.
[18] Birgin, Gardenghi, Martínez, Santos, Toint, Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models, Math. Program. 163 (1) (2017) 359–368, doi:10.1007/s10107-016-1065-8.
[19] Carmon, Duchi, Hinder, Sidford, Lower bounds for finding stationary points I, Math. Program. 184 (1) (2020) 71–120, doi:10.1007/s10107-019-01406-y.
[20] Cartis, Gould, Toint, Universal regularization methods: varying the power, the smoothness and the accuracy, SIAM J. Optim. 29 (1) (2019) 595–615, doi:10.1137/16M1106316.
[21] Baes, Estimate Sequence Methods: Extensions and Approximations, 2009.
[22] Nesterov, Implementable tensor methods in unconstrained convex optimization, Math. Program. (2019) 1–27.
[23] Gasnikov, Dvurechensky, Gorbunov, Vorontsova, Selikhanovych, Uribe, Jiang, Wang, Zhang, Bubeck, Jiang, Lee, Li, Sidford, Near optimal methods for minimizing convex functions with Lipschitz p-th derivatives, in: Proceedings of the Thirty-Second Conference on Learning Theory, PMLR, vol. 99, 2019, pp. 1392–1393.
[24] Nesterov, Superfast second-order methods for unconstrained convex optimization, J. Optim. Theory Appl. (2021) 1–30, doi:10.1007/s10957-021-01930-y.
[25] Nesterov, Inexact high-order proximal-point methods with auxiliary search procedure, SIAM J. Optim. 31 (4) (2021) 2807–2828, doi:10.1137/20M134705X.
[26] Kamzolov, Gasnikov, Near-optimal hyperfast second-order method for convex optimization and its sliding.
[27] Cormen, Leiserson, Rivest, Stein, Introduction to Algorithms, 2009.
[28] Huang, Smith, Henry, van de Geijn, Strassen's algorithm reloaded, in: SC'16: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2016, pp. 690–701.
[29] Bauschke, Bolte, Teboulle, A descent lemma beyond Lipschitz gradient continuity: first-order methods revisited and applications, Math. Oper. Res. 42 (2) (2016) 330–348, doi:10.1287/moor.2016.0817.
[30] Lu, Freund, Nesterov, Relatively smooth convex optimization by first-order methods, and applications, SIAM J. Optim. 28 (1) (2018) 333–354, doi:10.1137/16M1099546.
[31] Stonyakin, Tyurin, Gasnikov, Dvurechensky, Agafonov, Dvinskikh, Alkousa, Pasechnyuk, Artamonov, Piskunova, Inexact model: a framework for optimization and variational inequalities, Optim. Methods Softw. 36 (6) (2021) 1155–1201, doi:10.1080/10556788.2021.1924714.
[32] Ben-Tal, Nemirovski, Lectures on Modern Convex Optimization (Lecture Notes), personal web page of A. Nemirovski, 2020.
[33] Devolder, Glineur, Nesterov, First-order methods of smooth convex optimization with inexact oracle, Math. Program. 146 (1) (2014) 37–75, doi:10.1007/s10107-013-0677-5.
[34] Dvurechensky, Gasnikov, Stochastic intermediate gradient method for convex problems with stochastic inexact oracle, J. Optim. Theory Appl. 171 (1) (2016) 121–145, doi:10.1007/s10957-016-0999-6.
[35] Beck, Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci. 2 (1) (2009) 183–202, doi:10.1137/080716542.
[36] Nesterov, Gradient methods for minimizing composite functions, Math. Program. 140 (1) (2013) 125–161, doi:10.1007/s10107-012-0629-5.
[37] Hanzely, Richtárik, Xiao, Accelerated Bregman proximal gradient methods for relatively smooth convex optimization, Comput. Optim. Appl. 79 (2) (2021) 405–440, doi:10.1007/s10589-021-00273-8.
[38] Florea, Exact gradient methods with memory, Optim. Methods Softw. (2022) 1–28.
[39] Bauschke, Bolte, Teboulle, A descent lemma beyond Lipschitz gradient continuity: first-order methods revisited and applications, Math. Oper. Res. 42 (2) (2017) 330–348, doi:10.1287/moor.2016.0817.
[40] Scaman, Bach, Bubeck, Lee, Massoulié, Optimal algorithms for smooth and strongly convex distributed optimization in networks, in: Proceedings of the 34th International Conference on Machine Learning, PMLR, vol. 70, 2017, pp. 3027–3036.
[41] Gasnikov, Nesterov, Universal method for stochastic composite optimization problems, Comput. Math. Math. Phys. 58 (1) (2018) 48–64, doi:10.1134/S0965542518010050.
[42] Nesterov, Lectures on Convex Optimization, Springer Optimization and Its Applications, vol. 137, 2018.
[43] Dvurechensky, Gasnikov, Kroshnin, Computational optimal transport: complexity by accelerated gradient descent is better than by Sinkhorn's algorithm, in: Proceedings of the 35th International Conference on Machine Learning, PMLR, vol. 80, 2018, pp. 1367–1376.
[44] Dvurechensky, Dvinskikh, Gasnikov, Uribe, Nedić, Decentralize and randomize: faster algorithm for Wasserstein barycenters, in: Advances in Neural Information Processing Systems, vol. 31, 2018, pp. 10783–10793.
[45] Dvurechensky, Shtern, Staudigl, First-order methods for convex optimization, EURO J. Comput. Optim. 9 (2021), doi:10.1016/j.ejco.2021.100015.
[46] Lin, Xiao, An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization, in: Proceedings of the 31st International Conference on Machine Learning, PMLR, vol. 32, 2014, pp. 73–81.
[47] Monteiro, Svaiter, An accelerated hybrid proximal extragradient method for convex optimization and its implications to second-order methods, SIAM J. Optim. 23 (2) (2013) 1092–1125, doi:10.1137/110833786.
[48] Nesterov, Smooth minimization of non-smooth functions, Math. Program. 103 (1) (2005) 127–152, doi:10.1007/s10107-004-0552-5.
[49] Lan, First-Order and Stochastic Optimization Methods for Machine Learning, 2020.
[50] Doikov, Nesterov, Contracting proximal methods for smooth convex optimization, SIAM J. Optim. 30 (4) (2020) 3146–3169, doi:10.1137/19M130769X.
[51] Nesterov, Inexact basic tensor methods for some classes of convex optimization problems, Optim. Methods Softw. (2020) 1–29.
[52] Gasnikov, Universal Gradient Descent.
[53] Doikov, Nesterov, Inexact tensor methods with dynamic accuracies, in: Proceedings of the 37th International Conference on Machine Learning, PMLR, vol. 119, 2020, pp. 2577–2586.
[54] Agafonov, Kamzolov, Dvurechensky, Gasnikov, Inexact tensor methods and their application to stochastic convex optimization.
[55] Kamzolov, Gasnikov, Dvurechensky, Optimal combination of tensor optimization methods, in: Optimization and Applications, 2020, pp. 166–183.
[56] Lewis, Yang, Rose, Li, RCV1: a new benchmark collection for text categorization research, J. Mach. Learn. Res. 5 (2004) 361–397.
[57] Apache Spark 2.4.5, 2020.
[58] PyTorch, 2020.
[59] Kamzolov, Near-optimal hyperfast second-order method for convex optimization, in: Mathematical Optimization Theory and Operations Research, 2020, pp. 167–178.
[60] Kingma, Ba, Adam: a method for stochastic optimization.
[61] Shalev-Shwartz, SDCA without duality, regularization, and individual convexity, in: Proceedings of the 33rd International Conference on Machine Learning, PMLR, vol. 48, 2016, pp. 747–754.
[62] Shamir, Srebro, Zhang, Communication-efficient distributed optimization using an approximate Newton-type method, in: International Conference on Machine Learning, 2014, pp. 1000–1008.
StartPage 100045
SubjectTerms Distributed optimization
Empirical risk minimization
Statistical preconditioning
Tensor optimization methods
Title Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
URI https://dx.doi.org/10.1016/j.ejco.2022.100045
https://doaj.org/article/fb8d7144784445eeb3c5783a2c4b2d3f
Volume 10