Parallel and distributed sparse optimization


Bibliographic details
Published in: Conference record - Asilomar Conference on Signals, Systems, & Computers, pp. 659 - 646
Main authors: Peng, Zhimin; Yan, Ming; Yin, Wotao
Format: Conference paper
Language: English
Published: IEEE, 01.11.2013
ISSN: 1058-6393
Description
Summary: This paper proposes parallel and distributed algorithms for solving very large-scale sparse optimization problems on computer clusters and clouds. Modern datasets usually have a large number of features or training samples, and they are usually stored in a distributed manner. Motivated by the need to solve sparse optimization problems with large datasets, we propose two approaches: (i) distributed implementations of prox-linear algorithms and (ii) GRock, a parallel greedy block coordinate descent method. Different separability properties of the objective terms in the problem enable different data distribution schemes along with their corresponding algorithm implementations. We also establish the convergence of GRock and explain why it often performs exceptionally well for sparse optimization. Numerical results on a computer cluster and Amazon EC2 demonstrate the efficiency and elasticity of our algorithms.
DOI: 10.1109/ACSSC.2013.6810364
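
The summary above names two algorithmic ingredients: prox-linear (proximal-gradient) steps and GRock's greedy block coordinate selection. Below is a minimal NumPy sketch of a GRock-style greedy update applied to a LASSO-type problem. Everything in it is an illustrative assumption for this record, not the paper's implementation: the names (`soft_threshold`, `greedy_cd_lasso`), the merit rule, and the parameters (`lam`, `n_greedy`) are hypothetical, and the paper itself also treats distributed data layouts and gives the conditions under which parallel greedy updates converge.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the shrinkage in a prox-linear step).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def greedy_cd_lasso(A, b, lam, n_iters=200, n_greedy=4):
    # Illustrative greedy block coordinate descent for
    #   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    # Each iteration computes every coordinate's candidate prox-linear
    # update but applies only the n_greedy coordinates that would move
    # the most (a GRock-style greedy selection; this sketch omits the
    # safeguards the paper uses to guarantee convergence of such
    # simultaneous updates).
    m, n = A.shape
    x = np.zeros(n)
    col_norms = (A ** 2).sum(axis=0) + 1e-12   # per-coordinate step scaling
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)               # gradient of the smooth term
        cand = soft_threshold(x - grad / col_norms, lam / col_norms)
        merit = np.abs(cand - x)               # progress each coordinate offers
        top = np.argsort(merit)[-n_greedy:]    # greedily pick the biggest movers
        x[top] = cand[top]
    return x

# Hypothetical usage on synthetic sparse-recovery data:
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400)
x_true[:5] = 1.0
b = A @ x_true
x_hat = greedy_cd_lasso(A, b, lam=0.1)
```

In a distributed run, each worker would own a block of coordinates (or a slice of the data, depending on how the objective separates) and the greedy selection would be a small reduction across workers; that choice of partitioning is the separability issue the summary refers to.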