On Distributed Nonconvex Optimization: Projected Subgradient Method for Weakly Convex Problems in Networks

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, Vol. 67, No. 2, pp. 662–675
Main Authors: Chen, Shixiang, Garcia, Alfredo, Shahrampour, Shahin
Format: Journal Article
Language:English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2022
ISSN: 0018-9286, 1558-2523
Description
Summary: The stochastic subgradient method is a widely used algorithm for solving large-scale optimization problems arising in machine learning. Often, these problems are neither smooth nor convex. Recently, Davis et al., 2018 characterized the convergence of the stochastic subgradient method for the weakly convex case, which encompasses many important applications (e.g., robust phase retrieval, blind deconvolution, biconvex compressive sensing, and dictionary learning). In practice, distributed implementations of the projected stochastic subgradient method (stoDPSM) are used to speed up risk minimization. In this article, we propose a distributed implementation of the stochastic subgradient method with a theoretical guarantee. Specifically, we show the global convergence of stoDPSM using the Moreau envelope stationarity measure. Furthermore, under a so-called sharpness condition, we show that deterministic DPSM (with a proper initialization) converges linearly to the sharp minima, using geometrically diminishing step size. We provide numerical experiments to support our theoretical analysis.
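The core update the abstract describes can be illustrated with a minimal single-node sketch (not the distributed stoDPSM of the paper): a projected stochastic subgradient step with a geometrically diminishing step size, applied to a toy robust regression problem. The objective, ball constraint, and step-size constants below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(x) = mean_i |a_i^T x - b_i| (robust regression).
# With noiseless data this problem is "sharp": f grows linearly away
# from the minimizer, the regime where geometric step decay helps.
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

def subgradient(x, i):
    """A subgradient of |a_i^T x - b_i| at x."""
    r = A[i] @ x - b[i]
    return np.sign(r) * A[i]

def project_ball(x, radius=10.0):
    """Euclidean projection onto the constraint set X = {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

# Projected stochastic subgradient iteration with geometric step decay:
#   x_{k+1} = proj_X(x_k - gamma_0 * q^k * g_k),   g_k a stochastic subgradient.
x0 = rng.standard_normal(d)
x = x0.copy()
gamma0, q = 1.0, 0.995
for k in range(2000):
    i = rng.integers(n)
    x = project_ball(x - gamma0 * q**k * subgradient(x, i))

print(np.linalg.norm(x - x_true))  # distance to the minimizer after 2000 steps
```

The geometric schedule `gamma0 * q**k` is summable, so the iterates settle rather than oscillate; on sharp problems this is what permits the linear convergence rate discussed in the abstract, whereas a constant step would only reach a neighborhood of the minimizer.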
DOI: 10.1109/TAC.2021.3056535