Proximal Methods for Self-Healing and Exact Distributed Convex Optimization (Extended Abstract)
| Published in: | 2022 58th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1–7 |
|---|---|
| Main author: | |
| Format: | Conference paper |
| Language: | English |
| Published: | IEEE, 27.09.2022 |
| Subjects: | |
| Online access: | Get full text |
| Summary: | Distributed convex optimization algorithms employ a variety of methods for achieving exact convergence to the global optimal value (modulo numerical precision): some use time-varying dynamics, some use dynamics on each edge rather than on each node of the communication graph, some use double the communication between nodes per optimization step, and some use a specific initialization that enforces the dynamics to evolve on a particular subspace. Each of these methods has its drawbacks. Using time-varying dynamics might require a global clock, and it might cause transients due to disturbances to take longer and longer to die out as time progresses. Using edge-based rather than node-based dynamics might increase the memory and computation costs, and it typically requires the communication graph to be undirected. Using double the communication per optimization step might increase the communication cost, and it might also slow down the convergence to the optimal value. Using a specific initialization to enforce a particular invariant subspace might render the algorithm unable to recover from faults or disturbances that perturb its dynamics from this subspace, resulting in convergence to the incorrect value. In this latter case we say that the algorithm is not self-healing. In our previous work [1] we considered strongly convex objective functions having Lipschitz continuous gradients, and we introduced a new first-order method for achieving exact convergence to the global optimal value. Our new algorithm has none of the above drawbacks, and in particular it is a self-healing algorithm. But it does possess a peculiar internal instability: each node has states that grow linearly in time even as its output converges to the optimal value. In the present work, we consider general non-smooth and extended-real-valued convex objective functions (which can thus incorporate hard convex constraints). We present proximal algorithms that again employ our new, internally unstable method for achieving exact convergence. |
| DOI: | 10.1109/Allerton49937.2022.9929367 |
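The summary above refers to proximal algorithms for non-smooth, extended-real-valued convex objectives. As a brief illustration of the underlying building block (the notation f, λ, v, and C below is generic and not taken from the paper), the standard proximal operator and its connection to hard convex constraints can be sketched as:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Proximal operator of a convex function f (possibly non-smooth and
% extended-real-valued), with step size \lambda > 0.  Notation here is
% generic and not taken from the paper itself.
\[
  \operatorname{prox}_{\lambda f}(v)
    \;=\; \operatorname*{arg\,min}_{x}
      \Bigl( f(x) + \tfrac{1}{2\lambda}\,\lVert x - v \rVert_2^2 \Bigr)
\]
% When f is the indicator function of a closed convex set C, i.e.
% f(x) = 0 for x in C and f(x) = +\infty otherwise, the proximal
% operator reduces to the Euclidean projection onto C; this is the
% usual sense in which hard convex constraints are folded into an
% extended-real-valued objective.
\end{document}
```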