A Unified Bregman Alternating Minimization Algorithm for Generalized DC Programs with Application to Imaging


Bibliographic Details
Published in: Journal of Scientific Computing, Vol. 101, No. 3, p. 76
Main Authors: He, Hongjin; Zhang, Zhiyuan
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.12.2024
ISSN: 0885-7474, 1573-7691
Description
Summary: In this paper, we consider a class of nonconvex (not necessarily differentiable) optimization problems called generalized difference-of-convex (DC) programs, which minimize the sum of two separable DC parts and one two-block-variable coupling function. To circumvent the nonconvexity and nonseparability of the problem under consideration, we introduce a unified Bregman alternating minimization algorithm that maximally exploits the favorable DC structure of the objective. Specifically, we first follow the spirit of alternating minimization and update each block variable in sequential order, which efficiently tackles the nonseparability caused by the coupling function. Then, we employ the Fenchel–Young inequality to approximate the second DC components (i.e., the concave parts) so that each subproblem reduces to a convex optimization problem, thereby alleviating the computational burden of the nonconvex DC parts. Moreover, each subproblem absorbs a Bregman proximal regularization term, which in many cases yields closed-form subproblem solutions when appropriate Bregman kernel functions are chosen. Remarkably, our algorithm not only provides an algorithmic framework for understanding the iterative schemes of some recently proposed algorithms, but also admits implementable schemes with easier subproblems than some state-of-the-art first-order algorithms developed for generic nonconvex and nonsmooth optimization problems. Theoretically, we prove that the sequence generated by our algorithm globally converges to a critical point under the Kurdyka–Łojasiewicz (KŁ) condition. In addition, we estimate the local convergence rates of the algorithm when the KŁ exponent is known a priori. A series of numerical experiments on imaging data demonstrates the reliability and efficiency of the proposed algorithmic framework.
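The iteration described in the abstract can be illustrated on a small toy instance. The sketch below is not the authors' code; it assumes a hypothetical two-block DC program F(x, y) = (x² − |x|) + (y² − |y|) + c·x·y, where g_i(·) = (·)² and h_i(·) = |·| form the DC components, H(x, y) = c·x·y is the coupling term, and the Bregman kernel is the Euclidean one (φ = ½‖·‖²), so the Bregman regularization reduces to a standard proximal term. With the concave parts linearized via Fenchel–Young, each block subproblem is a convex quadratic with a closed-form solution:

```python
def sign(t):
    """A subgradient of |.| at t."""
    return 1.0 if t > 0 else (-1.0 if t < 0 else 0.0)

def bregman_am(c=0.5, beta=1.0, x=1.0, y=-1.0, iters=300):
    """Toy Bregman alternating minimization on
    F(x, y) = (x^2 - |x|) + (y^2 - |y|) + c*x*y  with Euclidean kernel."""
    for _ in range(iters):
        # Linearize the concave part -h1 at the current x (u in dh1(x)),
        # then solve the convex quadratic x-subproblem in closed form:
        #   min_x  x^2 + c*x*y - u*x + (beta/2)*(x - x_old)^2
        u = sign(x)
        x = (u - c * y + beta * x) / (2.0 + beta)
        # Same for the y-block, using the freshly updated x (sequential order)
        v = sign(y)
        y = (v - c * x + beta * y) / (2.0 + beta)
    return x, y

x, y = bregman_am()
F = (x**2 - abs(x)) + (y**2 - abs(y)) + 0.5 * x * y
print(x, y, F)  # iterates converge to the critical point (2/3, -2/3), F = -2/3
```

For this instance the first-order conditions 2x + c·y = sign(x) and 2y + c·x = sign(y) are satisfied at (2/3, −2/3), and the scheme contracts toward it; in general, the abstract's global convergence guarantee rests on the Kurdyka–Łojasiewicz condition rather than on such a contraction argument.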
DOI: 10.1007/s10915-024-02715-x