Jacobi-Style Iteration for Distributed Submodular Maximization

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, Vol. 67, No. 9, pp. 4687-4702
Main Authors: Du, Bin; Qian, Kun; Claudel, Christian; Sun, Dengfeng
Format: Journal Article
Language: English
Published: New York: IEEE, 01.09.2022
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN:0018-9286, 1558-2523
Online Access: Full text
Description
Abstract: This article presents a novel Jacobi-style iteration algorithm for solving the problem of distributed submodular maximization, in which multiple agents determine their strategies from their private sets so that a global, nonseparable submodular objective function is jointly maximized. Building on the multilinear extension of the submodular function, we approach the solution from a probabilistic, rather than deterministic, perspective, and thus transfer the considered problem from the discrete domain into the continuous domain. Since an unbiased estimate of the gradient of the multilinear extension can be obtained by sampling the agents' local strategies, a projected stochastic gradient algorithm is proposed to solve the problem. Our algorithm enables simultaneous updates among all individual agents and is guaranteed to converge asymptotically to a desirable equilibrium solution. Such an equilibrium solution is shown to be at least $1/2$-suboptimal, which is comparable to the state of the art in the literature. The convergence rate, characterized by the running average of the gradient mapping, is proved to be $\mathcal{O}(1/T)$, where $T$ is the number of iterations. Moreover, we further enhance the proposed algorithm to handle the scenario in which agents' communication delays are present; the enhanced algorithm admits a more realistic distributed implementation of our approach. Finally, a movie recommendation task is conducted on a real-world movie rating dataset to validate the numerical performance of our algorithms.
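The core mechanism the abstract describes, namely estimating the gradient of the multilinear extension $F(x) = \mathbb{E}[f(R(x))]$ by sampling (where $R(x)$ includes item $i$ with probability $x_i$, so $\partial F/\partial x_i = \mathbb{E}[f(R \cup \{i\}) - f(R \setminus \{i\})]$), then taking projected stochastic gradient steps, can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the toy coverage objective, the box projection onto $[0,1]^n$ (standing in for the paper's constraint-set projection), and all parameter values are assumptions made for illustration.

```python
import random

def coverage(items, universe_sets):
    """Toy monotone submodular function: size of the union of the chosen sets.
    (Illustrative stand-in for the paper's objective.)"""
    covered = set()
    for i in items:
        covered |= universe_sets[i]
    return len(covered)

def grad_estimate(x, f, n, samples=200, rng=random):
    """Unbiased sampling estimate of the gradient of the multilinear extension
    F(x) = E[f(R(x))], where R(x) contains item i with probability x[i].
    Uses the identity dF/dx_i = E[f(R + {i}) - f(R - {i})]."""
    g = [0.0] * n
    for _ in range(samples):
        R = {i for i in range(n) if rng.random() < x[i]}
        for i in range(n):
            g[i] += f(R | {i}) - f(R - {i})
    return [gi / samples for gi in g]

def projected_sga(f, n, steps=50, eta=0.1, rng=random):
    """Projected stochastic gradient ascent with a box projection onto [0,1]^n
    (a simplification of the constraint-set projection in the paper)."""
    x = [0.5] * n  # start from the center of the cube
    for _ in range(steps):
        g = grad_estimate(x, f, n, rng=rng)
        # gradient step followed by clipping back into [0,1]^n
        x = [min(1.0, max(0.0, xi + eta * gi)) for xi, gi in zip(x, g)]
    return x
```

For a monotone objective like the toy coverage function, every marginal gain is nonnegative, so with an unconstrained box the iterates drift toward the all-ones vector; the interesting behavior in the paper arises from the actual constraint set, the distributed simultaneous (Jacobi-style) updates, and the delay-tolerant variant, none of which this sketch models.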
DOI:10.1109/TAC.2022.3180696