Stochastic Scale Invariant Power Iteration for KL-divergence Nonnegative Matrix Factorization

Detailed Bibliography
Published in: IEEE International Conference on Big Data, pp. 969-977
Main Authors: Kim, Cheolmin; Kim, Youngseok; Jambunath, Yegna Subramanian; Klabjan, Diego
Format: Conference paper
Language: English
Published: IEEE, December 15, 2024
ISSN: 2573-2978
Description
Summary: We introduce a mini-batch stochastic variance-reduced algorithm to solve finite-sum scale invariant problems, which cover several examples in machine learning and statistics such as principal component analysis (PCA) and the estimation of mixture proportions. The algorithm is a stochastic generalization of scale invariant power iteration, specializing to power iteration when a full batch is used for the PCA problem. In the convergence analysis, we show that the expected optimality gap decreases at a linear rate under conditions on the step size, epoch length, batch size, and initial iterate. Numerical experiments on the nonnegative matrix factorization problem with the Kullback-Leibler divergence, using real and synthetic datasets, demonstrate that the proposed stochastic approach not only converges faster than state-of-the-art deterministic algorithms but also produces robust, high-quality solutions.
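As context for the abstract: in the full-batch PCA setting the method reduces to classical power iteration, since for f(x) = x^T A x / 2 the scale invariant update x <- grad f(x) / ||grad f(x)|| is exactly the power iteration step. The sketch below illustrates only that deterministic special case; it is not the authors' implementation, and the function name and parameters are illustrative.

```python
import numpy as np

def power_iteration(A, num_iters=100, seed=0):
    """Classical power iteration for the leading eigenvector of a
    symmetric PSD matrix A (e.g., a sample covariance for PCA).
    This is the full-batch special case the abstract refers to."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(num_iters):
        # For f(x) = x^T A x / 2 the gradient is A x, so the scale
        # invariant update x <- grad f(x) / ||grad f(x)|| reduces to
        # the familiar power iteration step.
        x = A @ x
        x /= np.linalg.norm(x)
    return x

# Usage: leading principal component of a random sample covariance.
rng = np.random.default_rng(1)
Z = rng.standard_normal((200, 10))
cov = Z.T @ Z / Z.shape[0]
v = power_iteration(cov)
print(v @ cov @ v)  # Rayleigh quotient approximates the top eigenvalue
```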
DOI: 10.1109/BigData62323.2024.10825312