Stochastic Scale Invariant Power Iteration for KL-divergence Nonnegative Matrix Factorization

Bibliographic Details
Published in: IEEE International Conference on Big Data, pp. 969-977
Main Authors: Kim, Cheolmin, Kim, Youngseok, Jambunath, Yegna Subramanian, Klabjan, Diego
Format: Conference Proceeding
Language: English
Published: IEEE, 15.12.2024
ISSN: 2573-2978
Description
Summary: We introduce a mini-batch stochastic variance-reduced algorithm to solve finite-sum scale invariant problems, which cover several examples in machine learning and statistics such as principal component analysis (PCA) and estimation of mixture proportions. The algorithm is a stochastic generalization of scale invariant power iteration, specializing to power iteration when a full batch is used for the PCA problem. In the convergence analysis, we show that the expectation of the optimality gap decreases at a linear rate under some conditions on the step size, epoch length, batch size, and initial iterate. Numerical experiments on the non-negative matrix factorization problem with the Kullback-Leibler divergence, using real and synthetic datasets, demonstrate that the proposed stochastic approach not only converges faster than state-of-the-art deterministic algorithms but also produces robust solutions of excellent quality.
DOI: 10.1109/BigData62323.2024.10825312
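The abstract notes that the method specializes to power iteration on the PCA problem when the full batch is used. The following minimal Python sketch illustrates that idea with an SVRG-style mini-batch power iteration for the leading eigenvector of a sample covariance; the function name, parameters, and update rule are illustrative assumptions based on the abstract, not the paper's exact algorithm.

```python
import numpy as np

# Minimal sketch (assumption): SVRG-style mini-batch power iteration for the
# leading eigenvector of the sample covariance A = (1/n) X^T X. It illustrates
# the variance-reduction idea from the abstract, not the paper's exact method.
def svr_power_iteration(X, n_epochs=20, epoch_len=50, batch_size=32, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for _ in range(n_epochs):
        x_ref = x.copy()
        # Full-batch anchor gradient: A x_ref, recomputed once per epoch.
        full_grad = X.T @ (X @ x_ref) / n
        for _ in range(epoch_len):
            idx = rng.integers(0, n, size=batch_size)
            Xb = X[idx]
            # Variance-reduced estimate of A x: A_B x - A_B x_ref + A x_ref.
            v = (Xb.T @ (Xb @ x) - Xb.T @ (Xb @ x_ref)) / batch_size + full_grad
            # Normalization step; with a full batch this is power iteration.
            x = v / np.linalg.norm(v)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 20))
    x = svr_power_iteration(X)
    # Compare against the exact leading eigenvector of the covariance.
    _, V = np.linalg.eigh(X.T @ X / 500)
    print(abs(V[:, -1] @ x))  # near 1.0 on convergence
```

Recomputing the anchor gradient once per epoch while updating with cheap mini-batch corrections is what lets the stochastic iterates keep the linear-rate behavior the abstract claims, at a fraction of the per-step cost of the full-batch method.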