Learning Latent Features With Infinite Nonnegative Binary Matrix Trifactorization

Bibliographic Details
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence, Vol. 2, No. 6, pp. 450-463
Main Authors: Yang, Xi; Huang, Kaizhu; Zhang, Rui; Hussain, Amir
Format: Journal Article
Language: English
Published: Piscataway, NJ: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2018
ISSN: 2471-285X
Description
Summary: Nonnegative matrix factorization (NMF) has been widely exploited in many computational intelligence and pattern recognition problems. In particular, it can be used to extract latent features from data. However, previous NMF models often assume a fixed number of features, which is normally tuned through trial and error. Learning binary features is also difficult, since the binary matrix poses a more challenging optimization problem. In this paper, we propose a new Bayesian model, termed the infinite nonnegative binary matrix trifactorization (iNBMT) model, which can automatically learn both the latent binary features and the number of features, based on the Indian buffet process (IBP). It exploits a trifactorization process that decomposes the nonnegative matrix into a product of three components: two binary matrices and a nonnegative real matrix. In contrast to traditional bifactorization, trifactorization can better reveal latent structures among both samples and features. Specifically, an IBP prior is imposed on each of the two infinite binary matrices, while a truncated Gaussian distribution is assumed on the weight matrix. To optimize the model, we develop a modified variational-Bayesian algorithm, whose iteration complexity is one order lower than that of the recently proposed maximization-expectation-IBP model [1] and the correlated IBP-IBP model [2]. A series of simulation experiments are carried out, both qualitatively and quantitatively, on benchmark feature extraction, reconstruction, and clustering tasks. Comparative results show that the proposed iNBMT model significantly outperforms state-of-the-art algorithms on a range of synthetic and real-world data. The new Bayesian model can thus serve as a benchmark technique for the computational intelligence research community.
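Although this record carries only the abstract, the generative structure it describes, two IBP-distributed binary matrices around a nonnegative weight matrix, is concrete enough to sketch. The NumPy snippet below is a minimal illustration under stated assumptions, not the authors' implementation: sample_ibp, the dimensions N and D, and alpha are all hypothetical names and values, and a half-normal draw stands in for the paper's truncated Gaussian prior on the weight matrix W.

import numpy as np

def sample_ibp(n, alpha, rng):
    # Indian buffet process draw: row i takes each existing feature k
    # with probability m_k / (i + 1), where m_k counts earlier rows
    # using feature k, then activates Poisson(alpha / (i + 1)) new
    # features. The feature count is generated, not fixed in advance.
    Z = np.zeros((n, 0), dtype=int)
    for i in range(n):
        counts = Z[:i].sum(axis=0)
        Z[i] = rng.random(Z.shape[1]) < counts / (i + 1)
        k_new = rng.poisson(alpha / (i + 1))
        if k_new > 0:
            Z = np.hstack([Z, np.zeros((n, k_new), dtype=int)])
            Z[i, -k_new:] = 1
    return Z

rng = np.random.default_rng(0)
N, D = 100, 50                            # hypothetical data dimensions

Z1 = sample_ibp(N, alpha=2.0, rng=rng)    # binary sample-feature matrix
Z2 = sample_ibp(D, alpha=2.0, rng=rng)    # binary dimension-feature matrix

# Nonnegative weight matrix; the paper assumes a truncated Gaussian,
# approximated here by absolute values of standard normal draws.
W = np.abs(rng.normal(size=(Z1.shape[1], Z2.shape[1])))

# Trifactorization: the data matrix is modeled as Z1 @ W @ Z2.T
# (observation noise omitted).
X = Z1 @ W @ Z2.T
print(X.shape, X.min() >= 0)

The draw illustrates the point of the "infinite" prefix: Z1 and Z2 end up with however many columns the IBP generates, so the number of latent features is a quantity inferred from data rather than a tuning parameter.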
DOI: 10.1109/TETCI.2018.2806934