L1-Norm Low-Rank Matrix Factorization by Variational Bayesian Method

Research output: Contribution to journal › Article › peer-review

97 Scopus citations

Abstract

The L1-norm low-rank matrix factorization (LRMF) has been attracting much attention due to its wide applications in computer vision and pattern recognition. In this paper, we construct a new hierarchical Bayesian generative model for the L1-norm LRMF problem and design a mean-field variational method that automatically infers all the parameters involved in the model through closed-form update equations. The variational Bayesian inference in the proposed method can be understood as solving a weighted LRMF problem, with different weights on matrix elements according to their significance and with L2-regularization penalties on the parameters. Throughout the inference process, the weights imposed on the matrix elements are adaptively fitted so that the adverse influence of noise and outliers embedded in the data is largely suppressed, and the parameters are appropriately regularized so that the generalization capability of the solution is statistically guaranteed. The robustness and efficiency of the proposed method are substantiated by a series of experiments on synthetic and real data, in comparison with state-of-the-art L1-norm LRMF methods. In particular, owing to the intrinsic generalization capability of the Bayesian methodology, our method consistently predicts the unobserved ground-truth data better than existing methods.
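The weighted-LRMF interpretation described above can be illustrated with a minimal sketch. The code below is not the paper's variational Bayesian algorithm; it is a hypothetical iteratively-reweighted scheme under the same interpretation, where the L1 loss is approximated by adaptive element-wise weights (large residuals, i.e. likely outliers, get small weights) and the factors carry L2 (ridge) penalties. All function and parameter names (`weighted_lrmf`, `lam`, `eps`) are assumptions for illustration.

```python
import numpy as np

def weighted_lrmf(Y, rank=2, lam=0.1, n_iter=50, eps=1e-3, seed=0):
    """Hypothetical IRLS-style sketch of weighted LRMF, Y ~ U @ V.T.

    The L1 loss is approximated by adaptive weights
    w_ij = 1 / (|residual_ij| + eps), and both factors are
    L2-regularized with strength `lam`. This mimics the weighted-LRMF
    view of the paper's inference, not its closed-form VB updates.
    """
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    W = np.ones_like(Y)                 # element-wise weights
    I = lam * np.eye(rank)              # ridge penalty term
    for _ in range(n_iter):
        # Row-wise weighted ridge regressions for U, then V.
        for i in range(m):
            Wi = np.diag(W[i])
            U[i] = np.linalg.solve(V.T @ Wi @ V + I, V.T @ Wi @ Y[i])
        for j in range(n):
            Wj = np.diag(W[:, j])
            V[j] = np.linalg.solve(U.T @ Wj @ U + I, U.T @ Wj @ Y[:, j])
        # Adaptively refit weights: entries with large residuals
        # (outlier candidates) are down-weighted in the next pass.
        W = 1.0 / (np.abs(Y - U @ V.T) + eps)
    return U, V

# Usage: recover a rank-2 matrix corrupted by sparse, large outliers.
rng = np.random.default_rng(1)
Y_clean = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
Y_noisy = Y_clean.copy()
Y_noisy[rng.random(Y_clean.shape) < 0.05] += 10.0  # sparse outliers
U, V = weighted_lrmf(Y_noisy, rank=2)
err = np.abs(Y_clean - U @ V.T).mean()
```

The adaptive reweighting is what makes the scheme robust: after a few passes the outlier entries contribute almost nothing to the factor updates, so the recovered `U @ V.T` tracks the clean low-rank structure rather than the corruptions.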

Original language: English
Article number: 7010972
Pages (from-to): 825-839
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 26
Issue number: 4
DOI:
State: Published - 1 Apr 2015

Keywords

  • Background subtraction
  • face reconstruction
  • low-rank matrix factorization (LRMF)
  • outlier detection
  • robustness
  • variational inference

