Learning sparse features for classification by mixture models

Research output: Contribution to journal › Article › peer-review

21 Scopus citations

Abstract

Non-negative matrix factorization (NMF) can discover sparse features for classification via mixture models, and the sparseness of the features controls the learning rate of the basis-function parameters. However, the original NMF, in which the basis vectors have unit L1 norm, does not increase the sparseness of the learned features. This paper generalizes NMF to Lp-NMF, in which the basis vectors have unit Lp norm. Experiments demonstrate how p affects the sparseness of the learned features and the final classification accuracy, and the results show that L2-NMF is the superior choice for practical implementation.
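The paper's details are not reproduced in this record, but the idea of constraining NMF basis vectors to unit Lp norm can be sketched as follows. This is a hypothetical illustration, not the authors' algorithm: it uses the standard Lee–Seung multiplicative updates and, after each iteration, rescales every basis column to unit Lp norm while moving the scale into the coefficient matrix so the product W @ H is unchanged. The function name `lp_nmf` and all parameter choices are assumptions for the sketch.

```python
import numpy as np

def lp_nmf(V, r, p=2.0, n_iter=200, seed=0):
    """Sketch: NMF whose basis vectors (columns of W) have unit Lp norm.

    Standard multiplicative updates for V ~= W @ H, followed by an
    Lp renormalization of W that is compensated in H. Illustrative
    only; not the update rule from the paper itself.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        # Multiplicative updates (minimize squared Frobenius error)
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # Rescale each basis column to unit Lp norm; absorb scale into H
        # so that W @ H is unchanged by the renormalization.
        scale = np.power(np.sum(np.power(W, p), axis=0), 1.0 / p)
        W /= scale[np.newaxis, :] + eps
        H *= scale[:, np.newaxis]
    return W, H

# Example: factor a random non-negative matrix with p = 2 (L2-NMF)
V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = lp_nmf(V, r=5, p=2.0)
```

Setting p = 1 recovers the original NMF convention of L1-unit basis vectors; the abstract's experiments vary p and report p = 2 as the practical choice.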

Original language: English
Pages (from-to): 155-161
Number of pages: 7
Journal: Pattern Recognition Letters
Volume: 25
Issue number: 2
DOIs
State: Published - 19 Jan 2004

Keywords

  • Classification
  • Lp norm
  • Mixture models
  • Non-negative matrix factorization
  • Sparse features
