Abstract
This paper combines linear sparse coding and non-negative matrix factorization into sparse non-negative matrix factorization. In contrast to non-negative matrix factorization, the new model learns much sparser representations by imposing sparseness constraints explicitly; in contrast to the closely related non-negative sparse coding, it learns parts-based representations via fully multiplicative updates, because it adopts a generalized Kullback-Leibler divergence instead of the conventional mean squared error as the approximation-error measure. Experiments on the MIT-CBCL face training data demonstrate the effectiveness of the proposed method.
| Original language | English |
|---|---|
| Pages (from-to) | 293-296 |
| Number of pages | 4 |
| Journal | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
| Volume | 3 |
| State | Published - 2003 |
| Event | 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing - Hong Kong, Hong Kong Duration: 6 Apr 2003 → 10 Apr 2003 |
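The abstract's key ingredients — multiplicative updates minimizing a generalized Kullback-Leibler divergence, plus an explicit sparseness penalty — can be sketched as follows. This is a minimal illustration based on the standard Lee–Seung KL-divergence multiplicative updates with a simple L1 penalty on the encoding matrix `H` added as an assumption; it is not the paper's exact update rule, and the function name `sparse_nmf_kl` and the `sparsity` parameter are hypothetical.

```python
import numpy as np

def sparse_nmf_kl(V, rank, n_iter=200, sparsity=0.0, eps=1e-9, seed=0):
    """Approximate V >= 0 as W @ H with W, H >= 0 via multiplicative updates
    that decrease the generalized KL divergence D(V || WH).

    `sparsity` adds an L1 penalty on H — an illustrative assumption,
    not necessarily the constraint used in the paper.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        # H update: H <- H * (W^T (V / WH)) / (W^T 1 + sparsity)
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ ones + sparsity + eps)
        # W update: W <- W * ((V / WH) H^T) / (1 H^T)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (ones @ H.T + eps)
        # Rescale so columns of W sum to 1; keeps W @ H unchanged
        # and makes the L1 penalty on H meaningful.
        norms = W.sum(axis=0) + eps
        W /= norms
        H *= norms[:, None]
    return W, H

def kl_div(V, WH, eps=1e-9):
    """Generalized KL divergence D(V || WH)."""
    return float(np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH))
```

Because all updates are multiplicative with non-negative factors, non-negativity is preserved automatically, which is what makes fully multiplicative updates attractive for parts-based learning.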