An integrated model for Bayesian learning of sparse representation and classifier training

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We present an integrated model for Bayesian learning of a sparse representation and a classifier, and apply it to the task of visual recognition. Most previous work learns the sparse representation and trains the classifier on top of it in two separate steps. We cast these two steps into a unified probabilistic model, so that the supervised labels can effectively influence the learning of the sparse representation. In the training phase, the joint posterior expectation over the dictionary, sparse codes, classifier, and other variables, given the observed descriptors and labels, is inferred by Gibbs sampling. In the testing phase, the sparse code and class label of an image are obtained by Bayesian inference using the learned parameters. The proposed model is evaluated on the Caltech 101 dataset, and its efficacy is demonstrated by a careful analysis of the experimental results.
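The abstract does not spell out the model's conditional distributions, but the training scheme it describes, Gibbs sampling that alternates between the codes, the dictionary, and the classifier, can be illustrated with a toy stand-in. The sketch below is a hypothetical, fully Gaussian simplification (a Gaussian prior in place of a sparsity prior, and scalar regression in place of classification, so every conditional is Gaussian in closed form); the noise scales `sigma_x`, `sigma_y`, `tau` are assumed, not taken from the paper. Its point is to show how the labels enter the conditional for the codes, which is the mechanism by which supervision shapes the learned representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N descriptors of dimension P, with scalar labels y
# (regression stands in for classification to keep conditionals Gaussian).
N, P, K = 50, 8, 4                      # samples, descriptor dim, dictionary size
X = rng.normal(size=(P, N))             # descriptors, one per column
y = rng.normal(size=N)                  # labels

sigma_x, sigma_y, tau = 0.5, 0.5, 1.0   # assumed noise / prior scales

# Model (illustrative): x_n ~ N(D s_n, sigma_x^2 I),  y_n ~ N(w^T s_n, sigma_y^2),
# with Gaussian priors on the codes s_n, dictionary rows, and classifier w.
D = rng.normal(size=(P, K))
S = rng.normal(size=(K, N))
w = rng.normal(size=K)

def sample_gaussian(prec, lin):
    """Draw one sample from N(prec^{-1} lin, prec^{-1})."""
    cov = np.linalg.inv(prec)
    return rng.multivariate_normal(cov @ lin, cov)

for it in range(100):
    # 1) Codes s_n | D, w, x_n, y_n. The label term w*y_n appears here:
    #    this is how supervision influences the representation.
    prec_s = D.T @ D / sigma_x**2 + np.outer(w, w) / sigma_y**2 + np.eye(K) / tau
    for n in range(N):
        lin = D.T @ X[:, n] / sigma_x**2 + w * y[n] / sigma_y**2
        S[:, n] = sample_gaussian(prec_s, lin)
    # 2) Dictionary rows d_p | S, x_p (independent Gaussian conditionals).
    prec_d = S @ S.T / sigma_x**2 + np.eye(K)
    for p in range(P):
        D[p] = sample_gaussian(prec_d, S @ X[p] / sigma_x**2)
    # 3) Classifier w | S, y (a Bayesian regression of y on the codes).
    prec_w = S @ S.T / sigma_y**2 + np.eye(K)
    w = sample_gaussian(prec_w, S @ y / sigma_y**2)

recon_err = np.linalg.norm(X - D @ S) / np.linalg.norm(X)
print("relative reconstruction error:", recon_err)
```

In the actual model a sparsity-inducing prior on the codes and a discriminative likelihood for the labels would replace the Gaussian terms, changing the conditionals but not the overall alternating structure of the sampler.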

Original language: English
Title of host publication: Advances in Multimedia Information Processing, PCM 2013 - 14th Pacific-Rim Conference on Multimedia, Proceedings
Publisher: Springer Verlag
Pages: 620-628
Number of pages: 9
ISBN (Print): 9783319037301
DOIs
State: Published - 2013
Event: 14th Pacific-Rim Conference on Multimedia, PCM 2013 - Nanjing, China
Duration: 13 Dec 2013 - 16 Dec 2013

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 8294 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 14th Pacific-Rim Conference on Multimedia, PCM 2013
Country/Territory: China
City: Nanjing
Period: 13/12/13 - 16/12/13
