Exploring spatial and channel contribution for object based image retrieval

Research output: Contribution to journal › Article › peer-review

23 Scopus citations

Abstract

With the rapid development of deep learning, researchers have gradually shifted their focus from hand-crafted features to deep features in content-based image retrieval (CBIR). A great deal of attention has been paid to aggregating the features extracted from the convolutional layers of a deep convolutional neural network (CNN) into a global representation vector for CBIR. In this paper, we propose a simple but effective method, called Strong-Response-Stack-Contribution (SRSC), to generate the global representation vector for object retrieval. For object retrieval, when a CNN is used to extract features, what matters most are the features within the region of interest (ROI). We therefore explore spatial and channel contribution to focus more on the ROI and make the global image representation vector more representative. SRSC first generates the spatial contribution according to the intensity of the channel responses. It then generates the channel contribution by combining sparsity information with element-value information. Finally, the global representation vector is computed from the spatial and channel contributions and used to perform image retrieval. Experiments on the Oxford and Paris buildings datasets show the effectiveness of the proposed approach.
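The pipeline described in the abstract — spatial weights from channel response intensity, channel weights from sparsity plus element values, then weighted pooling into a global vector — can be sketched as below. This is an illustrative NumPy sketch of that general weighting scheme, not the exact SRSC formulas from the paper; the specific weight definitions here (summed response for spatial weights, log-inverse activation rate times mean magnitude for channel weights) are assumptions for demonstration.

```python
import numpy as np

def weighted_global_descriptor(feat, eps=1e-12):
    """Aggregate a conv feature map of shape (C, H, W) into a global
    L2-normalized descriptor via spatial and channel weighting.
    Assumes non-negative (post-ReLU) activations."""
    C, H, W = feat.shape

    # Spatial contribution: locations where many channels respond
    # strongly are more likely to lie inside the ROI.
    s = feat.sum(axis=0)                              # (H, W)
    s = np.sqrt(s / (s.sum() + eps))

    # Channel contribution: favor sparsely firing channels (often more
    # discriminative), tempered by mean activation (element values).
    active = (feat > 0).reshape(C, -1).mean(axis=1)   # fraction nonzero
    sparsity = np.log(1.0 / (active + eps))           # rarer -> larger
    magnitude = feat.reshape(C, -1).mean(axis=1)
    c = sparsity * magnitude                          # (C,)

    # Weighted sum-pooling over space, channel reweighting, then
    # L2 normalization so descriptors compare by cosine similarity.
    v = (feat * s[None, :, :]).reshape(C, -1).sum(axis=1) * c
    return v / (np.linalg.norm(v) + eps)
```

In a retrieval setting, each database image and the query would be passed through the same CNN, aggregated with this function, and ranked by dot product between the resulting unit vectors.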

Original language: English
Article number: 104955
Journal: Knowledge-Based Systems
Volume: 186
DOIs
State: Published - 15 Dec 2019

Keywords

  • Aggregate
  • Global representation vector
  • Object retrieval
  • Spatial and channel contribution
