Adaptive Co-Weighting Deep Convolutional Features for Object Retrieval

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

Aggregating deep convolutional features into a global image vector has attracted sustained attention in image retrieval. In this paper, we propose an efficient unsupervised aggregation method that uses an adaptive Gaussian filter and an element-value-sensitive vector to co-weight deep features. Specifically, the Gaussian filter assigns large weights to features in the region of interest (RoI) by adaptively determining the RoI's center, while the element-value-sensitive channel vector suppresses the burstiness phenomenon by assigning small weights to feature maps whose values sum to large totals over all locations. Experimental results on benchmark datasets validate that both proposed weighting schemes effectively improve the discriminative power of image vectors. Furthermore, under the same experimental setting, our method outperforms other very recent aggregation approaches by a considerable margin.
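To make the co-weighting idea concrete, here is a minimal sketch in NumPy for a feature tensor of shape (C, H, W). The choice of the RoI center as the peak of the channel-sum map, the bandwidth parameter `sigma_scale`, and the inverse-sum channel weighting are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def co_weighted_aggregate(features, sigma_scale=0.3, eps=1e-8):
    """Aggregate a CNN feature tensor (C, H, W) into a global image vector.

    Sketch of the co-weighting idea from the abstract: spatial weights
    come from a Gaussian centered on a salient region, and channel
    weights shrink feature maps whose activations sum to large values
    (burstiness suppression). Center, bandwidth, and channel-weight
    choices here are assumptions, not the paper's exact scheme.
    """
    C, H, W = features.shape

    # Adaptive RoI center (assumption): peak of the channel-sum map.
    saliency = features.sum(axis=0)                      # (H, W)
    cy, cx = np.unravel_index(saliency.argmax(), saliency.shape)

    # Gaussian spatial weights centered on (cy, cx).
    ys, xs = np.mgrid[0:H, 0:W]
    sigma = sigma_scale * max(H, W)
    spatial_w = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

    # Channel weights (assumption): small for maps with large total activation.
    channel_sums = features.reshape(C, -1).sum(axis=1)   # (C,)
    channel_w = 1.0 / (channel_sums + eps)

    # Co-weighted sum-pooling over locations, then L2 normalization.
    pooled = (features * spatial_w[None]).reshape(C, -1).sum(axis=1)
    vec = channel_w * pooled
    return vec / (np.linalg.norm(vec) + eps)
```

The resulting L2-normalized vector can be compared across images with a dot product, the standard setup for nearest-neighbor retrieval with global descriptors.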

Original language: English
Title of host publication: 2018 IEEE International Conference on Multimedia and Expo, ICME 2018
Publisher: IEEE Computer Society
ISBN (Electronic): 9781538617373
DOIs
State: Published - 8 Oct 2018
Event: 2018 IEEE International Conference on Multimedia and Expo, ICME 2018 - San Diego, United States
Duration: 23 Jul 2018 - 27 Jul 2018

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
Volume: 2018-July
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X

Conference

Conference: 2018 IEEE International Conference on Multimedia and Expo, ICME 2018
Country/Territory: United States
City: San Diego
Period: 23/07/18 - 27/07/18

Keywords

  • Gaussian filter
  • Object retrieval
  • aggregation
  • channel weighting vector
  • convolutional features
