Detecting occlusion boundaries via saliency network

Research output: Conference contribution (chapter in book/report/conference proceeding), peer-reviewed

3 Scopus citations

Abstract

In this paper, we address the problem of detecting occlusion boundaries from video sequences. We build a bi-directed graph whose nodes are line fragments extracted from superpixels' edges. Based on this graph, we compute a global occlusion saliency map by integrating motion, shape, and topology cues into the Saliency Network framework. Furthermore, using the structural information generated by the network, we propose a structural-consistency property to prune the graph and refine the saliency map. Finally, we train a classifier that detects occlusion fragments by combining the global saliency value with local edge strength. The detector outperforms the state of the art on the benchmark of Stein and Hebert [8], improving average precision to 0.80.
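The abstract's saliency-propagation step follows the general Saliency Network scheme, in which each graph element repeatedly combines its local cue strength with an attenuated contribution from its best-coupled neighbor. A minimal illustrative sketch of such an update is below; the function name, the single-cue `local_score`, and the fixed coupling weights are assumptions for illustration, not the paper's actual implementation (which integrates motion, shape, and topology cues).

```python
# Sketch of a Saliency-Network-style iterative update over a directed
# graph of edge fragments. All names and weights here are illustrative,
# not the authors' implementation.

def saliency_network(local_score, succ, coupling, rho=0.7, n_iter=50):
    """Iteratively propagate saliency along a directed fragment graph.

    local_score[i]  : local cue strength of fragment i (e.g. an edge cue)
    succ[i]         : list of successor fragment indices of fragment i
    coupling[(i,j)] : continuity weight between fragments i and j, in [0, 1]
    rho             : attenuation factor for propagated saliency
    """
    s = list(local_score)
    for _ in range(n_iter):
        new_s = []
        for i, sigma in enumerate(local_score):
            # Each fragment keeps its local score plus the best
            # attenuated contribution from one successor.
            best = max((coupling[(i, j)] * s[j] for j in succ[i]), default=0.0)
            new_s.append(sigma + rho * best)
        s = new_s
    return s

# Toy graph: a short chain of three fragments, 0 -> 1 -> 2.
local = [0.2, 0.5, 0.3]
succ = {0: [1], 1: [2], 2: []}
coupling = {(0, 1): 0.9, (1, 2): 0.8}
scores = saliency_network(local, succ, coupling)
```

On a chain like this the update converges in a few iterations, and fragments lying on long, well-coupled chains accumulate higher saliency than isolated ones; the paper's refinement and pruning steps would then operate on such a map.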

Original language: English
Title of host publication: ICPR 2012 - 21st International Conference on Pattern Recognition
Pages: 2569-2572
Number of pages: 4
State: Published - 2012
Event: 21st International Conference on Pattern Recognition, ICPR 2012 - Tsukuba, Japan
Duration: 11 Nov 2012 – 15 Nov 2012

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (Print): 1051-4651

Conference

Conference: 21st International Conference on Pattern Recognition, ICPR 2012
Country/Territory: Japan
City: Tsukuba
Period: 11/11/12 – 15/11/12
