TY - GEN
T1 - Saliency based opportunistic search for object part extraction and labeling
AU - Wu, Yang
AU - Zhu, Qihui
AU - Shi, Jianbo
AU - Zheng, Nanning
PY - 2008
Y1 - 2008
N2 - We study the task of object part extraction and labeling, which seeks to understand objects beyond simply identifying their bounding boxes. We start from bottom-up segmentation of images and search for correspondences between object parts in a few shape models and segments in images. Segments comprising different object parts in the image are usually not equally salient due to uneven contrast, illumination conditions, clutter, occlusion and pose changes. Moreover, object parts may have different scales, and some parts are only distinctive and recognizable at a large scale. Therefore, we utilize a multi-scale shape representation of objects and their parts, figural contextual information of the whole object, and semantic contextual information for parts. Instead of searching over a large segmentation space, we present a saliency-based opportunistic search framework to explore bottom-up segmentation by gradually expanding and bounding the search domain. We tested our approach on a challenging statue face dataset and 3 human face datasets. Results show that our approach significantly outperforms Active Shape Models while using far fewer exemplars. Our framework can be applied to other object categories.
AB - We study the task of object part extraction and labeling, which seeks to understand objects beyond simply identifying their bounding boxes. We start from bottom-up segmentation of images and search for correspondences between object parts in a few shape models and segments in images. Segments comprising different object parts in the image are usually not equally salient due to uneven contrast, illumination conditions, clutter, occlusion and pose changes. Moreover, object parts may have different scales, and some parts are only distinctive and recognizable at a large scale. Therefore, we utilize a multi-scale shape representation of objects and their parts, figural contextual information of the whole object, and semantic contextual information for parts. Instead of searching over a large segmentation space, we present a saliency-based opportunistic search framework to explore bottom-up segmentation by gradually expanding and bounding the search domain. We tested our approach on a challenging statue face dataset and 3 human face datasets. Results show that our approach significantly outperforms Active Shape Models while using far fewer exemplars. Our framework can be applied to other object categories.
UR - https://www.scopus.com/pages/publications/56749130981
U2 - 10.1007/978-3-540-88693-8_56
DO - 10.1007/978-3-540-88693-8_56
M3 - Conference contribution
AN - SCOPUS:56749130981
SN - 3540886923
SN - 9783540886921
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 760
EP - 774
BT - Computer Vision - ECCV 2008 - 10th European Conference on Computer Vision, Proceedings
PB - Springer Verlag
T2 - 10th European Conference on Computer Vision, ECCV 2008
Y2 - 12 October 2008 through 18 October 2008
ER -