TY - GEN
T1 - Description-discrimination collaborative tracking
AU - Chen, Dapeng
AU - Yuan, Zejian
AU - Hua, Gang
AU - Wu, Yang
AU - Zheng, Nanning
PY - 2014
Y1 - 2014
N2 - The appearance model is one of the most important components of online visual tracking. An effective appearance model must strike the right balance between being adaptive, to account for appearance change, and being conservative, to re-acquire the object after tracking is lost (e.g., due to occlusion). Most conventional appearance models focus on only one of these two aspects and hence cannot achieve the right balance. In this paper, we approach this problem with a max-margin learning framework in which a descriptive component and a discriminative component collaborate. The two components serve different purposes and have different lifespans: one forms a robust object model, while the other distinguishes the object from the current background. Taking advantage of their complementary roles, the components improve each other and jointly contribute to a shared score function. In addition, for a real-time implementation, we propose a series of optimization and sample-management strategies. Experiments on 30 challenging videos demonstrate the effectiveness and robustness of the proposed tracker, which generally outperforms existing state-of-the-art methods.
KW - collaborative tracking
KW - Descriptive model
KW - discriminative model
KW - long-term and short-term memory
KW - structural prediction
KW - SVDD
UR - https://www.scopus.com/pages/publications/84906497573
DO - 10.1007/978-3-319-10590-1_23
M3 - Conference contribution
AN - SCOPUS:84906497573
SN - 9783319105895
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 345
EP - 360
BT - Computer Vision, ECCV 2014 - 13th European Conference, Proceedings
PB - Springer Verlag
T2 - 13th European Conference on Computer Vision, ECCV 2014
Y2 - 6 September 2014 through 12 September 2014
ER -