TY - GEN
T1 - Transfer classification for distinct manifestations with shared information
AU - Qi, Lu
AU - Yin, Peijie
AU - Huang, Xiayuan
AU - Chen, Ken
AU - Qiao, Hong
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/9/27
Y1 - 2016/9/27
N2 - An object often has many distinct manifestations in computer vision, which makes it challenging to exploit more comprehensive information. Inspired by biological research on edge sensitivity and the priority of global structure, our key insight is to establish a unified transfer classification network with shared contour information. Combining two convolutional networks with three cascaded filters, we build a unified kernel SVM classifier based on shared contour features. The two convolutional networks accurately acquire object contour information, and the shared edge features obtained by the three cascaded filters are fed to the unified kernel SVM classifier. Our transfer classification network (TCN) is trained and tested on distinct manifestations, including real photos (the ImageNet or CIFAR-10 datasets) and cartoon abstractions. The model extracts robust contour features and achieves considerable transfer recognition accuracy (a 40% relative improvement over some popular convolutional models).
AB - An object often has many distinct manifestations in computer vision, which makes it challenging to exploit more comprehensive information. Inspired by biological research on edge sensitivity and the priority of global structure, our key insight is to establish a unified transfer classification network with shared contour information. Combining two convolutional networks with three cascaded filters, we build a unified kernel SVM classifier based on shared contour features. The two convolutional networks accurately acquire object contour information, and the shared edge features obtained by the three cascaded filters are fed to the unified kernel SVM classifier. Our transfer classification network (TCN) is trained and tested on distinct manifestations, including real photos (the ImageNet or CIFAR-10 datasets) and cartoon abstractions. The model extracts robust contour features and achieves considerable transfer recognition accuracy (a 40% relative improvement over some popular convolutional models).
UR - https://www.scopus.com/pages/publications/84991585644
U2 - 10.1109/WCICA.2016.7578543
DO - 10.1109/WCICA.2016.7578543
M3 - Conference contribution
AN - SCOPUS:84991585644
T3 - Proceedings of the World Congress on Intelligent Control and Automation (WCICA)
SP - 1234
EP - 1239
BT - Proceedings of the 2016 12th World Congress on Intelligent Control and Automation, WCICA 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 12th World Congress on Intelligent Control and Automation, WCICA 2016
Y2 - 12 June 2016 through 15 June 2016
ER -