TY - GEN
T1 - No-reference video quality assessment based on perceptual features extracted from multi-directional video spatiotemporal slices images
AU - Yan, Peng
AU - Mou, Xuanqin
N1 - Publisher Copyright:
© 2018 SPIE.
PY - 2018
Y1 - 2018
N2 - As video applications become more popular, no-reference video quality assessment (NR-VQA) has become a focus of research. In many existing NR-VQA methods, perceptual feature extraction is often the key to success. In this paper, we therefore design methods to extract perceptual features that capture a wider range of spatiotemporal information from multi-directional video spatiotemporal slice (STS) images (images generated by cutting video data parallel to the temporal dimension in multiple directions) and use a support vector machine (SVM) to perform NR video quality evaluation. In the proposed NR-VQA design, we first extract multi-directional video STS images to obtain as complete a representation of the overall video motion as possible. Second, perceptual features of the multi-directional video STS images, such as the moments of feature maps, joint distribution features of the gradient magnitude and the Laplacian of Gaussian filter response, and motion energy characteristics, are extracted to characterize the motion statistics of videos. Finally, the extracted perceptual features are fed into an SVM or a multilayer perceptron (MLP) for training and testing. Experimental results show that the proposed method achieves state-of-the-art quality prediction performance on the largest existing annotated video database.
AB - As video applications become more popular, no-reference video quality assessment (NR-VQA) has become a focus of research. In many existing NR-VQA methods, perceptual feature extraction is often the key to success. In this paper, we therefore design methods to extract perceptual features that capture a wider range of spatiotemporal information from multi-directional video spatiotemporal slice (STS) images (images generated by cutting video data parallel to the temporal dimension in multiple directions) and use a support vector machine (SVM) to perform NR video quality evaluation. In the proposed NR-VQA design, we first extract multi-directional video STS images to obtain as complete a representation of the overall video motion as possible. Second, perceptual features of the multi-directional video STS images, such as the moments of feature maps, joint distribution features of the gradient magnitude and the Laplacian of Gaussian filter response, and motion energy characteristics, are extracted to characterize the motion statistics of videos. Finally, the extracted perceptual features are fed into an SVM or a multilayer perceptron (MLP) for training and testing. Experimental results show that the proposed method achieves state-of-the-art quality prediction performance on the largest existing annotated video database.
KW - Multi-directional video spatiotemporal slices images
KW - No-reference
KW - Support vector machine
KW - Video quality assessment
UR - https://www.scopus.com/pages/publications/85059414503
U2 - 10.1117/12.2503149
DO - 10.1117/12.2503149
M3 - Conference contribution
AN - SCOPUS:85059414503
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Optoelectronic Imaging and Multimedia Technology V
A2 - Dai, Qionghai
A2 - Shimura, Tsutomu
PB - SPIE
T2 - Optoelectronic Imaging and Multimedia Technology V 2018
Y2 - 11 October 2018 through 12 October 2018
ER -