TY - GEN
T1 - No-reference video quality assessment based on spatiotemporal slice images and deep convolutional neural networks
AU - Yan, Peng
AU - Mou, Xuanqin
N1 - Publisher Copyright:
© 2019 SPIE.
PY - 2019
Y1 - 2019
N2 - Most learning-based no-reference (NR) video quality assessment (VQA) methods need to be trained on a large number of subjective quality scores. However, it is currently difficult to obtain a large volume of subjective scores for videos. Inspired by the success of full-reference VQA methods based on spatiotemporal slice (STS) images in extracting perceptual features and evaluating video quality, this paper adopts multi-directional video STS images, which are images composed of multi-directional sections of video data, to address the lack of subjective quality scores. By sampling the STS images of a video into image patches and adding noise to the quality labels of the patches, a successful NR VQA model based on multi-directional STS images and neural network training is proposed. Specifically, first, we select the subjective database that currently contains the largest number of real-distortion videos as the test set. Second, we perform multi-directional STS extraction on the videos and sample local patches from the multi-directional STS images to augment the training sample set. In addition, we add some noise to the quality labels of the local patches. Third, a reasonable deep neural network is constructed and trained to obtain a local quality prediction model for each patch in the STS images, and the quality of an entire video is then obtained by averaging the model predictions over the multi-directional STS images. Finally, the experimental results indicate that the proposed method overcomes the insufficiency of training samples in small subjective VQA datasets and achieves a high correlation with subjective evaluation.
AB - Most learning-based no-reference (NR) video quality assessment (VQA) methods need to be trained on a large number of subjective quality scores. However, it is currently difficult to obtain a large volume of subjective scores for videos. Inspired by the success of full-reference VQA methods based on spatiotemporal slice (STS) images in extracting perceptual features and evaluating video quality, this paper adopts multi-directional video STS images, which are images composed of multi-directional sections of video data, to address the lack of subjective quality scores. By sampling the STS images of a video into image patches and adding noise to the quality labels of the patches, a successful NR VQA model based on multi-directional STS images and neural network training is proposed. Specifically, first, we select the subjective database that currently contains the largest number of real-distortion videos as the test set. Second, we perform multi-directional STS extraction on the videos and sample local patches from the multi-directional STS images to augment the training sample set. In addition, we add some noise to the quality labels of the local patches. Third, a reasonable deep neural network is constructed and trained to obtain a local quality prediction model for each patch in the STS images, and the quality of an entire video is then obtained by averaging the model predictions over the multi-directional STS images. Finally, the experimental results indicate that the proposed method overcomes the insufficiency of training samples in small subjective VQA datasets and achieves a high correlation with subjective evaluation.
KW - Deep convolutional neural networks
KW - Multi-directional STS images
KW - No-reference
KW - Real distortion videos
KW - Spatiotemporal slice images
KW - Video quality assessment
UR - https://www.scopus.com/pages/publications/85079068095
U2 - 10.1117/12.2536866
DO - 10.1117/12.2536866
M3 - Conference contribution
AN - SCOPUS:85079068095
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Optoelectronic Imaging and Multimedia Technology VI
A2 - Dai, Qionghai
A2 - Shimura, Tsutomu
A2 - Zheng, Zhenrong
PB - SPIE
T2 - Optoelectronic Imaging and Multimedia Technology VI 2019
Y2 - 21 October 2019 through 23 October 2019
ER -