TY - GEN
T1 - No-reference image quality assessment based on an objective quality database and deep neural networks
AU - Zhang, Xiazhao
AU - Yan, Peng
AU - Mou, Xuanqin
N1 - Publisher Copyright:
© 2019 SPIE.
PY - 2019
Y1 - 2019
N2 - Image quality assessment (IQA) has been an active research topic since the birth of the digital image, and the arrival of deep learning has made IQA even more promising. However, most state-of-the-art no-reference (NR) IQA methods require regression training on distorted images or extracted features with subjective image scores, so they suffer from insufficient reference image content and a shortage of subjectively scored training samples due to time-consuming and laborious subjective testing. Furthermore, most convolutional neural network (CNN)-based methods transform original images into patches to accommodate the fixed-size input of a CNN, which often alters the image data and introduces noise into the neural network. This paper aims to solve the above problems by adopting new strategies and proposes a novel NRIQA method based on a deep CNN. Specifically, we first obtain image data with diverse content, multiple image sizes, and reasonable distortions by crawling, filtering, and degrading numerous publicly licensed high-quality images from the Internet. Then, we score all the images using an excellent full-reference (FR) IQA algorithm, thereby artificially constructing a large objective IQA database. Next, we design a deep CNN that accepts input images at their original sizes from our database instead of patches, and we train the model with the FRIQA index as the training objective, thus proposing an opinion-unaware (OU) NRIQA method. Finally, experimental results show that our method achieves excellent performance: it outperforms state-of-the-art OU-NRIQA models and is comparable to most traditional opinion-aware NRIQA methods, and even to some FRIQA methods, on standard subjective IQA databases.
AB - Image quality assessment (IQA) has been an active research topic since the birth of the digital image, and the arrival of deep learning has made IQA even more promising. However, most state-of-the-art no-reference (NR) IQA methods require regression training on distorted images or extracted features with subjective image scores, so they suffer from insufficient reference image content and a shortage of subjectively scored training samples due to time-consuming and laborious subjective testing. Furthermore, most convolutional neural network (CNN)-based methods transform original images into patches to accommodate the fixed-size input of a CNN, which often alters the image data and introduces noise into the neural network. This paper aims to solve the above problems by adopting new strategies and proposes a novel NRIQA method based on a deep CNN. Specifically, we first obtain image data with diverse content, multiple image sizes, and reasonable distortions by crawling, filtering, and degrading numerous publicly licensed high-quality images from the Internet. Then, we score all the images using an excellent full-reference (FR) IQA algorithm, thereby artificially constructing a large objective IQA database. Next, we design a deep CNN that accepts input images at their original sizes from our database instead of patches, and we train the model with the FRIQA index as the training objective, thus proposing an opinion-unaware (OU) NRIQA method. Finally, experimental results show that our method achieves excellent performance: it outperforms state-of-the-art OU-NRIQA models and is comparable to most traditional opinion-aware NRIQA methods, and even to some FRIQA methods, on standard subjective IQA databases.
KW - Convolutional neural network (CNN)
KW - Image quality assessment (IQA)
KW - No-reference IQA (NRIQA)
KW - Objective IQA database
KW - Opinion-unaware NRIQA (OU-NRIQA)
UR - https://www.scopus.com/pages/publications/85079063374
U2 - 10.1117/12.2536868
DO - 10.1117/12.2536868
M3 - Conference contribution
AN - SCOPUS:85079063374
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Optoelectronic Imaging and Multimedia Technology VI
A2 - Dai, Qionghai
A2 - Shimura, Tsutomu
A2 - Zheng, Zhenrong
PB - SPIE
T2 - Optoelectronic Imaging and Multimedia Technology VI 2019
Y2 - 21 October 2019 through 23 October 2019
ER -