TY - JOUR
T1 - Context-Based Multiscale Unified Network for Missing Data Reconstruction in Remote Sensing Images
AU - Shao, Mingwen
AU - Wang, Chao
AU - Wu, Tianjun
AU - Meng, Deyu
AU - Luo, Jiancheng
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2022
Y1 - 2022
N2 - Missing data reconstruction is a classical yet challenging problem in remote sensing images. Most current methods based on traditional convolutional neural networks require supplementary data and can only handle one specific task. To address these limitations, we propose a novel generative adversarial network-based missing data reconstruction method in this letter, which is capable of various reconstruction tasks given only single-source data as input. Two auxiliary patch-based discriminators are deployed to impose additional constraints on the local and global regions, respectively. In order to better fit the nature of remote sensing images, we introduce special convolutions and an attention mechanism in a two-stage generator, thereby improving the tradeoff between accuracy and efficiency. Combined with perceptual and multiscale adversarial losses, the proposed model can produce coherent structure with better details. Qualitative and quantitative experiments demonstrate the uncompromising performance of the proposed model against multisource methods in generating visually plausible reconstruction results. Moreover, further exploration shows a promising way for the proposed model to utilize spatio-spectral-temporal information. The codes and models are available at https://github.com/Oliiveralien/Inpainting-on-RSI.
AB - Missing data reconstruction is a classical yet challenging problem in remote sensing images. Most current methods based on traditional convolutional neural networks require supplementary data and can only handle one specific task. To address these limitations, we propose a novel generative adversarial network-based missing data reconstruction method in this letter, which is capable of various reconstruction tasks given only single-source data as input. Two auxiliary patch-based discriminators are deployed to impose additional constraints on the local and global regions, respectively. In order to better fit the nature of remote sensing images, we introduce special convolutions and an attention mechanism in a two-stage generator, thereby improving the tradeoff between accuracy and efficiency. Combined with perceptual and multiscale adversarial losses, the proposed model can produce coherent structure with better details. Qualitative and quantitative experiments demonstrate the uncompromising performance of the proposed model against multisource methods in generating visually plausible reconstruction results. Moreover, further exploration shows a promising way for the proposed model to utilize spatio-spectral-temporal information. The codes and models are available at https://github.com/Oliiveralien/Inpainting-on-RSI.
KW - Context aware
KW - generative adversarial network (GAN)
KW - image reconstruction
KW - multiscale
KW - remote sensing images
UR - https://www.scopus.com/pages/publications/85122378375
U2 - 10.1109/LGRS.2020.3021116
DO - 10.1109/LGRS.2020.3021116
M3 - Article
AN - SCOPUS:85122378375
SN - 1545-598X
VL - 19
JO - IEEE Geoscience and Remote Sensing Letters
JF - IEEE Geoscience and Remote Sensing Letters
ER -