TY - GEN
T1 - Denoising prestack random noise with deep generative prior
AU - Gao, Wenbin
AU - Liu, Daiwei
AU - Wang, Xiaokai
AU - Shi, Zhensheng
AU - Chen, Wenchao
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Supervised deep-learning methods require noise-free training labels, either realistic-looking synthetic data or denoised results obtained with conventional methods. Without ground-truth labels, it is challenging to achieve a considerable improvement in denoising effectiveness. Generalization performance is also a critical problem for supervised methods. We propose an unsupervised method based on a deep generator network to suppress random noise, formulated as a reconstruction of prestack seismic data. Deep generator networks tend to generate highly correlated signals, so we pre-train the network to learn a mapping from random latent vectors to useful signals; constrained by its own structure and the pre-training, the output of the network is limited to the manifold of valuable signals. We then optimize the network parameters to find the point on this manifold closest to the noisy data; such a point corresponds to the valuable signals contained in the noisy data. To avoid overfitting the noise, both prior regularization and network-structure regularization are used in our work. Random-noise attenuation results on a CRP gather demonstrate that our method has efficient denoising ability and good generalization performance.
AB - Supervised deep-learning methods require noise-free training labels, either realistic-looking synthetic data or denoised results obtained with conventional methods. Without ground-truth labels, it is challenging to achieve a considerable improvement in denoising effectiveness. Generalization performance is also a critical problem for supervised methods. We propose an unsupervised method based on a deep generator network to suppress random noise, formulated as a reconstruction of prestack seismic data. Deep generator networks tend to generate highly correlated signals, so we pre-train the network to learn a mapping from random latent vectors to useful signals; constrained by its own structure and the pre-training, the output of the network is limited to the manifold of valuable signals. We then optimize the network parameters to find the point on this manifold closest to the noisy data; such a point corresponds to the valuable signals contained in the noisy data. To avoid overfitting the noise, both prior regularization and network-structure regularization are used in our work. Random-noise attenuation results on a CRP gather demonstrate that our method has efficient denoising ability and good generalization performance.
KW - deep generator network
KW - denoising
KW - prestack
KW - prior
KW - regularization
UR - https://www.scopus.com/pages/publications/85131795234
U2 - 10.1109/ICSP54964.2022.9778474
DO - 10.1109/ICSP54964.2022.9778474
M3 - Conference contribution
AN - SCOPUS:85131795234
T3 - 2022 7th International Conference on Intelligent Computing and Signal Processing, ICSP 2022
SP - 262
EP - 265
BT - 2022 7th International Conference on Intelligent Computing and Signal Processing, ICSP 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 7th International Conference on Intelligent Computing and Signal Processing, ICSP 2022
Y2 - 15 April 2022 through 17 April 2022
ER -