ADMMNet-Based Deep Unrolling Method for Ghost Imaging

  • Yuchen He
  • Yue Zhou
  • Jianming Yu
  • Hui Chen
  • Huaibin Zheng
  • Jianbin Liu
  • Yu Zhou
  • Zhuo Xu

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Owing to advantages that distinguish it from traditional imaging methods, ghost imaging (GI) has attracted growing attention from researchers and has potential applications in fields such as lidar and non-field-of-view imaging. However, GI suffers from poor imaging quality and a high sampling rate. In recent years, compressed sensing (CS)-based and deep learning (DL)-based methods have been studied to address these bottlenecks. However, issues such as computational complexity, parameter selection, and interpretability limit the application of these methods. In this paper, we propose a deep unrolling method for GI based on the alternating direction method of multipliers (ADMM), called ADUNet-GI, which implements the iterative process of ADMM as a neural network architecture. In this way, we not only alleviate the problems of the CS-based and DL-based methods but also combine the advantages of model-driven and data-driven approaches. In short, our motivation is to build a bridge between compressed sensing and deep learning, harnessing the strengths of each while mitigating their respective shortcomings. Demonstrations based on physical experiments show that ADUNet-GI achieves reliable and stable reconstruction at a low sampling rate (3%), while other classic methods cannot even recover the contour of the object.
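
The abstract describes unrolling the ADMM iteration into network layers. As a rough illustration only, and not the authors' ADUNet-GI implementation, the sketch below shows what one unrolled ADMM stage for GI reconstruction could look like in PyTorch, assuming a measurement matrix A built from the speckle patterns, bucket signals y, and a small CNN standing in for a learned proximal operator; all class names, parameters, and hyperparameters are hypothetical.

```python
# Illustrative sketch, not the authors' code: one unrolled ADMM stage for
# ghost-imaging reconstruction. A (m x n) stacks the speckle patterns as rows,
# y (m,) holds the bucket signals, and a small CNN plays the role of a learned
# proximal operator. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class ADMMStage(nn.Module):
    def __init__(self, img_size: int):
        super().__init__()
        self.img_size = img_size
        self.rho = nn.Parameter(torch.tensor(1.0))    # learnable penalty weight
        self.step = nn.Parameter(torch.tensor(0.01))  # learnable gradient step size
        self.prox = nn.Sequential(                    # learned proximal / denoiser
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x, z, u, A, y):
        # x-update: gradient step on ||A x - y||^2 + (rho / 2) ||x - z + u||^2
        grad = A.t() @ (A @ x - y) + self.rho * (x - z + u)
        x = x - self.step * grad
        # z-update: learned proximal operator applied to x + u, viewed as an image
        img = (x + u).view(1, 1, self.img_size, self.img_size)
        z = self.prox(img).view(-1)
        # dual update
        u = u + x - z
        return x, z, u


class UnrolledADMM(nn.Module):
    """A fixed number of ADMM stages stacked into one network, trained end to end."""

    def __init__(self, img_size: int, num_stages: int = 8):
        super().__init__()
        self.stages = nn.ModuleList([ADMMStage(img_size) for _ in range(num_stages)])

    def forward(self, A, y):
        x = A.t() @ y                      # correlation-style initial guess
        z = x.clone()
        u = torch.zeros_like(x)
        for stage in self.stages:
            x, z, u = stage(x, z, u, A, y)
        return z
```

Under these assumptions, such a network would be trained on pairs of bucket signals and ground-truth images so that the learned penalty, step size, and proximal CNN take the place of the hand-tuned parameters of classical CS reconstruction.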

Original language: English
Pages (from-to): 233-245
Number of pages: 13
Journal: IEEE Transactions on Computational Imaging
Volume: 10
DOIs
State: Published - 2024

Keywords

  • Ghost imaging
  • alternating direction method of multipliers
  • deep unrolling
