
A tensor-based nonlocal total variation model for multi-channel image recovery

  • Wenfei Cao
  • Jing Yao
  • Jian Sun
  • Guodong Han

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

In this paper, a new nonlocal total variation (NLTV) regularizer is proposed for solving inverse problems in multi-channel image processing. Unlike existing nonlocal total variation regularizers that rely on the graph gradient, the proposed regularizer involves the standard image gradient and simultaneously exploits three important properties inherent in multi-channel images through a tensor nuclear norm; hence we call the proposed functional tensor-based nonlocal total variation (TenNLTV). Specifically, these three properties are local structural image regularity, nonlocal image self-similarity, and image channel correlation. By fully exploiting these three properties, TenNLTV provides a more robust measure of image variation. Based on the proposed TenNLTV regularizer, a novel regularization model for inverse imaging problems is then presented. Moreover, an effective algorithm is designed for the proposed model, and a closed-form solution is derived for a second-order complex eigen-system arising in the algorithm. Extensive experimental results on several inverse imaging problems demonstrate that the proposed regularizer is consistently superior to other competing local and nonlocal regularization approaches, both quantitatively and visually.
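The abstract combines three ingredients: channel-wise image gradients, grouping of nonlocally similar patches, and a tensor nuclear norm that couples the channels. The paper's exact TenNLTV functional is not reproduced here; the following is only an illustrative sketch, under assumed patch sizes, search windows, and a simple matrix unfolding, of how such a measure could be evaluated (the function names `tennltv_sketch` and `patch_group_nuclear_norm` are hypothetical, not from the paper).

```python
import numpy as np

def patch_group_nuclear_norm(patches):
    """Nuclear norm (sum of singular values) of a stacked patch matrix.

    `patches` is an (n_patches, patch_dim) array of vectorized similar
    patches; the nuclear norm acts as a low-rank surrogate that rewards
    agreement among nonlocally similar patches.
    """
    s = np.linalg.svd(patches, compute_uv=False)
    return s.sum()

def tennltv_sketch(image, patch=4, stride=4, k=8, search=8):
    """Hypothetical surrogate for a TenNLTV-style variation measure.

    For each reference location, gradient patches from all channels of the
    k most similar locations are stacked and the nuclear norm of the
    unfolded group is accumulated. This combines the three properties the
    abstract names (local gradients, nonlocal self-similarity, channel
    correlation) but is NOT the paper's functional.
    """
    H, W, C = image.shape
    # channel-wise forward-difference gradients (local structural regularity)
    gx = np.diff(image, axis=1, append=image[:, -1:, :])
    gy = np.diff(image, axis=0, append=image[-1:, :, :])
    grad = np.concatenate([gx, gy], axis=2)  # H x W x 2C gradient field
    total = 0.0
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            ref = grad[i:i + patch, j:j + patch].reshape(-1)
            # rank candidate patches in a local search window by similarity
            cands = []
            for ii in range(max(0, i - search), min(H - patch, i + search) + 1, 2):
                for jj in range(max(0, j - search), min(W - patch, j + search) + 1, 2):
                    v = grad[ii:ii + patch, jj:jj + patch].reshape(-1)
                    cands.append((np.sum((v - ref) ** 2), v))
            cands.sort(key=lambda t: t[0])
            group = np.stack([v for _, v in cands[:k]])  # k x (patch*patch*2C)
            total += patch_group_nuclear_norm(group)
    return total
```

In a recovery model, a penalty like this would be minimized jointly with a data-fidelity term; the sketch only evaluates the regularizer on a given image.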

Original language: English
Pages (from-to): 321-335
Number of pages: 15
Journal: Signal Processing
Volume: 153
DOIs
State: Published - Dec 2018

Keywords

  • Image reconstruction
  • Inverse problems
  • Multi-channel
  • Nonlocal regularization
  • Tensor
  • Total variation
