
Information Sciences

Volume 271, 1 July 2014, Pages 179-195

Two soft-thresholding based iterative algorithms for image deblurring

https://doi.org/10.1016/j.ins.2014.02.089

Abstract

Iterative regularization algorithms, such as the conjugate gradient algorithm for least squares problems (CGLS) and the modified residual norm steepest descent (MRNSD) algorithm, are popular tools for solving the large-scale linear systems arising from image deblurring problems. These algorithms, however, are hindered by a semi-convergence behavior: the quality of the computed solution first increases and then decreases as the iteration proceeds. In this paper, in order to overcome the semi-convergence behavior, we propose two soft-thresholding based iterative algorithms for image deblurring problems. One of them combines CGLS with a denoising technique, soft-thresholding, at each iteration; the other combines MRNSD with soft-thresholding in the same way. We prove the convergence of the MRNSD and soft-thresholding based algorithm. Numerical results show that the proposed algorithms overcome the semi-convergence behavior and that the restoration results are slightly better than those of CGLS and MRNSD stopped at their optimal iterations.

Introduction

Image deblurring is the process of reconstructing an approximation of an image from an observed but degraded image. It is often modeled as the solution of the linear operator equation

K f + e = g,   (1)

where f ∈ R^{n^2} is a vector representing the true n × n image we aim to recover, e ∈ R^{n^2} represents random noise, g ∈ R^{n^2} stands for the observed distorted image, and K ∈ R^{n^2 × n^2} is a blurring matrix with special structure [30], [40]. Such problems arise in applications such as astronomy, medical imaging, geophysics and many other areas [18], [19], [20], [23], [28], [32], [39], [48], [54], [55]. Typically Eq. (1) comes from the discretization of an ill-posed continuous model; that is, the matrix K is ill-conditioned, and noise in the data may give rise to significant errors in the computed approximate solution. As a result, directly solving the equation K f = g does not yield an accurate and stable approximate solution, and it is necessary to resort to a regularization method. Many regularization techniques can be found in the literature, such as Tikhonov regularization [52], truncated singular value decomposition (TSVD), truncated iterative algorithms (e.g. steepest descent and conjugate gradient (CG) iterations) [18], [54] and hybrid approaches [8], [33], [35], [36], [45].
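
To make the model in Eq. (1) concrete, the following is a minimal NumPy sketch of a one-dimensional analogue: a Gaussian blurring matrix K acts on a simple true signal, a small amount of noise is added, and the naive solve of K f = g turns out to be unreliable because K is severely ill-conditioned. The sizes, blur width and noise level are illustrative choices and do not correspond to the paper's test problems.

# A minimal 1D analogue of the degradation model in Eq. (1).
# All sizes and levels are illustrative only.
import numpy as np

n = 200
idx = np.arange(n)
sigma = 4.0                                   # blur width (illustrative)
K = np.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2.0 * sigma ** 2))
K /= K.sum(axis=1, keepdims=True)             # row-normalized blurring matrix

f_true = np.zeros(n)
f_true[60:90] = 1.0                           # piecewise-constant "image"

rng = np.random.default_rng(0)
e = 1e-3 * rng.standard_normal(n)             # small additive noise
g = K @ f_true + e                            # observed blurred, noisy data

f_naive = np.linalg.solve(K, g)               # direct inversion of K f = g
print("cond(K):", np.linalg.cond(K))
print("naive relative error:",
      np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true))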

Choosing a suitable value of the regularization parameter is important in any regularization approach. For Tikhonov regularization, various parameter-choice techniques can be used, such as the discrepancy principle [41], the L-curve [18], [28], generalized cross validation (GCV) [21], the weighted-GCV (W-GCV) [14] and the fixed point algorithm (FP) [3]. These parameter-selection methods have both advantages and disadvantages, and it is nontrivial to choose an ‘optimal’ regularization parameter [28], [54].
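
As an illustration of one such parameter-choice rule, the sketch below applies the discrepancy principle to Tikhonov regularization on the small example above: among a grid of candidate parameters, the largest lambda whose residual norm falls below (roughly) the noise level is selected. The grid and the safety factor eta are illustrative assumptions, and the snippet reuses K, g, e and f_true from the previous sketch.

# Discrepancy principle for Tikhonov regularization (illustrative sketch).
# Reuses K, g, e and f_true from the previous sketch.
import numpy as np

eta = 1.05                                    # safety factor (illustrative)
noise_level = np.linalg.norm(e)               # assumed known or estimated

KtK, Ktg = K.T @ K, K.T @ g
for lam in np.logspace(0, -10, 60):           # from strong to weak regularization
    f_lam = np.linalg.solve(KtK + lam * np.eye(K.shape[1]), Ktg)
    if np.linalg.norm(K @ f_lam - g) <= eta * noise_level:
        break                                 # largest lambda passing the test
print("chosen lambda:", lam)
print("Tikhonov relative error:",
      np.linalg.norm(f_lam - f_true) / np.linalg.norm(f_true))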

Iterative regularization algorithms like CG and steepest descent can be favorable alternatives to Tikhonov regularization [10], [18], [42], [49], [50], [54]. They access the coefficient matrix K only via matrix–vector multiplications with K and/or K^T. It is known that applying these iterative regularization algorithms to the linear system K f = g is often hindered by a semi-convergence behavior. That is, the first few iterations produce regularized solutions, but after some ‘optimal’ iteration the approximate solutions converge to some other, undesired vector. This undesired vector is contaminated by errors and is therefore a poor approximation. In other words, an imprecise estimate of the termination iteration is likely to result in a poor approximate solution, and hence it becomes crucial to decide when to stop the iterations. Parameter-selection methods such as the discrepancy principle and the L-curve can be used to choose such a termination, but this is also nontrivial, as in the case of Tikhonov regularization.
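
The semi-convergence behavior is easy to observe numerically. The following plain CGLS sketch (a textbook version, not the authors' implementation) tracks the relative error against the true solution of the small example above at every iteration; the error typically reaches a minimum after a few iterations and then grows again as noise propagates into the iterates. It reuses K, g and f_true from the first sketch.

# Plain CGLS on K f = g, tracking the relative error per iteration to expose
# semi-convergence.  Reuses K, g and f_true from the first sketch.
import numpy as np

def cgls(A, b, x_true, n_iter):
    x = np.zeros(A.shape[1])
    r = b.copy()                              # residual b - A x for x = 0
    s = A.T @ r
    d = s.copy()
    gamma = s @ s
    errors = []
    for _ in range(n_iter):
        q = A @ d
        alpha = gamma / (q @ q)
        x += alpha * d
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        d = s + (gamma_new / gamma) * d
        gamma = gamma_new
        errors.append(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    return x, errors

_, errs = cgls(K, g, f_true, 200)
k_opt = int(np.argmin(errs))
print("best error %.3e at iteration %d; error %.3e at iteration %d"
      % (errs[k_opt], k_opt + 1, errs[-1], len(errs)))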

The difficulty of determining the stopping iteration of iterative regularization algorithms can be partially alleviated by employing hybrid approaches [4], [9], [12], [14], [26], [27]. Lanczos-hybrid approaches combine an iterative Lanczos bidiagonalization algorithm with a regularization algorithm such as Tikhonov regularization or TSVD at each iteration. Regularization is therefore achieved by Lanczos bidiagonalization filtering combined with an appropriately selected regularization parameter at each iteration. Recently, parameter-selection methods such as W-GCV and FP have been studied for the Lanczos-Tikhonov hybrid approach, see [4], [14]. The W-GCV based Lanczos-Tikhonov hybrid approach makes the solution less sensitive to the iteration number, but it still exhibits the semi-convergence property as the iteration proceeds. The GKB-FP algorithm in [4] combines a partial Golub-Kahan bidiagonalization (GKB) iteration with Tikhonov regularization in the generated Krylov subspace, and the regularization parameter for the projected problem is chosen by FP. In some cases GKB-FP yields more accurate solutions than the W-GCV based Lanczos-Tikhonov approach, while in others the two methods perform comparably.
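
For orientation, a Lanczos-hybrid computation can be sketched as follows: run k steps of Golub-Kahan bidiagonalization of K with starting vector g, then apply Tikhonov regularization to the small projected problem. In GKB-FP and the W-GCV based approach the inner regularization parameter is chosen adaptively at every iteration; here it is simply fixed, so this is only a structural sketch that reuses K, g and f_true from the first snippet.

# Structural sketch of a Lanczos (Golub-Kahan) hybrid step: k bidiagonalization
# steps followed by Tikhonov regularization of the projected (k+1) x k problem.
# The inner parameter lam is fixed here, unlike GKB-FP / W-GCV.
# Reuses K, g and f_true from the first sketch.
import numpy as np

def gkb_tikhonov(A, b, k, lam):
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta1 = np.linalg.norm(b); U[:, 0] = b / beta1
    v = A.T @ U[:, 0]
    alpha = np.linalg.norm(v); V[:, 0] = v / alpha; B[0, 0] = alpha
    for j in range(k):
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u); U[:, j + 1] = u / beta; B[j + 1, j] = beta
        if j + 1 < k:
            v = A.T @ U[:, j + 1] - beta * V[:, j]
            alpha = np.linalg.norm(v); V[:, j + 1] = v / alpha; B[j + 1, j + 1] = alpha
    # projected Tikhonov problem: min || B y - beta1 e1 ||^2 + lam^2 ||y||^2
    rhs = np.zeros(k + 1); rhs[0] = beta1
    y = np.linalg.solve(B.T @ B + lam ** 2 * np.eye(k), B.T @ rhs)
    return V @ y

f_hyb = gkb_tikhonov(K, g, k=30, lam=1e-2)
print("hybrid (fixed lambda) relative error:",
      np.linalg.norm(f_hyb - f_true) / np.linalg.norm(f_true))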

In this paper, we propose two iterative algorithms for image deblurring based on iterative regularization algorithms and soft-thresholding. Recall that the noise in the right-hand side g and its propagation through the iterations are the main cause of the semi-convergence behavior of iterative regularization algorithms like CGLS and MRNSD. We therefore propose to combine the iterative regularization algorithms with a noise reduction technique, soft-thresholding, at each iteration in order to suppress the propagation of noise and thus overcome the semi-convergence behavior. One of the proposed algorithms combines CGLS with soft-thresholding at each iteration, and the other combines MRNSD with soft-thresholding in the same way. The resulting algorithms are stable and very effective in practical applications. We prove the convergence of the MRNSD and soft-thresholding based algorithm. Numerical experiments show that the proposed algorithms overcome the semi-convergence behavior and that the restoration results are slightly better than those of CGLS and MRNSD stopped at their optimal iterations.
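
A hedged sketch of this idea, continuing the small example above, is given below: a standard CGLS update is followed at every iteration by the soft-thresholding operator S_tau(x) = sign(x) max(|x| - tau, 0). Where exactly the actual CGLS-like and MRNSD-like algorithms apply the thresholding (to the iterate, to the residual, or in a transform domain) is not specified in this excerpt, so thresholding the iterate here is an assumption made purely for illustration.

# CGLS interleaved with soft-thresholding (illustrative ASSUMPTION: the
# iterate is thresholded and the residual recomputed; the paper's CGLS-like
# algorithm may place the denoising step differently).
# Reuses K, g and f_true from the first sketch.
import numpy as np

def soft_threshold(x, tau):
    # S_tau(x) = sign(x) * max(|x| - tau, 0)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def cgls_soft(A, b, n_iter, tau=1e-3):
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    d = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ d
        alpha = gamma / (q @ q)
        x = soft_threshold(x + alpha * d, tau)   # CGLS update + denoising step
        r = b - A @ x                            # residual recomputed after thresholding
        s = A.T @ r
        gamma_new = s @ s
        d = s + (gamma_new / gamma) * d
        gamma = gamma_new
    return x

f_st = cgls_soft(K, g, n_iter=200)
print("CGLS + soft-thresholding (sketch) relative error:",
      np.linalg.norm(f_st - f_true) / np.linalg.norm(f_true))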

The outline of the paper is as follows. In Section 2.1 we first review the classical CGLS iteration and then present the soft-thresholding and CGLS based iterative algorithm, called CGLS-like. A widely used satellite test problem is considered to demonstrate the utility of CGLS-like compared with CGLS. In Section 2.2 we present the MRNSD-like algorithm and prove its convergence. Several numerical examples are given in Section 3 to illustrate the efficacy of the proposed algorithms. Finally, Section 4 gives some conclusions.

Section snippets

Soft-thresholding based iterative algorithms

We are interested in using iterative regularization algorithms such as CG and steepest descent to solve the large-scale linear system K f = g. Since our problems are usually not symmetric, we solve the normal equations K^T K f = K^T g using the conjugate gradient algorithm for least squares problems (CGLS) [7] and the modified residual norm steepest descent (MRNSD) algorithm [44], respectively. Due to the ill-conditioning of K and the presence of noise in the right-hand side g, these algorithms are
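
For reference, the following is a plain sketch of MRNSD following the standard description of Nagy and Strakoš [44]: steepest descent for min ||K f - g||^2 with the search direction given by the negative gradient scaled by diag(f), and a step size capped so that the iterate stays positive. The positive starting vector and the iteration count are illustrative choices, and this is not the authors' implementation; an MRNSD-like variant would insert a soft-thresholding step inside the loop. It reuses K, g and f_true from the first sketch.

# Plain MRNSD sketch (Nagy-Strakos style); not the authors' implementation.
# Reuses K, g and f_true from the first sketch.
import numpy as np

def mrnsd(A, b, n_iter):
    x = np.full(A.shape[1], 1e-3)             # strictly positive initial guess
    grad = A.T @ (A @ x - b)                  # gradient of 0.5 ||A x - b||^2
    for _ in range(n_iter):
        d = -x * grad                         # scaled steepest-descent direction
        w = A @ d
        theta_uc = (grad @ (x * grad)) / (w @ w)       # unconstrained step
        neg = d < 0
        theta_bd = np.min(-x[neg] / d[neg]) if neg.any() else np.inf
        theta = min(theta_uc, 0.999 * theta_bd)        # keep x positive
        x = x + theta * d
        grad = grad + theta * (A.T @ w)
        # (an MRNSD-like variant would apply soft-thresholding to x here)
    return x

f_m = mrnsd(K, g, n_iter=50)
print("MRNSD (sketch) relative error:",
      np.linalg.norm(f_m - f_true) / np.linalg.norm(f_true))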

Numerical examples

In this section, we give several numerical examples to illustrate the performance of the proposed iterative algorithms, CGLS-like and MRNSD-like, for image deblurring problems. Our tests were done using MATLAB 7.10.0 (R2010a) on a PC with an Intel(R) Core(TM)2 Duo 2.93 GHz CPU and 2 GB of memory. The floating-point precision is 10^{-16}. The initial guess of each algorithm is set to the zero vector and the threshold value τ for both Algorithm 2 and Algorithm 4 is set to 10^{-3}. The

Conclusions

In this paper, we have proposed two soft-thresholding based iterative algorithms, CGLS-like and MRNSD-like. The proposed algorithms filter the residual vector at each iteration to overcome the semi-convergence behavior of CGLS and MRNSD, making the iterations more stable. We have proved the convergence of MRNSD-like. We demonstrated through a variety of test problems that our approaches stabilize the iterations, make the solution less sensitive to the number of iterations, and provide slightly

Acknowledgments

The authors would like to express their thanks to the reviewers and the Editor-in-Chief, Prof. Witold Pedrycz, for their constructive, detailed and helpful advice regarding this paper.

References (55)

  • J.G. Sun et al., Extending Sammon mapping with Bregman divergences, Inform. Sci. (2012)
  • C.Y. Wee et al., Measure of image sharpness using eigenvalues, Inform. Sci. (2007)
  • J.M. Bardsley et al., Covariance-preconditioned iterative methods for nonnegatively constrained astronomical imaging, SIAM J. Matrix Anal. Appl. (2006)
  • F.S. Viloche Bazán, Fixed-point iterations in determining the Tikhonov regularization parameter, Inverse Probl. (2008)
  • F.S. Viloche Bazán et al., GKB-FP: an algorithm for large-scale discrete ill-posed problems, BIT (2010)
  • A. Beck et al., A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imag. Sci. (2009)
  • A. Björck, Numerical Methods for Least Squares Problems (1996)
  • A. Björck, A bidiagonalization algorithm for solving large and sparse ill-posed systems of linear equations, BIT (1988)
  • A. Björck et al., An implicit shift bidiagonalization algorithm for ill-posed systems of linear equations, BIT (1994)
  • J. Bolz et al., Sparse matrix solvers on the GPU: conjugate gradients and multigrid, ACM Trans. Graph. (2003)
  • D. Calvetti et al., Tikhonov regularization of large linear problems, BIT (2003)
  • J. Chung et al., A weighted-GCV method for Lanczos-hybrid regularization, Electron. Trans. Numer. Anal. (2008)
  • I. Daubechies et al., An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math. (2004)
  • M. Donatelli et al., Improved image deblurring with anti-reflective boundary conditions and re-blurring, Inverse Probl. (2006)
  • M. Donatelli et al., Anti-reflective boundary conditions and re-blurring, Inverse Probl. (2005)
  • H.W. Engl et al., Regularization of Inverse Problems (2000)
  • X.B. Gao et al., Zernike-moment-based image super resolution, IEEE Trans. Image Process. (2011)

This research is supported by the 973 Program (2013CB329404), NSFC (61370147, 61170311), the Sichuan Province Sci. & Tech. Research Project (2012GZX0080), and the Fundamental Research Funds for the Central Universities (ZYGX2013J106).
