
Digital Signal Processing

Volume 24, January 2014, Pages 63-70
Compressive sensing using the modified entropy functional

https://doi.org/10.1016/j.dsp.2013.09.010

Abstract

In most compressive sensing problems, the ℓ1 norm is used during the signal reconstruction process. In this article, a modified version of the entropy functional is proposed to approximate the ℓ1 norm. The proposed modified entropy functional is continuous, differentiable and convex. Therefore, it is possible to construct globally convergent iterative algorithms using Bregman's row-action method for compressive sensing applications. Simulation examples with both 1D signals and images are presented.

Introduction

The Nyquist–Shannon sampling theorem [1] is one of the fundamental theorems in the signal processing literature. It specifies the conditions for perfect reconstruction of a continuous signal from its samples. If a signal is sampled with a sampling frequency that is at least twice its bandwidth, it can be perfectly reconstructed from its samples. However, in many applications of signal processing, including waveform compression, perfect reconstruction is not necessary. In this article, a modified version of the entropy functional is proposed. The functional is defined for both positive and negative real numbers, and it is continuous, differentiable and convex everywhere. Therefore, it can be used as a cost function in many signal processing problems, including the compressive sensing problem.

The most common method used in compression applications is transform coding. The signal x[n] is transformed into another domain defined by the transformation matrix ψ. The transformation procedure is simply computing the inner products of the signal x[n] with the rows ψ_l of the transformation matrix ψ:

s_l = ⟨x, ψ_l⟩,  l = 1, 2, …, N,  (1)

where x is a column vector whose entries are samples of the signal x[n].

The digital signal x[n] can be reconstructed from its transform coefficients s_l as follows:

x = Σ_{l=1}^{N} s_l·ψ_l,  or  x = ψ·s,  (2)

where s is a vector containing the transform-domain coefficients s_l.
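As a concrete illustration of the analysis/synthesis pair in (1) and (2), the sketch below builds an orthonormal DCT-II matrix explicitly (the function name dct_matrix is ours, not the paper's) and checks that synthesis from the coefficients reconstructs the signal exactly, since ψ is orthonormal:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II basis: row l is the analysis vector psi_l."""
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] *= 1.0 / np.sqrt(2)          # scale the DC row for orthonormality
    return np.sqrt(2.0 / N) * C

N = 8
psi = dct_matrix(N)
x = np.random.randn(N)                   # a column vector of signal samples
s = psi @ x                              # analysis: s_l = <x, psi_l>
x_rec = psi.T @ s                        # synthesis: x = sum_l s_l psi_l
assert np.allclose(x, x_rec)             # perfect reconstruction (psi orthonormal)
```

Because ψ is orthonormal here, the synthesis matrix is simply its transpose; for a general invertible transform the inverse matrix would be needed instead.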

The basic idea in digital waveform coding is that the signal should be approximately reconstructed from only a few of its non-zero transform coefficients. In most cases, including the JPEG image coding standard, the transform matrix ψ is chosen in such a way that the new signal s is efficiently represented in the transform domain with a small number of coefficients. A signal x is compressible if it has only a few large-amplitude coefficients s_l in the transform domain and the rest of the coefficients are either zero or negligibly small.

In a compressive sensing framework, the signal is assumed to be K-sparse in a transformation domain, such as the wavelet domain or the DCT (Discrete Cosine Transform) domain. A signal of length N is K-sparse if it has at most K non-zero and (N − K) zero coefficients in a transform domain. The case of interest in CS problems is when K ≪ N, i.e., the signal is sparse in the transform domain.

The CS theory introduced in [2], [3], [4], [5], [6] provides answers to the question of reconstructing a signal from its compressed measurements y, defined as

y = φ·x = φ·ψ·s = θ·s,  (3)

where φ is the M×N measurement matrix and M ≪ N. The reconstruction of the original signal x from its compressed measurements y cannot be achieved by simple matrix inversion or inverse transformation techniques. A sparse solution can be obtained by solving the following optimization problem:

s_p = arg min ||s||_0  such that  θ·s = y.  (4)

However, this is an NP-complete optimization problem; therefore, its solution cannot be found easily. It is also shown in [2], [3], [4] that it is possible to construct the φ matrix from i.i.d. Gaussian random variables. In this case, the number of measurements should be chosen such that cK·log(N/K) < M ≪ N to satisfy the conditions for perfect reconstruction [2], [3]. With this choice of the measurement matrix, the optimization problem (4) can be approximated by ℓ1 norm minimization:

s_p = arg min ||s||_1  such that  θ·s = y.  (5)
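The ℓ1 relaxation (5) can be illustrated with a small basis-pursuit example. The sketch below is our illustration, not the paper's algorithm: it takes ψ to be the identity (so θ = φ), draws an i.i.d. Gaussian measurement matrix, and solves (5) through its standard linear-programming reformulation (split s = u − v with u, v ≥ 0, minimize the sum of the parts):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 64, 24, 3                       # illustrative sizes with K << M << N

# K-sparse coefficient vector s in the (identity) transform domain
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

theta = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian measurements
y = theta @ s_true                                  # compressed measurements (3)

# Basis pursuit, min ||s||_1 s.t. theta s = y, as an LP over [u; v], s = u - v
c = np.ones(2 * N)
A_eq = np.hstack([theta, -theta])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
s_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(s_hat - s_true))
```

With M on the order of K·log(N/K) measurements, the LP solution coincides with the sparse s_true with high probability, as the theory cited above predicts.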

Instead of solving the original CS problem in (4) or (5), several researchers reformulate them to approximate the solution. For example, in [15], the authors developed a Bayesian framework and solved the CS problem using Relevance Vector Machines (RVM). In [7], [8] the authors replaced the objective function of the CS optimization in (4), (5) with a new objective function to solve the sparse signal reconstruction problem. One popular approach is replacing the ℓ0 norm with the ℓp norm, where p ∈ (0, 1) [7], [9], or even with a mix of two different norms as in [10]. However, in these cases, the resulting optimization problems are not convex. Several studies in the literature addressed ℓp norm based non-convex optimization problems and applied their results to the sparse signal reconstruction problem [11], [12], [13], [14].

The entropy functional g(v) = v·log v is also used to approximate the solution of ℓ1 optimization and linear programming problems in signal and image reconstruction by Bregman [16], and others [17], [18], [19], [20], [21], [22], [23]. In this article, we propose the use of a modified version of the entropy functional as an alternative way to approximate the CS problem. In Fig. 1, plots of different cost functions are shown, including the proposed modified entropy functional

g(v) = (|v| + 1/e)·log(|v| + 1/e) + 1/e,  (6)

as well as the absolute value g(v) = |v| and the quadratic g(v) = v². The modified entropy functional (6) is convex, continuous and differentiable. It increases slowly compared to g(v) = v², because log(v) is much smaller than v for large v, as seen in Fig. 1. The convexity proof for the modified entropy functional is given in Appendix A.
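The properties claimed for (6) are easy to check numerically. The short sketch below (our code; the name g_mod_entropy is ours) verifies that g(0) = 0, that g is even, that it is convex on a sampled grid, and that it grows more slowly than v²:

```python
import numpy as np

def g_mod_entropy(v):
    """Modified entropy functional g(v) = (|v| + 1/e) ln(|v| + 1/e) + 1/e, Eq. (6)."""
    a = np.abs(v) + 1.0 / np.e
    return a * np.log(a) + 1.0 / np.e

v = np.linspace(-5.0, 5.0, 1001)
g = g_mod_entropy(v)

assert np.isclose(g_mod_entropy(0.0), 0.0)        # g(0) = (1/e)ln(1/e) + 1/e = 0
assert np.allclose(g, g_mod_entropy(-v))          # even in v, like |v| and v^2
assert np.all(np.diff(g, 2) >= -1e-12)            # discrete 2nd difference >= 0: convex
assert g_mod_entropy(5.0) < 5.0 ** 2              # grows more slowly than v^2
```

For v > 0 the derivative is log(v + 1/e) + 1, which vanishes at v = 0; the same limit is reached from the left, which is why g is differentiable everywhere even though it is built from |v|.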

Bregman also developed iterative row-action methods to solve the global optimization problem by successive local Bregman-projections. In each iteration step, a Bregman-projection, which is a generalized version of the orthogonal projection, is performed onto a hyperplane representing a row of the constraint matrix θ. In [16], Bregman proved that the proposed iterative method is guaranteed to converge to the global minimum, given that there is a proper choice of the initial estimate (e.g., v0=0).

An interesting interpretation of the row-action approach is that it provides an on-line solution to the CS problem. Each new measurement of the signal adds a row to the matrix θ. In the iterative row-action method, a Bregman-projection is performed onto the new hyperplane formed by the new measurement. In this way, the currently available solution is updated without solving the entire CS problem. The new solution can be further updated by using past or new measurements in an iterative manner by performing other Bregman-projections. Therefore, it is possible to develop a real-time on-line CS method using the proposed approach.
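The row-action idea above can be sketched with the simplest Bregman distance, the squared Euclidean norm, for which the Bregman-projection onto a hyperplane has a closed form (the classical Kaczmarz update). This is only a stand-in for the paper's method: the entropy-based projection has no such closed form and requires a one-dimensional solve per row.

```python
import numpy as np

def row_action_solve(theta, y, n_sweeps=500, s0=None):
    """Cyclic row-action iteration: project onto each hyperplane theta_i . s = y_i.

    With the squared Euclidean distance the projection is the Kaczmarz update;
    starting from s0 = 0 it converges to the minimum-norm consistent solution.
    """
    M, N = theta.shape
    s = np.zeros(N) if s0 is None else s0.astype(float).copy()
    for _ in range(n_sweeps):
        for i in range(M):                       # one projection per measurement row
            r = theta[i]
            s += (y[i] - r @ s) / (r @ r) * r    # closed-form hyperplane projection
    return s

rng = np.random.default_rng(1)
theta = rng.standard_normal((3, 8))              # underdetermined: 3 measurements in R^8
y = theta @ rng.standard_normal(8)
s = row_action_solve(theta, y)
print("constraint residual:", np.linalg.norm(theta @ s - y))
```

The on-line interpretation in the text corresponds to appending a new row to theta and performing one more projection, without restarting the whole iteration.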

In Section 2 of this paper, we review the Bregman-projection concept and define the modified entropy functional and related Bregman-projections. We generalize the entropy function based convex optimization method introduced by Bregman because the ordinary entropy function is defined only for positive real numbers. On the other hand, transform domain coefficients can be both positive and negative.

Section 2 also contains the Bregman-projection definition, and formulation of the entropy functional based CS reconstruction problem. We define the iterative CS algorithm in Section 2.1, and provide experimental results in Section 4.

Bregman-projection based algorithm

The ℓ0 and ℓ1 norm based cost functions (4) and (5) used in compressive sensing problems are not differentiable everywhere. Therefore, it is not possible to use standard convex optimization algorithms to solve the CS problems in (4) and (5). Besides, as the size of the problem increases, solving these optimization problems becomes computationally demanding. As the original CS problem given in (4) and (5) involves the non-convex ℓ0 and non-smooth ℓ1 cost functions, it cannot be divided into simpler subproblems for convex

Proximal splitting based algorithm

In [35], [36], [37], [38] proximity operators of convex functions and their signal processing applications are reviewed. Various proximal splitting algorithms for convex optimization problems including the forward–backward splitting (FBS) algorithm are also presented. In [39], the proof of convergence of the FBS algorithm, and a framework, which uses Bregman distance function (17) as the cost function, is presented. This framework can be used to solve convex optimization problems involving
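As a concrete illustration of forward–backward splitting, the sketch below applies FBS to the common unconstrained ℓ1-regularized surrogate min_s ½||θs − y||² + λ||s||₁ (our illustration, not the Bregman-distance framework of [39]): a gradient step on the smooth term followed by the proximity operator of λ||·||₁, which is soft thresholding.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs_l1(theta, y, lam=1e-2, n_iter=2000):
    """Forward-backward splitting (ISTA) for 0.5||theta s - y||^2 + lam||s||_1."""
    step = 1.0 / np.linalg.norm(theta, 2) ** 2       # 1/L, L = Lipschitz const of grad
    s = np.zeros(theta.shape[1])
    for _ in range(n_iter):
        grad = theta.T @ (theta @ s - y)             # forward (gradient) step
        s = soft_threshold(s - step * grad, step * lam)  # backward (prox) step
    return s

rng = np.random.default_rng(2)
theta = rng.standard_normal((20, 50)) / np.sqrt(20)
s_true = np.zeros(50)
s_true[[3, 17, 40]] = [1.0, -2.0, 0.5]               # a 3-sparse ground truth
y = theta @ s_true
s_hat = fbs_l1(theta, y)
print("support found:", np.flatnonzero(np.abs(s_hat) > 1e-3))
```

With the step size 1/L the objective decreases monotonically, which is the convergence mechanism reviewed in the works cited above.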

Experimental results

For the validation and testing of the entropic minimization method, experiments with four different one-dimensional (1D) signals and 30 different images are carried out. The cusp signal, which consists of 1024 samples, and the hisine signal, which consists of 256 samples, are shown in Fig. 3, Fig. 4, respectively. The cusp and hisine signals can be sparsely approximated in the DCT domain. The 4-sparse and 25-sparse random signals are composed of 128 and 256 samples, respectively, and they consist of 4

Conclusion

In this article, we introduced an iterative CS reconstruction algorithm that uses an entropy-based cost functional. The proposed functional is convex, continuous and differentiable everywhere, and approximates the ℓ1 norm in the original CS formulation. These properties enable the user to divide large and complex CS problems into smaller and simpler subproblems. We developed a globally convergent algorithm, that solves these

Acknowledgement

This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant number 111E057 (Compressive Sensing Algorithms Using Entropy Functional and Total Variation Constraints).


References (45)

  • T.Y. Rezaii et al.

    Sparsity aware consistent and high precision variable selection

    Signal Image Video Process.

    (2012)
  • R. Chartrand et al.

    Iteratively reweighted algorithms for compressive sensing

  • M. Kowalski et al.

    Sparsity and persistence: mixed norms provide simple signal models with dependent coefficients

    Signal Image Video Process.

    (2009)
  • M. Ehler

    Shrinkage rules for variational minimization problems and applications to analytical ultracentrifugation

    J. Inverse Ill-Posed Probl.

    (2011)
  • Kristian Bredies et al.

    Minimization of non-smooth, non-convex functionals by iterative thresholding

  • A. Achim et al.

Compressive sensing for ultrasound RF echoes using α-stable distributions

  • G. Tzagkarakis et al.

    Greedy sparse reconstruction of non-negative signals using symmetric alpha-stable distributions

  • S. Ji et al.

    Bayesian compressive sensing

    IEEE Trans. Signal Process.

    (June 2008)
  • Y. Censor et al.

    An iterative row-action method for interval convex programming

    J. Optim. Theory Appl.

    (1981)
  • S. Jafarpour et al.

    A game theoretic approach to expander-based compressive sensing

  • S. Jafarpour et al.

    Compressive sensing meets game theory

  • A.E. Cetin

    An iterative algorithm for signal reconstruction from bispectrum

    IEEE Trans. Signal Process.

    (December 1991)

    Kivanc Kose received his B.Sc., M.Sc., Ph.D. degrees in the Electrical and Electronics Engineering from Bilkent University. Since December 2012, he has been working as a post-doctoral research fellow at Dermatology Service in Memorial Sloan Kettering Cancer Center, New York, USA. He has been serving as an associate editor for Signal, Image and Video Processing Journal since August 2013. His research interests are multimedia signal analysis, computer vision, machine learning, and their applications in various fields such as medical imaging.

    Osman Gunay received his B.Sc. and M.S. degrees in Electrical and Electronics Engineering from Bilkent University, Ankara, Turkey. He is currently working towards his Ph.D. in Electrical and Electronics Engineering at Bilkent University. His research interests include image analysis, computer vision, video segmentation, dynamic texture recognition and image retrieval.

    A. Enis Cetin got his B.Sc. degree from Electrical Engineering at the Middle East Technical University; and got his M.S.E. and Ph.D. degrees in Systems Engineering from the Moore School of Electrical Engineering at the University of Pennsylvania in Philadelphia. Between 1987–1989, he was Assistant Professor of Electrical Engineering at the University of Toronto, Canada. Since then he has been with Bilkent University, Ankara, Turkey. Currently he is a full professor. He has involved in more than ten EU and Nationally funded research projects as a researcher and/or project coordinator. He was in the editorial boards of IEEE Transactions on Image Processing, and several EURASIP journals. He has been the Editor in Chief of Signal, Image and Video Processing journal since July 2013. His research interests are Multimedia Signal Analysis and Applications, Computer Vision, Biomedical Image Processing, Human-Computer Interaction using vision and speech, Speech Processing, Digital Coding of Waveforms (Image, Video, Speech, and Biomedical Signals), Adaptive Filtering and Adaptive Subband Coding, Time Series Analysis and Stochastic Processes.
