
2010 | Book

Sparse and Redundant Representations

From Theory to Applications in Signal and Image Processing


About this book

A long long time ago, echoing philosophical and aesthetic principles that existed since antiquity, William of Ockham enounced the principle of parsimony, better known today as Ockham's razor: "Entities should not be multiplied without necessity." This principle enabled scientists to select the "best" physical laws and theories to explain the workings of the Universe, and continued to guide scientific research, leading to beautiful results like the minimal description length approach to statistical inference and the related Kolmogorov complexity approach to pattern recognition. However, notions of complexity and description length are subjective concepts and depend on the language "spoken" when presenting ideas and results. The field of sparse representations, which recently underwent a Big-Bang-like expansion, explicitly deals with the Yin-Yang interplay between the parsimony of descriptions and the "language" or "dictionary" used in them, and it became an extremely exciting area of investigation. It has already yielded a rich crop of mathematically pleasing, deep and beautiful results that quickly translated into a wealth of practical engineering applications. You are holding in your hands the first guide book to Sparseland, and I am sure you'll find in it both familiar and new landscapes to see and admire, as well as excellent pointers that will help you find further valuable treasures. Enjoy the journey to Sparseland!

Haifa, Israel, December 2009
Alfred M. Bruckstein

Preface

This book was originally written to serve as the material for an advanced one-semester (fourteen 2-hour lectures) graduate course for engineering students at the Technion, Israel.

Table of Contents

Frontmatter

Sparse and Redundant Representations – Theoretical and Numerical Foundations

Frontmatter
Chapter 1. Prologue
Abstract
A central achievement of classical linear algebra is a thorough examination of the problem of solving systems of linear equations. The results – definite, timeless, and profound – give the subject a completely settled appearance. As linear systems of equations are the core engine in many engineering developments and solutions, much of this knowledge is practically and successfully deployed in applications. Surprisingly, within this well-understood arena there is an elementary problem that has to do with sparse solutions of linear systems, which only recently has been explored in depth; we will see that this problem has surprising answers, and it inspires numerous practical developments. In this chapter we shall concentrate on defining this problem carefully, and set the stage for its answers in later chapters.
Michael Elad
Chapter 2. Uniqueness and Uncertainty
Abstract
We return to the basic problem (P0), which is at the core of our discussion,
$$\left(P_0\right):\quad \min\limits_{\mathbf{x}} \parallel \mathbf{x}\parallel_0 \quad {\rm subject\ to}\quad \mathbf{b}=\mathbf{A}\mathbf{x}.$$
While we shall refer hereafter to this problem as our main goal, we stress that we are quite aware of its two major shortcomings in leading to any practical tool:
1. The equality requirement b = Ax is too strict, as there is little chance for any vector b to be represented exactly by a few columns of A. A better requirement would be one that allows for a small deviation.
2. The sparsity measure is too sensitive to very small entries in x, and a better measure would adopt a more forgiving approach towards such small entries.
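To make the combinatorial nature of (P0) concrete, here is a minimal sketch (not from the book) that solves it exactly by enumerating supports of growing size on a tiny illustrative system; the matrix and vector below are made-up toy values. The exponential cost of this enumeration is precisely what motivates the pursuit algorithms of the later chapters.

```python
import itertools
import numpy as np

def p0_brute_force(A, b, tol=1e-10):
    """Solve (P0) exactly by trying supports of size 0, 1, 2, ...
    For each candidate support, fit the non-zeros by least squares and
    accept the first support that reproduces b. Exponential cost in n."""
    m, n = A.shape
    for k in range(n + 1):
        for support in itertools.combinations(range(n), k):
            x = np.zeros(n)
            if k:
                coef, *_ = np.linalg.lstsq(A[:, list(support)], b, rcond=None)
                x[list(support)] = coef
            if np.linalg.norm(A @ x - b) < tol:
                return x  # sparsest exact solution
    return None

# Toy system: the third column alone already explains b.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 2.0])
x0 = p0_brute_force(A, b)  # 1-sparse solution [0, 0, 2]
```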
Michael Elad
Chapter 3. Pursuit Algorithms – Practice
Abstract
It is now time to consider reliable and efficient methods for solving (P0), as a straightforward approach seems hopeless. We now discuss methods which, it seems, have no hope of working – but which, under specific conditions, will work. Looking at the problem (P0),
$$\left(P_0\right):\quad \min\limits_{\mathbf{x}} \parallel \mathbf{x}\parallel_0 \quad {\rm subject\ to}\quad \mathbf{b}=\mathbf{A}\mathbf{x},$$
one observes that the unknown x is composed of two effective parts to be found – the support of the solution, and the non-zero values over this support. Thus, one way to attack the numerical solution of (P0) is to focus on the support, with the understanding that once the support is found, the non-zero values of x are easily obtained by plain Least-Squares. As the support is discrete in nature, algorithms that seek it are discrete as well. This line of reasoning leads to the family of greedy algorithms that will be presented hereafter.
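The support-then-Least-Squares reasoning above can be sketched as a minimal Orthogonal-Matching-Pursuit-style greedy loop; this is an illustrative toy implementation under the assumption of unit-norm columns, not the book's reference code, and the random test problem below is made up.

```python
import numpy as np

def greedy_pursuit(A, b, k):
    """Greedy sketch: grow the support one column at a time by picking the
    column most correlated with the residual, then refit all non-zero
    values over the current support by plain Least-Squares."""
    m, n = A.shape
    support = []
    x = np.zeros(n)
    r = b.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))  # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        r = b - A @ x  # residual shrinks as the support grows
    return x

# Toy demo: a 2-sparse vector measured through a random 6x10 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 10))
A /= np.linalg.norm(A, axis=0)  # unit-norm columns
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = greedy_pursuit(A, b, k=2)
```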
Michael Elad
Chapter 4. Pursuit Algorithms – Guarantees
Abstract
Assume that the linear system Ax = b has a sparse solution with k0 non-zeros, i.e., \(\parallel \mathbf{x}\parallel_{0}=k_0\). Furthermore, assume that k0 < spark(A)/2. Will matching pursuit or Basis-Pursuit succeed in recovering the sparsest solution? Clearly, such success cannot be expected for all k0 and for all matrices A, since this would conflict with the known NP-hardness of the problem in the general case. However, if the equation actually has a "sufficiently sparse" solution, the success of these algorithms in addressing the original objective (P0) can be guaranteed.
Michael Elad
Chapter 5. From Exact to Approximate Solutions
Abstract
The exact constraint Ax = b is often relaxed, with an approximate equality measured using the quadratic penalty function \(Q\left(\mathbf{x}\right)=\parallel \mathbf{A}\mathbf{x}-\mathbf{b}\parallel^{2}_{2}\). Such relaxation allows us to (i) define a quasi-solution in case no exact solution exists (even in cases where A has more rows than columns); (ii) exploit ideas from optimization theory; (iii) measure the quality of a candidate solution; and more.
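Point (i) above can be seen on a tiny made-up example: an overdetermined, inconsistent system has no exact solution, yet the quasi-solution minimizing Q(x) always exists and is characterized by the normal equations. The numbers below are illustrative, not from the book.

```python
import numpy as np

# Overdetermined, inconsistent system: A has more rows than columns,
# and no x satisfies A x = b exactly.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

# The quasi-solution minimizes Q(x) = ||A x - b||_2^2.
x_ls, residual, rank, sv = np.linalg.lstsq(A, b, rcond=None)

# The minimizer satisfies the normal equations A^T A x = A^T b;
# for this system it is x = [1/3, 1/3], with a strictly positive Q(x).
q_min = np.linalg.norm(A @ x_ls - b) ** 2
```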
Michael Elad
Chapter 6. Iterative-Shrinkage Algorithms
Abstract
In this chapter, our goal is the minimization of a function of the form
$$f\left({\mathbf{x}}\right)=\lambda \mathbf{1}^T \rho{\left({\mathbf{x}}\right)}+\frac{1}{2}\parallel \mathbf{b}-\mathbf{A}\mathbf{x}{\parallel^{2}_{2}},$$
which we have seen before in several forms. The function ρ(x) operates entry-wise on the vector x. As an example, for ρ(x) = |x|^p we get that \(\mathbf{1}^T \rho\left(\mathbf{x}\right)=\parallel \mathbf{x}\parallel^{p}_{p}\), which gives us the freedom to choose any p we see fit. In this chapter we shall keep the discussion general, and allow any “sparsity-promoting” function ρ(·). In particular, the function posed above is a generalized version of what we have defined as \((Q_1^\lambda)\).
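For the special case ρ(x) = |x|, the objective above is the familiar ℓ1-penalized least-squares, and a basic iterative-shrinkage step alternates a gradient step on the quadratic term with entry-wise soft-thresholding. The sketch below is a minimal ISTA-style illustration under that assumption, with made-up toy data, not the book's reference implementation.

```python
import numpy as np

def soft(u, t):
    """Entry-wise soft-thresholding: the shrinkage step for rho(x) = |x|."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize lam*||x||_1 + 0.5*||b - A x||_2^2 by iterative shrinkage:
    gradient step on the quadratic term, then entry-wise shrinkage."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + (A.T @ (b - A @ x)) / L, lam / L)
    return x

# Toy demo with illustrative values.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, 0.0])
lam = 0.1

obj = lambda x: lam * np.abs(x).sum() + 0.5 * np.linalg.norm(b - A @ x) ** 2
x_hat = ista(A, b, lam)
f_hat = obj(x_hat)  # should fall below the objective value at x = 0
```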
Michael Elad
Chapter 7. Towards Average Performance Analysis
Abstract
The analysis presented so far paints a simple but limited portrait of the ability of concrete algorithms to find sparse solutions and near-solutions. In this chapter we briefly point to the interesting and challenging research territory that lies beyond these worst-case results. We start with some simple simulations to motivate this discussion.
Michael Elad
Chapter 8. The Dantzig-Selector Algorithm
Abstract
In this chapter we present an appealing and surprising alternative pursuit algorithm for sparse approximation. This algorithm is competitive with Basis-Pursuit Denoising (BPDN) and OMP. It was proposed in 2007 by Candès and Tao, and termed the Dantzig-Selector (DS). The name chosen pays tribute to George Dantzig, the father of the simplex algorithm that solves Linear Programming (LP) problems. The connection to LP will become evident shortly.
Michael Elad

From Theory to Practice – Signal and Image Processing Applications

Frontmatter
Chapter 9. Sparsity-Seeking Methods in Signal Processing
Abstract
All the previous chapters have shown us that the problem of finding a sparse solution to an underdetermined linear system of equations, or an approximation of it, can be given a meaningful definition and, contrary to expectation, can also be computationally tractable. We now turn to discuss the applicability of these ideas to signal and image processing. As we argue in this chapter, modeling of informative signals is possible using their sparse representation over a well-chosen dictionary. This will give rise to linear systems and their sparse solutions, as dealt with earlier.
Michael Elad
Chapter 10. Image Deblurring – A Case Study
Abstract
In this chapter we present an application of the Sparse-Land model to image deblurring, in order to demonstrate the applicative side of the above-discussed model and algorithms. As we show next, this long-studied problem can be handled quite effectively using the fundamentals of the model with hardly any changes. The content of this chapter follows closely the work by M.A.T. Figueiredo and R.D. Nowak that appeared in ICIP 2005, and a later paper by M. Elad, B. Matalon, and M. Zibulevsky (2007). While there exists more recent work that leads to somewhat improved results, the appeal of this work is the relative simplicity with which near-state-of-the-art results are obtained.
Michael Elad
Chapter 11. MAP versus MMSE Estimation
Abstract
So far we have kept the description of the pursuit algorithms on a deterministic level, as an intuitive optimization procedure. We mentioned in Chapter 9 that these algorithms correspond to an approximation of the Maximum-A-Posteriori-Probability (MAP) estimator, but this connection was not explicitly derived. In this chapter we make this claim exact by defining the quest for sparse representations as an estimation task. As we shall see, this calls for a clear and formal definition of the stochastic model assumed to generate the sparse representation vector. A benefit of such treatment is the ability to derive the Minimum-Mean-Squared-Error (MMSE) estimator as well, and this in turn leads to the need to approximate it. These and more are the topics we cover in this chapter.
Michael Elad
Chapter 12. The Quest for a Dictionary
Abstract
A fundamental ingredient in the definition of Sparse-Land’s signals and its deployment to applications is the dictionary A. How can we wisely choose A to perform well on the signals in question? This is the topic of this chapter, and our emphasis is put on learning methods for dictionaries, based on a group of examples.
Michael Elad
Chapter 13. Image Compression – Facial Images
Abstract
The sparse-representation viewpoint discussed so far, along with dictionary-learning, is merely that – a viewpoint. The theoretical results we have given merely tell us that sparse modeling is, in favorable cases, a mathematically well-founded enterprise with practically useful computational tools. The only way to tell whether sparse modeling works in the real world is to apply it, and see how it performs! We have already seen the use of sparse representation modeling for image deblurring, and the results seem to be promising. In this and the following chapters, we review selected results applying this viewpoint to several image processing tasks.
Michael Elad
Chapter 14. Image Denoising
Abstract
Images often contain noise, which may arise due to sensor imperfection, poor illumination, or communication errors. Removing such noise is of great benefit in many applications, and this may explain the vast interest in this problem and its solution. However, the importance of the image denoising problem goes beyond the evident applications it serves. Being the simplest possible inverse problem, it provides a convenient platform over which image processing ideas and techniques can be tested and perfected. Indeed, numerous contributions in the past 50 years or so address this problem from many and diverse points of view. Statistical estimators of all sorts, spatial adaptive filters, stochastic analysis, partial differential equations, transform-domain methods, splines and other approximation theory methods, morphological analysis, differential geometry, order statistics, and more, are some of the many directions and tools explored in studying this problem.
Michael Elad
Chapter 15. Other Applications
Abstract
This book is not meant to be a comprehensive textbook on image processing, and therefore it is not our intention to show how every familiar application in image processing finds a good use for the Sparse-Land model. Indeed, such a claim would not be true to begin with, as there are image processing problems for which the relation to this model has not been (and perhaps will never be) shown.
Michael Elad
Chapter 16. Epilogue
Abstract
The field of image processing offers an unusually fertile playground for applied mathematicians, where the distance between an abstract mathematical idea and an application, or even a product, may be small. This explains both the presence of so many mathematicians in this field, and the fact that research in it became so mathematically oriented in the past two decades. The activity on sparse and redundant representation modeling is just one manifestation of both these trends.
Michael Elad
Backmatter
Metadata
Title
Sparse and Redundant Representations
Written by
Michael Elad
Copyright year
2010
Publisher
Springer New York
Electronic ISBN
978-1-4419-7011-4
Print ISBN
978-1-4419-7010-7
DOI
https://doi.org/10.1007/978-1-4419-7011-4
