
## About this Book

At the intersection of mathematics, engineering, and computer science sits the thriving field of compressive sensing. Based on the premise that data acquisition and compression can be performed simultaneously, compressive sensing finds applications in imaging, signal processing, and many other domains. In the areas of applied mathematics, electrical engineering, and theoretical computer science, an explosion of research activity has already followed the theoretical results that highlighted the efficiency of the basic principles. The elegant ideas behind these principles are also of independent interest to pure mathematicians.

A Mathematical Introduction to Compressive Sensing gives a detailed account of the core theory upon which the field is built. With only moderate prerequisites, it is an excellent textbook for graduate courses in mathematics, engineering, and computer science. It also serves as a reliable resource for practitioners and researchers in these disciplines who want to acquire a careful understanding of the subject.

## Table of Contents

### Chapter 1. An Invitation to Compressive Sensing

Abstract
This first chapter formulates the objectives of compressive sensing. It introduces the standard compressive sensing problem studied throughout the book and reveals its ubiquity in many concrete situations by providing a selection of motivations, applications, and extensions of the theory. It concludes with an overview of the book that summarizes the content of each of the following chapters.
Simon Foucart, Holger Rauhut

### Chapter 2. Sparse Solutions of Underdetermined Systems

Abstract
The notions of sparsity and compressibility are formally defined in this chapter, and some useful inequalities are established along the way. Then the question about the minimal number of linear equations to solve underdetermined systems admitting s-sparse solutions is answered. This is equivalent to the question about the minimal number of linear measurements to recover s-sparse vectors via ℓ0-minimization, which is shown to be NP-hard in general. A practical (but unstable) procedure to recover all s-sparse vectors using the minimal number of Fourier measurements is also presented.
Simon Foucart, Holger Rauhut
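The combinatorial character of ℓ0-minimization described above can be made concrete with a small sketch (our own illustrative code, not from the book): the sparsest solution of an underdetermined system is found by exhaustive search over candidate supports, whose number grows exponentially in the ambient dimension.

```python
import itertools
import numpy as np

def l0_min_bruteforce(A, y, s_max):
    """Search for the sparsest x with A @ x = y by trying every
    candidate support, smallest first. The number of supports grows
    exponentially in n, which illustrates why l0-minimization is
    intractable in general."""
    m, n = A.shape
    for s in range(1, s_max + 1):
        for support in itertools.combinations(range(n), s):
            cols = A[:, list(support)]
            x_s, *_ = np.linalg.lstsq(cols, y, rcond=None)
            if np.allclose(cols @ x_s, y):
                x = np.zeros(n)
                x[list(support)] = x_s
                return x
    return None
```

With m = 2s generic measurements this search succeeds, matching the chapter's answer to the minimal-measurement question, but it is unusable at realistic sizes.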

### Chapter 3. Basic Algorithms

Abstract
This chapter outlines several sparse reconstruction techniques analyzed throughout the book. More precisely, we present optimization methods, greedy methods, and thresholding-based methods. In each case, only intuition and basic facts about the algorithms are provided at this point.
Simon Foucart, Holger Rauhut
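As a taste of the greedy family, here is a minimal sketch of orthogonal matching pursuit (our own simplified implementation, with illustrative sizes — not the book's pseudocode): each step selects the column most correlated with the current residual, then re-fits by least squares on the enlarged support.

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: each iteration adds the column
    most correlated with the residual to the support, then re-fits
    the coefficients by least squares on that support."""
    n = A.shape[1]
    support = []
    residual = y.astype(float)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x
```

The re-fitting step makes the residual orthogonal to all selected columns, so no column is chosen twice.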

### Chapter 4. Basis Pursuit

Abstract
This chapter exclusively considers the recovery of sparse vectors via ℓ1-minimization, also known as basis pursuit. The idealized situation is investigated first, and the success of the recovery of all sparse vectors is shown to be equivalent to the null space property of the measurement matrix. In the realistic situation where sparse vectors are replaced by compressible vectors and where measurement errors occur, the analysis is extended by way of the stable and robust null space properties. Several conditions for the success of the recovery of an individual vector are given next. This is then interpreted geometrically as the survival of a face of the cross-polytope under projection. The chapter ends by pointing out the analogy with the recovery of low-rank matrices.
Simon Foucart, Holger Rauhut
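Equality-constrained ℓ1-minimization can be recast as a linear program by splitting x into its positive and negative parts — a sketch using `scipy.optimize.linprog` (our own code; sizes in the test are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to A x = y, solved as the linear program
    min sum(u) + sum(v) subject to A(u - v) = y, u >= 0, v >= 0,
    and then recombined as x = u - v."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]
```

At the optimum, at most one of u_i, v_i is nonzero for each i, so sum(u) + sum(v) equals the ℓ1-norm of x = u − v.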

### Chapter 5. Coherence

Abstract
The coherence is a simple tool to assess the quality of a measurement matrix for sparse recovery. After the coherence and the ℓ1-coherence function are defined, a lower bound on these quantities is derived. A deterministic matrix almost meeting the lower bound on the coherence is exhibited next. Finally, it is shown that if the ℓ1-coherence function (or simply the coherence) of a measurement matrix is small enough, then basis pursuit, orthogonal matching pursuit, and hard thresholding pursuit are all guaranteed to recover every sparse vector from its measurement vector.
Simon Foucart, Holger Rauhut
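The coherence itself is cheap to compute — the largest absolute inner product between distinct ℓ2-normalized columns. A quick sketch (our own helper function):

```python
import numpy as np

def coherence(A):
    """Coherence of A: the maximal absolute inner product between
    two distinct l2-normalized columns."""
    B = A / np.linalg.norm(A, axis=0)   # normalize the columns
    G = np.abs(B.T @ B)                 # absolute Gram matrix
    np.fill_diagonal(G, 0.0)            # ignore self inner products
    return G.max()
```

An orthonormal basis has coherence 0, while any matrix with a repeated column direction has coherence 1; the chapter's lower bound shows how small the coherence of an m×n matrix can possibly be.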

### Chapter 6. Restricted Isometry Property

Abstract
This chapter introduces the concept of restricted isometry constants. This is a more powerful tool than the less involved notion of coherence to assess the quality of a measurement matrix for sparse recovery. Some basic properties of the restricted isometry constants and of the related restricted orthogonality constants are presented first as well as relations between them. It is then established that small restricted isometry constants enable stable and robust recovery of sparse vectors via a variety of methods: ℓ1-minimization, thresholding-based algorithms, and greedy algorithms.
Simon Foucart, Holger Rauhut
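For small matrices the restricted isometry constant δ_s can be computed exactly by scanning every s-column submatrix — a sketch (our own code; the cost is exponential in n, which is why the book instead estimates δ_s probabilistically):

```python
import itertools
import numpy as np

def rip_constant(A, s):
    """Exact delta_s: the smallest delta such that
    (1 - delta)||x||_2^2 <= ||A x||_2^2 <= (1 + delta)||x||_2^2
    holds for all s-sparse x, obtained from the extreme eigenvalues
    of A_S^T A_S over every support S of size s."""
    n = A.shape[1]
    delta = 0.0
    for S in itertools.combinations(range(n), s):
        cols = A[:, list(S)]
        eigs = np.linalg.eigvalsh(cols.T @ cols)   # ascending order
        delta = max(delta, 1.0 - eigs[0], eigs[-1] - 1.0)
    return delta
```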

### Chapter 7. Basic Tools from Probability Theory

Abstract
This chapter provides the background on probability theory required to grasp most of the theoretical aspects involving random measurement matrices. Basic facts about random variables are recalled and the close connection between moments and tails is highlighted. After that, several deviation inequalities for sums of independent subgaussian random variables are established.
Simon Foucart, Holger Rauhut
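The flavor of these deviation inequalities is easy to check numerically. For a sum S_n of n independent Rademacher (±1) variables, Hoeffding's inequality gives P(|S_n| ≥ t) ≤ 2 exp(−t²/(2n)); a self-contained simulation (our own, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, t = 100, 10_000, 20.0
# Many independent Rademacher sums S_n of length n
S = rng.choice([-1.0, 1.0], size=(trials, n)).sum(axis=1)
empirical_tail = np.mean(np.abs(S) >= t)
hoeffding_bound = 2 * np.exp(-t**2 / (2 * n))
# The empirical tail probability sits below the subgaussian bound
```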

### Chapter 8. Advanced Tools from Probability Theory

Abstract
This chapter is dedicated to advanced probability results required in the more involved arguments on random measurement matrices, notably precise bounds for Gaussian random matrices and the analysis of random partial Fourier matrices. First, norms of Gaussian vectors in expectation are discussed, followed by Rademacher sums and the symmetrization principle. Khintchine inequalities provide bounds for moments of Rademacher sums. Then decoupling inequalities are covered. They allow one to simplify the analysis of double sums of random variables by replacing one instance of a random vector by an independent copy. The noncommutative Bernstein inequality treated next bounds the tail of a sum of independent random matrices in the operator norm. Dudley’s inequality bounds the supremum of a subgaussian process by an integral over covering numbers with respect to the index set of the process. Slepian’s and Gordon’s lemmas compare certain functions of Gaussian vectors in terms of their covariance structures. Together with the concentration of measure principle for Lipschitz functions of Gaussian vectors covered next, they provide powerful tools for the analysis of Gaussian random matrices. Finally, the chapter discusses Talagrand’s inequality, i.e., a Bernstein-type inequality for suprema of empirical processes.
Simon Foucart, Holger Rauhut

### Chapter 9. Sparse Recovery with Random Matrices

Abstract
In this chapter, the restricted isometry property, which guarantees the uniform recovery of sparse vectors via a variety of methods, is proved to hold with high probability for subgaussian random matrices provided the number of rows (i.e., measurements) scales like the sparsity times a logarithmic factor. For Gaussian matrices, precise estimates for the required number of measurements (including optimal or at least small values of the constants) are given both in the setting of nonuniform recovery and of uniform recovery. In the latter case, this is first done via an estimate of the restricted isometry constants and then directly through the null space property. Finally, a close relation between the restricted isometry property and the Johnson–Lindenstrauss lemma is uncovered.
Simon Foucart, Holger Rauhut
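The concentration phenomenon behind these results can be observed directly: for a fixed sparse x and a Gaussian matrix A with entries scaled by 1/√m, the ratio ‖Ax‖₂/‖x‖₂ concentrates tightly around 1. A small simulation (our own, with illustrative sizes; only the s columns meeting the support are generated):

```python
import numpy as np

rng = np.random.default_rng(0)
m, s, trials = 200, 5, 1000
x_s = rng.standard_normal(s)     # nonzero part of a fixed sparse x
ratios = np.empty(trials)
for i in range(trials):
    # Columns of a fresh Gaussian matrix hitting the support of x
    A_s = rng.standard_normal((m, s)) / np.sqrt(m)
    ratios[i] = np.linalg.norm(A_s @ x_s) / np.linalg.norm(x_s)
# ratios cluster around 1, with fluctuations on the order 1/sqrt(2m)
```

Making this concentration uniform over all s-sparse vectors, via a covering argument, is exactly what yields the restricted isometry property with m on the order of s times a logarithmic factor.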

### Chapter 10. Gelfand Widths of ℓ1-Balls

Abstract
This chapter makes a detour to the geometry of Banach spaces. First, it highlights a strong connection between compressive sensing and Gelfand widths, with implications on the minimal number of measurements needed for stable sparse recovery with an arbitrary measurement matrix. Then two-sided estimates for the Gelfand widths of ℓ1-balls are established, as well as two-sided estimates for Kolmogorov widths. Finally, compressive sensing techniques are applied to give a proof of Kashin’s decomposition theorem.
Simon Foucart, Holger Rauhut

### Chapter 11. Instance Optimality and Quotient Property

Abstract
This chapter introduces the notion of instance optimality. While uniform ℓ1-instance optimality is a practical concept in compressive sensing, uniform ℓ2-instance optimality is shown not to be. A new property, called quotient property, is then developed to analyze measurement–reconstruction schemes. This property of the measurement matrix, coupled with equality-constrained ℓ1-minimization, guarantees robustness of the scheme under measurement errors as well as nonuniform ℓ2-instance optimality. The quotient property is proved to hold with high probability for Gaussian matrices and, modulo a slight modification, for subgaussian matrices.
Simon Foucart, Holger Rauhut

### Chapter 12. Random Sampling in Bounded Orthonormal Systems

Abstract
This chapter considers the recovery of signals with a sparse expansion in a bounded orthonormal system. After an inventory of such bounded orthonormal systems including the Fourier systems, theoretical limitations specific to this situation are obtained for the minimal number of samples. Then, using this number of random samples, nonuniform sparse recovery is proved to be possible via ℓ1-minimization. Next, using a slightly higher number of random samples, uniform sparse recovery is also proved to be possible via a variety of algorithms. This is derived via establishing the restricted isometry property for the associated random sampling matrix—the random partial Fourier matrix is a particular case. Finally, a connection to the Λ1-problem is pointed out.
Simon Foucart, Holger Rauhut
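The random partial Fourier matrix mentioned above is easy to construct explicitly: select m rows of the n×n discrete Fourier matrix uniformly at random and rescale. A sketch (our own code, using the convention with entries of modulus 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16
rows = rng.choice(n, size=m, replace=False)    # random sampling locations
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n)   # n x n DFT matrix
A = F[rows] / np.sqrt(m)                       # random partial Fourier matrix
```

Every entry of F has modulus 1 — the uniform bound in the definition of a bounded orthonormal system — and after rescaling every column of A has unit ℓ2-norm.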

### Chapter 13. Lossless Expanders in Compressive Sensing

Abstract
A new class of measurement matrices is introduced in this chapter. It consists of adjacency matrices of certain left-regular bipartite graphs called lossless expanders. After pointing out basic properties of lossless expanders, their existence is established using probabilistic methods. The adjacency matrices are then proved to satisfy the null space property and hence to enable sparse recovery via basis pursuit. Sparse recovery is also shown to be achievable via a variation of iterative hard thresholding. Finally, a rudimentary example of another type of algorithm, one that runs in sublinear time, is presented.
Simon Foucart, Holger Rauhut

### Chapter 14. Recovery of Random Signals using Deterministic Matrices

Abstract
In this chapter, the traditional point of view is reversed in the sense that the measurement matrix is a fixed matrix with small coherence and that the randomness lies in the support and the sign pattern of the sparse vector to be recovered. First, the submatrix whose columns are indexed by the random support is proved to be well-conditioned. This is then used to achieve recovery of random sparse vectors via ℓ1-minimization under a familiar condition on the number of measurements.
Simon Foucart, Holger Rauhut

### Chapter 15. Algorithms for ℓ1-Minimization

Abstract
This chapter presents a selection of three algorithms designed specifically to compute solutions of ℓ1-minimization problems. The algorithms, chosen with simplicity of analysis and diversity of techniques in mind, are the homotopy method, Chambolle and Pock’s primal–dual algorithm, and the iteratively reweighted least squares algorithm. Other algorithms are also mentioned but discussed in less detail.
Simon Foucart, Holger Rauhut
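Of the three, iteratively reweighted least squares is the quickest to sketch: each iteration solves a weighted least squares problem in closed form, with weights taken from the current iterate. The version below is our own simplified variant with a fixed smoothing parameter ε; a careful implementation would decrease ε across iterations.

```python
import numpy as np

def irls_l1(A, y, iters=100, eps=1e-8):
    """Approximate min ||x||_1 s.t. A x = y by iteratively
    reweighted least squares: with D = diag(|x| + eps), the
    weighted problem min sum x_i^2 / (|x_i| + eps) s.t. A x = y
    has the closed-form solution x = D A^T (A D A^T)^{-1} y."""
    x = np.ones(A.shape[1])
    for _ in range(iters):
        D = np.diag(np.abs(x) + eps)
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
    return x
```

Entries with small current magnitude receive small weights and are pushed further toward zero, which is how the ℓ2-type iterations mimic the ℓ1 objective.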

### Backmatter

Further Information