
About this Book

The chapters in this volume highlight the state of the art of compressed sensing and are based on talks given at the third international MATHEON conference on the same topic, held December 4–8, 2017, at the Technical University of Berlin. In addition to methods in compressed sensing, the chapters provide insights into cutting-edge applications of deep learning in data science, highlighting the overlapping ideas and methods that connect the fields of compressed sensing and deep learning. Specific topics covered include:

- Quantized compressed sensing
- Classification
- Machine learning
- Oracle inequalities
- Non-convex optimization
- Image reconstruction
- Statistical learning theory

This volume will be a valuable resource for graduate students and researchers in mathematics, computer science, and engineering, as well as for other applied scientists exploring potential applications of compressed sensing.



An Introduction to Compressed Sensing

Compressed sensing and many research activities associated with it can be seen as a framework for signal processing of low-complexity structures. A cornerstone of the underlying theory is the study of inverse problems with linear or nonlinear measurements. Whether it is sparsity, low-rankness, or another familiar notion of low complexity, the theory addresses necessary and sufficient conditions on the measurement process to guarantee signal reconstruction with efficient algorithms. This includes consideration of robustness to measurement noise and stability with respect to signal model inaccuracies. This introduction aims to provide an overview of some of the most important results in this direction. After discussing various examples of low-complexity signal models, two approaches to linear inverse problems are introduced which, respectively, focus on the recovery of individual signals and the recovery of all low-complexity signals simultaneously. In particular, we focus on the former setting, which gives rise to so-called nonuniform signal recovery problems. We discuss different necessary and sufficient conditions for stable and robust signal reconstruction using convex optimization methods. Appealing to concepts from non-asymptotic random matrix theory, we outline how certain classes of random sensing matrices, which fully govern the measurement process, satisfy such sufficient conditions for signal recovery. Finally, we review some of the most prominent algorithms for signal recovery proposed in the literature.
Niklas Koep, Arash Behboodi, Rudolf Mathar
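The convex recovery methods surveyed in this chapter can be illustrated with a small numerical sketch. The following toy example (not taken from the chapter; all dimensions and parameter values are illustrative) recovers a sparse vector from Gaussian measurements via the iterative soft-thresholding algorithm (ISTA) for the ℓ1-regularized least-squares problem:

```python
import numpy as np

def ista(A, y, lam=0.01, steps=2000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return x

rng = np.random.default_rng(0)
n, m, s = 200, 80, 5                           # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = np.sign(rng.standard_normal(s))
A = rng.standard_normal((m, n)) / np.sqrt(m)   # normalized Gaussian sensing matrix
y = A @ x_true                                 # noiseless linear measurements
x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

The normalized Gaussian matrix used here is one of the random ensembles for which the chapter outlines recovery guarantees.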

Quantized Compressed Sensing: A Survey

The field of quantized compressed sensing investigates how to jointly design a measurement matrix, quantizer, and reconstruction algorithm in order to accurately reconstruct low-complexity signals from a minimal number of measurements that are quantized to a finite number of bits. In this short survey, we give an overview of the state-of-the-art rigorous reconstruction results that have been obtained for three popular quantization models: one-bit quantization, uniform scalar quantization, and noise-shaping methods.
Sjoerd Dirksen
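As a toy illustration of the one-bit model treated in the survey, the following sketch (illustrative parameters, not code from the survey) takes sign measurements of a unit-norm sparse vector and estimates its direction with a simple linear estimator followed by hard thresholding; note that one-bit measurements can only identify the direction of the signal, not its magnitude:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 100, 2000, 4
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)                 # one-bit CS recovers the direction only

A = rng.standard_normal((m, n))
y = np.sign(A @ x)                     # one-bit (sign) quantized measurements

x_lin = A.T @ y / m                    # simple linear direction estimate
x_hat = np.zeros(n)
top = np.argsort(-np.abs(x_lin))[:s]   # hard-threshold to the sparsity level
x_hat[top] = x_lin[top]
x_hat /= np.linalg.norm(x_hat)

print(x_hat @ x)                       # correlation with the true direction
```

More refined estimators with sharper error rates are the subject of the survey; this linear estimate is only the simplest baseline.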

On Reconstructing Functions from Binary Measurements

We consider the problem of reconstructing a function from linear binary measurements. That is, the samples of the function are given by inner products with functions taking only the values 0 and 1. We consider three particular methods for this problem: the parameterized-background data-weak (PBDW) method, generalized sampling, and infinite-dimensional compressed sensing. The first two methods depend on knowing the stable sampling rate when considering sampling with Walsh functions and reconstruction with wavelets. We establish linearity of the stable sampling rate, which is sharp, allowing for optimal use of these methods. In addition, we provide recovery guarantees for infinite-dimensional compressed sensing with Walsh functions and wavelets.
Robert Calderbank, Anders Hansen, Bogdan Roman, Laura Thesing
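A minimal finite-dimensional sketch of the binary-measurement model uses 0/1-valued masks obtained by shifting the rows of a Hadamard (Walsh-type) matrix; this toy example is a deliberate simplification of the infinite-dimensional setting treated in the chapter, with all sizes chosen for illustration:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 64
H = hadamard(N)
B = (1 + H) / 2                 # binary measurement functions taking values 0 and 1

rng = np.random.default_rng(2)
f = rng.standard_normal(N)      # discretized function to be sampled

y = B @ f                       # inner products with the 0/1 masks
total = y[0]                    # first row of B is all ones, so y[0] = sum of f
c = 2 * y - total               # recover the +/-1 Walsh-Hadamard coefficients H @ f
f_rec = H.T @ c / N             # invert the orthogonal transform (H H^T = N I)

print(np.allclose(f_rec, f))    # True
```

Because the shifted masks differ from the Hadamard rows only by the all-ones measurement, the full set of binary samples determines the function exactly; the interesting regime in the chapter is, of course, when only finitely many such samples of an infinite-dimensional object are available.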

Classification Scheme for Binary Data with Extensions

In this chapter, we present a simple classification scheme that utilizes only 1-bit measurements of the training and testing data. Our method is intended to be efficient in terms of computation and storage while also allowing for a rigorous mathematical analysis. After providing some motivation, we present our method and analyze its performance for a simple data model. We also discuss extensions of the method to the hierarchical data setting, and include some further implementation considerations. Experimental evidence provided in this chapter demonstrates that our methods yield accurate classification on a variety of synthetic and real data.
Denali Molitor, Deanna Needell, Aaron Nelson, Rayan Saab, Palina Salanevich
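To give a flavor of classification from 1-bit data, the following toy sketch classifies points by correlating their sign codes with per-class centroid codes. This is an illustrative scheme in the spirit of the chapter, not the authors' exact method; the data model and all parameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 20, 300                              # data dimension, number of 1-bit measurements
A = rng.standard_normal((m, d))             # shared random projection

def one_bit(X):
    return np.sign(X @ A.T)                 # 1-bit codes, one row per sample

# two well-separated synthetic classes
mu0, mu1 = np.full(d, 1.0), np.full(d, -1.0)
X_train = np.vstack([mu0 + 0.5 * rng.standard_normal((50, d)),
                     mu1 + 0.5 * rng.standard_normal((50, d))])
labels = np.array([0] * 50 + [1] * 50)

codes = one_bit(X_train)
centroid = np.array([codes[labels == k].mean(axis=0) for k in (0, 1)])

def classify(x):
    c = np.sign(x @ A.T)                    # 1-bit code of the test point
    return int(np.argmax(centroid @ c))     # best-correlated class centroid

x_test = mu1 + 0.5 * rng.standard_normal(d)
print(classify(x_test))                     # class 1
```

Only the sign codes of the training and test data are stored, which is the storage advantage the abstract alludes to.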

Generalization Error in Deep Learning

Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, the source of their generalization ability remains generally unclear. Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data. In this chapter, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.
Daniel Jakubovitz, Raja Giryes, Miguel R. D. Rodrigues

Deep Learning for Trivial Inverse Problems

Deep learning produces most remarkable results when applied to some of the toughest large-scale nonlinear problems, such as classification tasks in computer vision or speech recognition. Recently, deep learning has also been applied to inverse problems, in particular in medical imaging. Some of these applications are motivated by mathematical reasoning, but a solid and at least partially complete mathematical theory for understanding neural networks and deep learning is missing. In this paper, we do not address large-scale problems but aim at understanding neural networks for solving some small and rather naive inverse problems. Nevertheless, the results of this paper highlight the particular complications of inverse problems; e.g., we show that a natural network design for mimicking Tikhonov regularization fails when applied to even the most trivial inverse problems. The proofs of this paper utilize basic and well-known results from the theory of statistical inverse problems. We include the proofs in order to provide material ready to be used in student projects or general mathematical courses on data analysis. We only assume that the reader is familiar with the standard definitions of feedforward networks, e.g., the backpropagation algorithm for training such networks. We also include, without proof, numerical experiments for analyzing the influence of the network design, which include comparisons with the learned iterative soft-thresholding algorithm (LISTA).
Peter Maass
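The classical Tikhonov regularization that the network design in this paper mimics can be sketched in a few lines; the forward operator below is an assumed smoothing (convolution-type) matrix chosen only for illustration, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
# mildly ill-posed forward operator: a Gaussian smoothing (convolution-type) matrix
A = np.array([[np.exp(-0.1 * (i - j) ** 2) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, 3 * np.pi, n))    # smooth ground truth
y = A @ x_true + 0.01 * rng.standard_normal(n)   # noisy data

lam = 1e-2
# Tikhonov solution: minimizer of ||Ax - y||^2 + lam * ||x||^2
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print(np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true))
```

The regularization parameter trades the bias introduced by the penalty against the amplification of the measurement noise; the paper asks whether a network trained end-to-end can learn this mapping from data, and shows that a naive design fails.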

Oracle Inequalities for Local and Global Empirical Risk Minimizers

The aim of this chapter is to provide an overview of general frameworks used to derive (sharp) oracle inequalities. Two extensions of a general theory for convex norm penalized empirical risk minimizers are summarized. The first one is for convex nondifferentiable loss functions. The second is for nonconvex differentiable loss functions. Theoretical understanding is required for the growing number of algorithms in statistics, machine learning, and, more recently, deep learning that are based on (combinations of) these types of loss functions. To motivate the importance of oracle inequalities, the problem of model misspecification in the linear model is first discussed. Then, the sharp oracle inequalities are stated. Finally, we show how to apply the general theory to problems from regression, classification, and dimension reduction.
Andreas Elsener, Sara van de Geer

Median-Truncated Gradient Descent: A Robust and Scalable Nonconvex Approach for Signal Estimation

Recent work has demonstrated the effectiveness of gradient descent for directly estimating high-dimensional signals via nonconvex optimization in a globally convergent manner using a proper initialization. However, its performance is highly sensitive to adversarial outliers that may take arbitrary values. In this chapter, we introduce the median-Truncated Gradient Descent (median-TGD) algorithm to improve the robustness of gradient descent against outliers, and apply it to two celebrated problems: low-rank matrix recovery and phase retrieval. Median-TGD truncates the contributions of samples that deviate significantly from the sample median in each iteration in order to stabilize the search direction. Encouragingly, when initialized in a neighborhood of the ground truth known as the basin of attraction, median-TGD converges to the ground truth at a linear rate under Gaussian designs with a near-optimal number of measurements, even when a constant fraction of the measurements is arbitrarily corrupted. In addition, we introduce a new median-truncated spectral method that ensures an initialization in the basin of attraction. Stability against additional dense bounded noise is also established. Numerical experiments are provided to validate the superior performance of median-TGD.
Yuejie Chi, Yuanxin Li, Huishuai Zhang, Yingbin Liang
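The median truncation rule can be illustrated on a simpler problem than those treated in the chapter. The sketch below uses robust linear regression rather than matrix recovery or phase retrieval, with invented parameters: in each gradient step, only samples whose residuals do not exceed a multiple of the median residual contribute, so arbitrarily large outliers are screened out:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 10, 400
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))             # Gaussian design
y = A @ x_true
bad = rng.choice(m, m // 10, replace=False)
y[bad] += 50 * rng.standard_normal(m // 10) # 10% arbitrarily corrupted measurements

x = np.zeros(n)
step = 1.0 / m
for _ in range(300):
    r = A @ x - y
    keep = np.abs(r) <= 3 * np.median(np.abs(r))   # median truncation of outliers
    x = x - step * (A[keep].T @ r[keep])           # gradient step on kept samples

print(np.linalg.norm(x - x_true))           # small estimation error
```

Because the median is insensitive to a constant fraction of arbitrary corruptions, the truncation threshold tracks the scale of the inlier residuals throughout the iterations, which is the mechanism behind the robustness results of the chapter.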

Reconstruction Methods in THz Single-Pixel Imaging

The aim of this paper is to discuss some advanced aspects of image reconstruction in single-pixel cameras, focusing in particular on detectors in the THz regime. We discuss the reconstruction problem from a computational imaging perspective and provide a comparison of the effects of several state-of-the-art regularization techniques. Moreover, we focus on some advanced aspects arising in practice with THz cameras, which lead to nonlinear reconstruction problems: the calibration of the beam, which is reminiscent of the Retinex problem in imaging, and phase retrieval problems. Finally, we provide an outlook on future challenges in the area.
Martin Burger, Janic Föcke, Lukas Nickel, Peter Jung, Sven Augustin
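The single-pixel measurement model can be sketched as follows. This is a toy, oversampled setup with random 0/1 masks and a plain least-squares reconstruction, with all sizes invented for the example; practical systems use far fewer measurements together with regularization techniques of the kind compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
h = w = 16
img = np.zeros((h, w))
img[4:12, 6:10] = 1.0                            # simple rectangular test scene
x = img.ravel()

n, m = h * w, 3 * h * w                          # oversampled for a plain inversion demo
masks = rng.integers(0, 2, size=(m, n)).astype(float)  # random 0/1 spatial light patterns
y = masks @ x + 0.01 * rng.standard_normal(m)    # noisy single-pixel detector readings

x_hat = np.linalg.lstsq(masks, y, rcond=None)[0] # least-squares reconstruction
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

Each measurement is one scalar reading of the scene through one binary mask; compressing the number of masks below the number of pixels is exactly where the regularization and calibration issues discussed in the paper come into play.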
