
2011 | Book

Handbook of Mathematical Methods in Imaging


About this Book

The Handbook of Mathematical Methods in Imaging provides a comprehensive treatment of the mathematical techniques used in imaging science. The material is grouped into two central themes, namely, Inverse Problems (Algorithmic Reconstruction) and Signal and Image Processing. Each section within the themes covers applications (modeling), mathematics, numerical methods (using a case example) and open questions. Written by experts in the area, the presentation is mathematically rigorous. The entries are cross-referenced for easy navigation through connected topics. Available in both print and electronic forms, the handbook is enhanced by more than 150 illustrations and an extended bibliography.

It will benefit students, scientists and researchers in applied mathematics. Engineers and computer scientists working in imaging will also find this handbook useful.

Table of Contents

Frontmatter
1. Linear Inverse Problems

This introductory treatment of linear inverse problems is aimed at students and neophytes. An historical survey of inverse problems and some examples of model inverse problems related to imaging are discussed to furnish context and texture to the mathematical theory that follows. The development takes place within the sphere of the theory of compact linear operators on Hilbert space and the singular value decomposition plays an essential role. The primary concern is regularization theory: the construction of convergent well-posed approximations to ill-posed problems. For the most part, the discussion is limited to the familiar regularization method devised by Tikhonov and Phillips.

Charles Groetsch
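The Tikhonov–Phillips method that closes the chapter summary can be sketched in a few lines: for a linear model, the regularized solution of min ||Ax − b||² + α||x||² solves the normal equations (AᵀA + αI)x = Aᵀb. The Gaussian blur operator, noise level, and choice of α below are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Minimal sketch of Tikhonov regularization for a linear inverse problem:
# minimize ||A x - b||^2 + alpha ||x||^2 via (A^T A + alpha I) x = A^T b.
def tikhonov(A, b, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Toy ill-posed problem: recover a signal from its Gaussian blur (a discretized
# compact operator, hence severely ill-conditioned) with a little noise.
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-100.0 * (t[:, None] - t[None, :]) ** 2)  # Gaussian blur matrix
x_true = np.sin(2 * np.pi * t)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-3 * rng.standard_normal(n)       # noisy data

x_alpha = tikhonov(A, b, alpha=1e-3)                 # regularized reconstruction
```

The regularization parameter α trades data fidelity against stability; choosing it from the noise level is the subject of the convergence theory surveyed in the chapter.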
2. Large-Scale Inverse Problems in Imaging

Large-scale inverse problems arise in a variety of significant applications in image processing, and efficient regularization methods are needed to compute meaningful solutions. This chapter surveys three common mathematical models including a linear, a separable nonlinear, and a general nonlinear model. Techniques for regularization and large-scale implementations are considered, with particular focus on algorithms and computations that can exploit structure in the problem. Examples from image deconvolution, multi-frame blind deconvolution, and tomosynthesis illustrate the potential of these algorithms. Much progress has been made in the field of large-scale inverse problems, but many challenges still remain for future research.

Julianne Chung, Sarah Knepper, James G. Nagy
3. Regularization Methods for Ill-Posed Problems

In this chapter, we outline the mathematical theory of direct regularization methods for ill-posed inverse problems that are, in general, nonlinear. One focus is on Tikhonov regularization in Hilbert spaces with quadratic misfit and penalty terms. Moreover, recent results extending the theory to Banach spaces are presented, concerning variational regularization with a convex penalty term. Five examples of parameter identification problems in integral and differential equations are given in order to show how to apply the theory of this chapter to specific inverse and ill-posed problems.

Jin Cheng, Bernd Hofmann
4. Distance Measures and Applications to Multi-Modal Variational Imaging

Today, imaging is rapidly improving through the increased specificity and sensitivity of measurement devices. However, even more diagnostic information can be gained by combining data recorded with different imaging systems.

Christiane Pöschl, Otmar Scherzer
5. Energy Minimization Methods

Energy minimization methods are a very popular tool in image and signal processing. This chapter deals with images defined on a discrete finite set. Energy minimization methods are presented from a nonclassical standpoint: we provide analytical results on their minimizers that reveal salient features of the images recovered in this way, as a function of the shape of the energy itself. The energies under consideration can be differentiable or not, convex or not. Examples and illustrations corroborate the presented results. Applications that benefit from these results are presented as well.

Mila Nikolova
6. Compressive Sensing

Compressive sensing is a new type of sampling theory, which predicts that sparse signals and images can be reconstructed from what was previously believed to be incomplete information. As a main feature, efficient algorithms such as ℓ1-minimization can be used for recovery. The theory has many potential applications in signal processing and imaging. This chapter gives an introduction and overview on both theoretical and numerical aspects of compressive sensing.

Massimo Fornasier, Holger Rauhut
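As a concrete, hedged illustration of the ℓ1-recovery idea, the sketch below runs iterative soft-thresholding (ISTA) on the unconstrained LASSO surrogate min ½||Ax − y||² + λ||x||₁. The random Gaussian measurement matrix, sparsity level, and λ are illustrative choices, not prescriptions from the chapter.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=3000):
    """Iterative soft-thresholding for min_x 0.5||Ax - y||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# A 5-sparse signal in R^100 recovered from only 40 random measurements.
rng = np.random.default_rng(1)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                             # incomplete measurements
x_hat = ista(A, y, lam=1e-4)               # sparse reconstruction
```

With λ driven toward zero, the LASSO solution approximates the ℓ1-minimization (basis pursuit) problem that the compressive sensing theory analyzes.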
7. Duality and Convex Programming

This chapter surveys key concepts in convex duality theory and their application to the analysis and numerical solution of problem archetypes in imaging.

Jonathan M. Borwein, D. Russell Luke
8. EM Algorithms
Charles Byrne, Paul P. B. Eggermont
9. Iterative Solution Methods

This chapter deals with iterative methods for nonlinear ill-posed problems. We present gradient and Newton type methods as well as nonstandard iterative algorithms such as Kaczmarz, expectation maximization, and Bregman iterations. Our intention here is to cite convergence results in the sense of regularization and to provide further references to the literature.

Martin Burger, Barbara Kaltenbacher, Andreas Neubauer
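A minimal sketch of the simplest gradient-type method in this family, the Landweber iteration x_{k+1} = x_k + ω Aᵀ(b − A x_k), stopped early by the discrepancy principle (stop once ||b − A x_k|| ≤ τδ for noise level δ). The diagonal toy operator and parameters are illustrative assumptions.

```python
import numpy as np

def landweber(A, b, omega, delta, tau=1.1, max_iter=10000):
    """Landweber iteration with discrepancy-principle early stopping.

    omega must satisfy 0 < omega < 2 / ||A||^2; delta is the noise level.
    """
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:   # discrepancy principle
            break
        x = x + omega * (A.T @ r)              # gradient step on ||Ax - b||^2
    return x

# Toy ill-conditioned diagonal operator: small singular values amplify noise,
# so the iteration count acts as the regularization parameter.
A = np.diag([1.0, 0.5, 0.1])
x_true = np.array([1.0, 1.0, 1.0])
delta = 1e-3                                   # assumed noise level
b = A @ x_true
x_rec = landweber(A, b, omega=1.0, delta=delta)
```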
10. Level Set Methods for Structural Inversion and Image Reconstruction

In this chapter, an introduction is given to the use of level set techniques for inverse problems and image reconstruction. Several approaches are presented which have been developed and proposed in the literature since the publication of the original (and seminal) paper by F. Santosa in 1996 on this topic. The emphasis of this chapter, however, is not so much on providing an exhaustive overview of all ideas developed so far, but on outlining the general idea of structural inversion by level sets, that is, the reconstruction of complicated images with interfaces from indirectly measured data. As case studies, recent results (in 2D) from microwave breast screening, history matching in reservoir engineering, and crack detection are presented in order to demonstrate the general ideas outlined in this chapter on practically relevant and instructive examples. Various references and suggestions for further research are given as well.

Oliver Dorn, Dominique Lesselier
11. Expansion Methods

The aim of this chapter is to review recent developments in the mathematical and numerical modeling of anomaly detection and multi-physics biomedical imaging. Expansion methods are designed for anomaly detection. They provide robust and accurate reconstruction of the location and of some geometric features of the anomalies, even with moderately noisy data. Asymptotic analysis of the measured data in terms of the size of the unknown anomalies plays a key role in characterizing all the information about the anomaly that can be stably reconstructed from the measured data. In multi-physics imaging approaches, different physical types of waves are combined into one tomographic process to alleviate deficiencies of each separate type of wave, while combining their strengths. Multi-physics systems are capable of high-resolution and high-contrast imaging. Asymptotic analysis plays a key role in multi-physics modalities as well.

Habib Ammari, Hyeonbae Kang
12. Sampling Methods
Martin Hanke, Andreas Kirsch
13. Inverse Scattering

We give a survey of the mathematical basis of inverse scattering theory, concentrating on the case of time-harmonic acoustic waves. After an introduction and historical remarks we give an outline of the direct scattering problem. This is then followed by sections on uniqueness results in inverse scattering theory and iterative and decomposition methods to reconstruct the shape and material properties of the scattering object. We conclude by discussing qualitative methods in inverse scattering theory, in particular the linear sampling method and its use in obtaining lower bounds on the constitutive parameters of the scattering object.

David Colton, Rainer Kress
14. Electrical Impedance Tomography
Andy Adler, Romina Gaburro, William Lionheart
15. Synthetic Aperture Radar Imaging

The purpose of this chapter is to explain the basics of radar imaging and to list a variety of associated open problems. After a short section on the historical background, the chapter includes a derivation of an approximate scalar model for radar data. The basics in Inverse Synthetic-Aperture Radar (ISAR) are discussed, and a connection is made with the Radon transform. Two types of Synthetic-Aperture Radar (SAR), namely spotlight SAR and stripmap SAR, are outlined. Resolution analysis is included for ISAR and spotlight SAR. Some numerical algorithms are discussed. Finally, the chapter ends with a listing of open problems and a bibliography for further reading.

Margaret Cheney, Brett Borden
16. Tomography

We define tomography as the process of producing an image of a distribution (of some physical property) from estimates of its line integrals along a finite number of lines of known locations. We touch upon the computational and mathematical procedures underlying the data collection, image reconstruction, and image display in the practice of tomography. The emphasis is on reconstruction methods, especially the so-called series expansion reconstruction algorithms. We illustrate the use of tomography (including three-dimensional displays based on reconstructions) both in electron microscopy and in x-ray computerized tomography (CT), but concentrate on the latter. This is followed by a classification and discussion of reconstruction algorithms. In particular, we discuss how to evaluate and compare the practical efficacy of such algorithms.

Gabor T. Herman
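The series expansion approach mentioned above can be illustrated by the classical ART (Kaczmarz) iteration, which cycles through the line-integral equations aᵢ·x = pᵢ and projects the current image estimate onto each hyperplane in turn. The tiny 2×2 "image" with row and column sums as rays is purely illustrative, not an example from the chapter.

```python
import numpy as np

def art(A, p, n_sweeps=500, relax=1.0):
    """ART / Kaczmarz: cyclically project x onto the hyperplanes a_i . x = p_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, p_i in zip(A, p):
            x = x + relax * (p_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# 2x2 image [x0 x1; x2 x3]; four "rays": two row sums and two column sums.
A = np.array([[1.0, 1.0, 0.0, 0.0],   # top row
              [0.0, 0.0, 1.0, 1.0],   # bottom row
              [1.0, 0.0, 1.0, 0.0],   # left column
              [0.0, 1.0, 0.0, 1.0]])  # right column
x_true = np.array([1.0, 2.0, 3.0, 4.0])
p = A @ x_true                         # simulated line integrals
x_rec = art(A, p)
```

Starting from zero, the iterates stay in the row space of A, so for this consistent toy system ART converges to the minimum-norm solution, which here coincides with the original image.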
17. Optical Imaging

This chapter discusses diffuse optical tomography. We present the origins of this method in terms of spectroscopic analysis of tissue using near-infrared light and its extension to an imaging modality. Models for light propagation at the macroscopic and mesoscopic scale are developed from the radiative transfer equation (RTE). Both time and frequency domain systems are discussed. Some formal results based on Green’s function models are presented, and numerical methods are described based on discrete finite element method (FEM) models and a Bayesian framework for image reconstruction. Finally, some open questions are discussed.

Simon R. Arridge, Jari P. Kaipio, Ville Kolehmainen, Tanja Tarvainen
18. Photoacoustic and Thermoacoustic Tomography: Image Formation Principles

Photoacoustic tomography (PAT), also known as thermoacoustic or optoacoustic tomography, is a rapidly emerging imaging technique that holds great promise for biomedical imaging. PAT is a hybrid imaging technique, and can be viewed either as an ultrasound mediated electromagnetic modality or an ultrasound modality that exploits electromagnetic-enhanced image contrast. In this chapter, we provide a review of the underlying imaging physics and contrast mechanisms in PAT. Additionally, the imaging models that relate the measured photoacoustic wavefields to the sought-after optical absorption distribution are described in their continuous and discrete forms. The basic principles of image reconstruction from discrete measurement data are presented, which includes a review of methods for modeling the measurement system response.

Kun Wang, Mark A. Anastasio
19. Mathematics of Photoacoustic and Thermoacoustic Tomography

The chapter surveys the mathematical models, problems, and algorithms of thermoacoustic tomography (TAT) and photoacoustic tomography (PAT). TAT and PAT are probably the most developed of the several novel "hybrid" methods of medical imaging. These new modalities combine different physical types of waves (electromagnetic and acoustic in the case of TAT and PAT) in such a way that the resolution and contrast of the resulting method are much higher than those achievable using only acoustic or electromagnetic measurements.

Peter Kuchment, Leonid Kunyansky
20. Wave Phenomena

This chapter discusses imaging methods related to wave phenomena, and in particular, inverse problems for the wave equation will be considered. The first part of the chapter explains the boundary control method for determining a wave speed of a medium from the response operator, which models boundary measurements. The second part discusses the scattering relation and travel times, which are different types of boundary data contained in the response operator. The third part gives a brief introduction to curvelets in wave imaging for media with nonsmooth wave speeds. The focus will be on theoretical results and methods.

Matti Lassas, Mikko Salo, Gunther Uhlmann
21. Statistical Methods in Imaging

The theme of this chapter is statistical methods in imaging, with a marked emphasis on the Bayesian perspective. The application of statistical notions and techniques in imaging requires that images and the available data are redefined in terms of random variables, the genesis and interpretation of randomness playing a major role in deciding whether the approach will follow frequentist or Bayesian guidelines. The discussion of image formation from indirect information, which may come from non-imaging modalities, is coupled with an overview of how statistics can be used to overcome the hurdles posed by the inherent ill-posedness of the problem. The statistical counterpart to classical inverse problems and regularization approaches to contain the potentially disastrous effects of ill-posedness is the extraction and implementation of complementary information in imaging algorithms. The difficulty of expressing qualitative and uncertain notions about the imaging problem at hand in quantitative terms, which is a major challenge in a deterministic context, can be more easily overcome once the problem is expressed in probabilistic terms. An outline of how to translate some typical qualitative traits into a format which can be utilized by statistical imaging algorithms is presented. In line with the Bayesian paradigm favored in this chapter, basic principles for the construction of priors and likelihoods are presented, together with a discussion of numerous computational statistics algorithms, including Maximum Likelihood estimators, Maximum A Posteriori and Conditional Mean estimators, Expectation Maximization, Markov chain Monte Carlo, and hierarchical Bayesian models. Rather than aiming to be a comprehensive survey, the present chapter hopes to convey a wide and opinionated overview of statistical methods in imaging.

Daniela Calvetti, Erkki Somersalo
22. Supervised Learning by Support Vector Machines

During the last two decades, support vector machine learning has become a very active field of research with a large amount of both sophisticated theoretical results and exciting real-world applications. This chapter gives a brief introduction to the basic concepts of supervised support vector learning and touches on some recent developments in this broad field.

Gabriele Steidl
23. Total Variation in Imaging

The use of total variation as a regularization term in imaging problems was motivated by its ability to recover image discontinuities. This is at the basis of its numerous applications to denoising, optical flow, stereo imaging and 3D surface reconstruction, segmentation, and interpolation, to mention a few. On the one hand, we review here the main theoretical arguments that have been given to support this idea. On the other, we review the main numerical approaches to solve different models where total variation appears. We describe both the main iterative schemes and the global optimization methods based on the use of max-flow algorithms. Then, we review the use of anisotropic total variation models to solve different geometric problems and its use in finding a convex formulation of some nonconvex total variation problems. Finally, we study the total variation formulation of image restoration.

V. Caselles, A. Chambolle, M. Novaga
24. Numerical Methods and Applications in Total Variation Image Restoration

Since their introduction in a classic paper by Rudin, Osher, and Fatemi [51], total variation minimizing models have become one of the most popular and successful methodologies for image restoration. New developments continue to expand the capability of the basic method in various aspects. Many faster numerical algorithms and more sophisticated applications have been proposed. This chapter reviews some of these recent developments.

Raymond Chan, Tony Chan, Andy Yip
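To make the basic Rudin–Osher–Fatemi model concrete, here is a minimal sketch that minimizes ½||u − f||² + λ·TV(u) by gradient descent on a smoothed total variation √(|∇u|² + ε²). The parameters (λ, ε, step size) and the block test image are illustrative assumptions; the faster solvers surveyed in the chapter (e.g., primal-dual or max-flow based methods) are far more efficient in practice.

```python
import numpy as np

def tv_denoise(f, lam=0.15, eps=0.1, step=0.05, n_iter=400):
    """Gradient descent on 0.5||u-f||^2 + lam * sum sqrt(|grad u|^2 + eps^2)."""
    u = f.copy()
    for _ in range(n_iter):
        # forward differences with replicate boundary (last difference is 0)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag          # smoothed unit gradient field
        # divergence via backward differences (adjoint of the forward diff)
        div = (np.diff(px, axis=1, prepend=0.0)
               + np.diff(py, axis=0, prepend=0.0))
        u = u - step * ((u - f) - lam * div)  # descend the smoothed energy
    return u

# Noisy piecewise-constant test image: a bright square on a dark background.
rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
f = clean + 0.3 * rng.standard_normal(clean.shape)
u = tv_denoise(f)
```

The smoothing parameter ε makes the energy differentiable; as ε decreases the scheme approaches true TV minimization but requires smaller steps for stability.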
25. Mumford and Shah Model and its Applications to Image Segmentation and Image Restoration

We present in this chapter an overview of the Mumford and Shah model for image segmentation. We discuss its various formulations, some of its properties, the mathematical framework, and several approximations. We also present numerical algorithms and segmentation results using the Ambrosio–Tortorelli phase-field approximations on one hand, and using the level set formulations on the other hand. Several applications of the Mumford–Shah problem to image restoration are also presented.

Leah Bar, Tony F. Chan, Ginmo Chung, Miyoun Jung, Nahum Kiryati, Rami Mohieddine, Nir Sochen, Luminita A. Vese
26. Local Smoothing Neighborhood Filters
Jean-Michel Morel, Antoni Buades, Tomeu Coll
27. Neighborhood Filters and the Recovery of 3D Information

Following their success in image processing (see Chap. 26), neighborhood filters have been extended to 3D surface processing. This adaptation is not straightforward. It has led to several variants for surfaces depending on whether the surface is defined as a mesh or as a raw data point set. The image gray level in the bilateral similarity measure is replaced by geometric information such as the normal or the curvature. The first section of this chapter reviews the variants of 3D mesh bilateral filters and compares them to the simplest possible isotropic filter, the mean curvature motion. In a second part, this chapter reviews applications of the bilateral filter to data composed of a sparse depth map (or of depth cues) and of the image on which they have been computed. Such sparse depth cues can be obtained by stereo vision or by psychophysical techniques. The assumption underlying these applications is that pixels with similar intensity around a region are likely to have similar depths. Therefore, when diffusing depth information with a bilateral filter based on locality and color similarity, the discontinuities in depth are assured to be consistent with the color discontinuities, which is generally a desirable property. In the reviewed applications, this results in the reconstruction of a dense perceptual depth map from the joint data of an image and of depth cues.

Julie Digne, Mariella Dimiccoli, Philippe Salembier, Neus Sabater
28. Splines and Multiresolution Analysis

Splines and multiresolution are two independent concepts which, considered together, yield a vast variety of bases for image processing and image analysis. The idea of a multiresolution analysis is to construct a ladder of nested spaces that operate as a kind of mathematical looking glass. It allows one to separate coarse parts of a signal or an image from the details of various sizes. Spline functions are piecewise or domainwise polynomials in one dimension (1D) or, respectively, in nD. There is a variety of spline functions that generate multiresolution analyses. The viewpoint in this chapter is the modeling of such spline functions in the frequency domain via Fourier decay, in order to generate functions with specified smoothness in the time domain or, respectively, the space domain. The mathematical foundations are presented and illustrated with the example of cardinal B-splines as generators of multiresolution analyses. Other spline models such as complex B-splines, polyharmonic splines, hexagonal splines, and others are considered. For all these spline families there exist fast and stable multiresolution algorithms which can be elegantly implemented in the frequency domain. The chapter closes with a look at open problems in the field.

Brigitte Forster
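The nested-spaces ladder rests on the two-scale (refinement) relation of the generating spline: a cardinal B-spline of degree n satisfies B_n(x) = 2^(−n) Σ_k C(n+1, k) B_n(2x − k). The sketch below evaluates cardinal B-splines by the standard recursion and checks this relation numerically for the cubic case; it is an illustration of the textbook identity, not code from the chapter.

```python
import numpy as np
from math import comb

def bspline(n, x):
    """Cardinal B-spline B_n of degree n (support [0, n+1]) via recursion."""
    x = np.asarray(x, dtype=float)
    if n == 0:
        return ((x >= 0) & (x < 1)).astype(float)   # indicator of [0, 1)
    return (x * bspline(n - 1, x) + (n + 1 - x) * bspline(n - 1, x - 1)) / n

# Two-scale relation underlying the multiresolution ladder:
# B_n(x) = 2^{-n} * sum_k binom(n+1, k) * B_n(2x - k)
n = 3
x = np.linspace(-1.0, 5.0, 601)
lhs = bspline(n, x)
rhs = sum(comb(n + 1, k) * bspline(n, 2 * x - k) for k in range(n + 2)) / 2 ** n
```

Because B_n is a finite combination of its own dilated translates, the spline spaces at successive dyadic scales are nested, which is exactly the multiresolution property the chapter builds on.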
29. Gabor Analysis for Imaging
Ole Christensen, Hans G. Feichtinger, Stephan Paukner
30. Shape Spaces

This chapter describes a selection of models that have been used to build Riemannian spaces of shapes. It starts with a discussion of the finite dimensional space of point sets (or landmarks) and then provides an introduction to the more challenging issue of building spaces of shapes represented as plane curves. Special attention is devoted to constructions involving quotient spaces, since they are involved in the definition of shape spaces via the action of groups of diffeomorphisms and in the process of identifying shapes that can be related by a Euclidean transformation. The resulting structure is first described via the geometric concept of a Riemannian submersion and then reinterpreted in a Hamiltonian and optimal control framework, via momentum maps. These developments are followed by the description of algorithms and illustrated by numerical experiments.

Alain Trouvé, Laurent Younes
31. Variational Methods in Shape Analysis

The concept of a shape space is linked both to concepts from geometry and from physics. On one hand, a path-based viscous flow approach leads to Riemannian distances between shapes, where shapes are boundaries of objects that mainly behave like fluids. On the other hand, a state-based elasticity approach induces a (by construction) non-Riemannian dissimilarity measure between shapes, which is given by the stored elastic energy of deformations matching the corresponding objects. The two approaches are both based on variational principles. They are analyzed with regard to different applications, and a detailed comparison is given.

Martin Rumpf, Benedikt Wirth
32. Manifold Intrinsic Similarity

Non-rigid shapes are ubiquitous in Nature and are encountered at all levels of life, from macro to nano. The need to model such shapes and understand their behavior arises in many applications in imaging sciences, pattern recognition, computer vision, and computer graphics. Of particular importance is understanding which properties of the shape are attributed to deformations and which are invariant, i.e., remain unchanged. This chapter presents an approach to non-rigid shapes from the point of view of metric geometry. Modeling shapes as metric spaces, one can pose the problem of shape similarity as the similarity of metric spaces and harness tools from theoretical metric geometry for the computation of such a similarity.

Alexander M. Bronstein, Michael M. Bronstein
33. Image Segmentation with Shape Priors: Explicit Versus Implicit Representations
Daniel Cremers
34. Starlet Transform in Astronomical Data Processing

We begin with traditional source detection algorithms in astronomy. We then introduce the sparsity data model. The starlet wavelet transform serves as our main focus in this chapter. Sparse modeling and noise modeling are described. Applications to object detection and characterization, and to image filtering and deconvolution, are discussed. The multiscale vision model is a further development of this work, which can allow for image reconstruction when the point spread function is not known, or not known well. Bayesian and other algorithms are described for image restoration. A range of examples is used to illustrate the algorithms.

Jean-Luc Starck, Fionn Murtagh, Mario Bertero
35. Differential Methods for Multi-Dimensional Visual Data Analysis

Images in scientific visualization are the end product of data processing. Starting from higher-dimensional datasets, such as scalar, vector, and tensor fields given on 2D, 3D, and 4D domains, the objective is to reduce this complexity to two-dimensional images comprehensible to the human visual system. Various mathematical fields, in particular differential geometry, topology (the theory of discretized manifolds), differential topology, linear algebra, geometric algebra, vector field and tensor analysis, and partial differential equations, contribute to the data filtering and transformation algorithms used in scientific visualization. The application of differential methods is core to all these fields. The following chapter provides examples from current research on the application of these mathematical domains to scientific visualization and, ultimately, the generation of images for the analysis of multi-dimensional datasets.

Werner Benger, René Heinzl, Dietmar Hildenbrand, Tino Weinkauf, Holger Theisel, David Tschumperlé
Backmatter
Metadata
Title
Handbook of Mathematical Methods in Imaging
Edited by
Otmar Scherzer
Copyright Year
2011
Publisher
Springer New York
Electronic ISBN
978-0-387-92920-0
Print ISBN
978-0-387-92919-4
DOI
https://doi.org/10.1007/978-0-387-92920-0