2015 | Book

Handbook of Mathematical Methods in Imaging

About this book

The Handbook of Mathematical Methods in Imaging provides a comprehensive treatment of the mathematical techniques used in imaging science. The material is grouped into two central themes, namely, Inverse Problems (Algorithmic Reconstruction) and Signal and Image Processing. Each section within the themes covers applications (modeling), mathematics, numerical methods (using a case example) and open questions. Written by experts in the area, the presentation is mathematically rigorous.

This expanded and revised second edition contains updates to existing chapters and 16 additional entries on important mathematical methods such as graph cuts, morphology, discrete geometry, PDEs, and conformal methods, to name a few. The entries are cross-referenced for easy navigation through connected topics. Available in both print and electronic forms, the handbook is enhanced by more than 200 illustrations and an extended bibliography.

It will benefit students, scientists and researchers in applied mathematics. Engineers and computer scientists working in imaging will also find this handbook useful.

Table of Contents

Frontmatter

Inverse Problems – Methods

Frontmatter
Linear Inverse Problems

This introductory treatment of linear inverse problems is aimed at students and neophytes. A historical survey of inverse problems and some examples of model inverse problems related to imaging are discussed to furnish context and texture to the mathematical theory that follows. The development takes place within the sphere of the theory of compact linear operators on Hilbert space, and the singular value decomposition plays an essential role. The primary concern is regularization theory: the construction of convergent well-posed approximations to ill-posed problems. For the most part, the discussion is limited to the familiar regularization method devised by Tikhonov and Phillips.

Charles Groetsch
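The Tikhonov–Phillips method mentioned in this abstract can be sketched numerically via the singular value decomposition: the filter factors σ/(σ² + α) damp the small singular components that would otherwise amplify data noise. The following is only an illustrative sketch; the blur operator, noise level, and regularization parameter are hypothetical choices, not taken from the chapter.

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov-regularized solution via SVD filter factors s / (s^2 + alpha)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + alpha)          # damps components with small singular values
    return Vt.T @ (filt * (U.T @ b))

# Hypothetical ill-posed problem: a discretized Gaussian blur operator
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-50 * (t[:, None] - t[None, :])**2) / n
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(0).normal(size=n)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]  # noise strongly amplified
x_reg = tikhonov_svd(A, b, alpha=1e-6)          # stable, convergent approximation
```

Even this tiny example exhibits the ill-posedness the chapter discusses: the unregularized least-squares solution is dominated by amplified noise, while the filtered solution stays close to the true signal.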
Large-Scale Inverse Problems in Imaging

Large-scale inverse problems arise in a variety of significant applications in image processing, and efficient regularization methods are needed to compute meaningful solutions. This chapter surveys three common mathematical models including a linear model, a separable nonlinear model, and a general nonlinear model. Techniques for regularization and large-scale implementations are considered, with particular focus on algorithms and computations that can exploit structure in the problem. Examples from image deconvolution, multi-frame blind deconvolution, and tomosynthesis illustrate the potential of these algorithms. Much progress has been made in the field of large-scale inverse problems, but many challenges still remain for future research.

Julianne Chung, Sarah Knepper, James G. Nagy
Regularization Methods for Ill-Posed Problems

This chapter outlines some aspects of the mathematical theory of direct regularization methods aimed at the stable approximate solution of nonlinear ill-posed inverse problems. The focus is on Tikhonov-type variational regularization applied to nonlinear ill-posed operator equations formulated in Hilbert and Banach spaces. The chapter begins with the classical approach in the Hilbert space setting with quadratic misfit and penalty terms, then extends the theory to Banach spaces and presents assertions on convergence and rates for variational regularization with general convex penalty terms. Recent results concern the interplay between solution smoothness and nonlinearity conditions expressed by variational inequalities. Six examples of parameter identification problems in integral and differential equations show how to apply the theory of this chapter to specific inverse and ill-posed problems.

Jin Cheng, Bernd Hofmann
Distance Measures and Applications to Multimodal Variational Imaging

Today, imaging is rapidly improving through the increased specificity and sensitivity of measurement devices. Even more diagnostic information, however, can be gained by combining data recorded with different imaging systems.

Christiane Pöschl, Otmar Scherzer
Energy Minimization Methods

Energy minimization methods are a very popular tool in image and signal processing. This chapter deals with images defined on a discrete finite set. The energies under consideration may be differentiable or not, convex or not. Analytical results on the minimizers of different energies are provided that reveal salient features of the images recovered in this way, as a function of the shape of the energy itself. An intrinsic mutual relationship between energy minimization and modeling via the choice of the energy is thus established. Examples and illustrations corroborate the presented results. Applications that benefit from these results are presented as well.

Mila Nikolova
Compressive Sensing

Compressive sensing is a recent type of sampling theory, which predicts that sparse signals and images can be reconstructed from what was previously believed to be incomplete information. As a main feature, efficient algorithms such as ℓ1-minimization can be used for recovery. The theory has many potential applications in signal processing and imaging. This chapter gives an introduction and overview on both theoretical and numerical aspects of compressive sensing.

Massimo Fornasier, Holger Rauhut
Duality and Convex Programming

This chapter surveys key concepts in convex duality theory and their application to the analysis and numerical solution of problem archetypes in imaging, drawing on convex analysis, variational analysis, and duality.

Jonathan M. Borwein, D. Russell Luke
EM Algorithms

Expectation-maximization algorithms, or EM algorithms for short, are iterative algorithms designed to solve maximum likelihood estimation problems. The general setting is that one observes a random sample Y1, Y2, …, Yn of a random variable Y whose probability density function (pdf) f(·|x_o) with respect to some (known) dominating measure is known up to an unknown "parameter" x_o. The goal is to estimate x_o and, one might add, to do it well. In this chapter, that means to solve the maximum likelihood problem.

Charles Byrne, Paul P. B. Eggermont
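A classical instance of this setting, given here only as an illustrative sketch (the two-component Gaussian mixture and all numbers are hypothetical, not from the chapter): the unknown "parameter" is x_o = (π, μ1, μ2), and each EM iteration alternates an expectation (E) step, which computes posterior responsibilities, with a maximization (M) step, which re-estimates the parameters from them.

```python
import numpy as np

def em_gmm(y, iters=200):
    """EM for a two-component Gaussian mixture with unit variances."""
    pi_, mu1, mu2 = 0.5, y.min(), y.max()      # crude initialization
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each observation
        p1 = pi_ * np.exp(-0.5 * (y - mu1) ** 2)
        p2 = (1.0 - pi_) * np.exp(-0.5 * (y - mu2) ** 2)
        r = p1 / (p1 + p2)
        # M-step: maximize the expected complete-data log-likelihood
        pi_ = r.mean()
        mu1 = (r * y).sum() / r.sum()
        mu2 = ((1 - r) * y).sum() / (1 - r).sum()
    return pi_, mu1, mu2

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])
pi_, mu1, mu2 = em_gmm(y)
```

Each iteration increases the (incomplete-data) likelihood, and for well-separated components the estimates settle close to the parameters that generated the sample.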
EM Algorithms from a Non-stochastic Perspective

The EM algorithm is not a single algorithm, but a template for the construction of iterative algorithms. While it is always presented in stochastic language, relying on conditional expectations to obtain a method for estimating parameters in statistics, the essence of the EM algorithm is not stochastic. The conventional formulation of the EM algorithm given in many texts and papers on the subject is inadequate. A new formulation is given here based on the notion of acceptable data.

Charles Byrne
Iterative Solution Methods

This chapter deals with iterative methods for nonlinear ill-posed problems. We present gradient and Newton type methods as well as nonstandard iterative algorithms such as Kaczmarz, expectation maximization, and Bregman iterations. Our intention here is to cite convergence results in the sense of regularization and to provide further references to the literature.

Martin Burger, Barbara Kaltenbacher, Andreas Neubauer
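Of the nonstandard iterations named in this abstract, the Kaczmarz method is the simplest to sketch: each step orthogonally projects the current iterate onto the hyperplane defined by one equation of the system. The system below is a hypothetical, deliberately well-conditioned example chosen only so that a modest number of sweeps suffices.

```python
import numpy as np

def kaczmarz(A, b, sweeps):
    """Cyclic Kaczmarz iteration: project onto each hyperplane <a_i, x> = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a  # orthogonal projection
    return x

rng = np.random.default_rng(2)
A = 5.0 * np.eye(30) + 0.1 * rng.normal(size=(30, 30))  # hypothetical system
x_true = rng.normal(size=30)
b = A @ x_true
x = kaczmarz(A, b, sweeps=200)
```

For a consistent system, the iterates converge to a solution; for ill-posed problems the sweep count itself acts as the regularization parameter, which is the sense in which the chapter treats such iterations.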
Level Set Methods for Structural Inversion and Image Reconstruction

In this chapter, an introduction is given to the use of level set techniques for inverse problems and image reconstruction. Several approaches are presented which have been developed and proposed in the literature since the publication of the original (and seminal) paper by F. Santosa in 1996 on this topic. The emphasis of this chapter, however, is not so much on providing an exhaustive overview of all ideas developed so far but on outlining the general idea of structural inversion by level sets, which means the reconstruction of complicated images with interfaces from indirectly measured data. As case studies, recent results (in 2D) from microwave breast screening, history matching in reservoir engineering, and crack detection are presented in order to demonstrate the general ideas outlined in this chapter on practically relevant and instructive examples. Various references and suggestions for further research are given as well.

Oliver Dorn, Dominique Lesselier

Inverse Problems – Case Examples

Frontmatter
Expansion Methods

The aim of this chapter is to review recent developments in the mathematical and numerical modeling of anomaly detection and multi-physics biomedical imaging. Expansion methods are designed for anomaly detection. They provide robust and accurate reconstruction of the location and of some geometric features of the anomalies, even with moderately noisy data. Asymptotic analysis of the measured data in terms of the size of the unknown anomalies plays a key role in characterizing all the information about the anomaly that can be stably reconstructed from the measured data. In multi-physics imaging approaches, different physical types of waves are combined into one tomographic process to alleviate deficiencies of each separate type of waves while combining their strengths. Multi-physics systems are capable of high-resolution and high-contrast imaging. Asymptotic analysis plays a key role in multi-physics modalities as well.

Habib Ammari, Hyeonbae Kang
Sampling Methods

This chapter is devoted to shape identification problems, i.e., problems where the shape of an object has to be determined from indirect measurements. In contrast to iterative methods, where a sequence of forward problems has to be computed, sampling methods avoid the (usually expensive) computation of forward problems. Instead, a class of test objects (e.g., points) is chosen, and a binary criterion is constructed which depends on the measured data only and decides whether a given test object lies inside or outside the sought domain. In this chapter, the factorization method is explained for the impedance tomography problem with insulating or conducting inclusions, for scattering theory for time-harmonic acoustic plane waves in the presence of a perfectly sound-soft obstacle, and for electromagnetic scattering by an inhomogeneous conducting medium. Brief descriptions of related sampling methods, such as the linear sampling method, MUSIC, the singular sources method, and the probe method, complement this chapter.

Martin Hanke-Bourgeois, Andreas Kirsch
Inverse Scattering

We give a survey of the mathematical basis of inverse scattering theory, concentrating on the case of time-harmonic acoustic waves. After an introduction and historical remarks, we give an outline of the direct scattering problem. This is then followed by sections on uniqueness results in inverse scattering theory and iterative and decomposition methods to reconstruct the shape and material properties of the scattering object. We conclude by discussing qualitative methods in inverse scattering theory, in particular the linear sampling method and its use in obtaining lower bounds on the constitutive parameters of the scattering object.

David Colton, Rainer Kress
Electrical Impedance Tomography

This chapter reviews the state of the art and the current open problems in electrical impedance tomography (EIT), which seeks to recover the conductivity (or conductivity and permittivity) of the interior of a body from knowledge of electrical stimulation and measurements on its surface. This problem is also known as the inverse conductivity problem; its mathematical formulation is due to A. P. Calderón, whose 1980 paper "On an inverse boundary value problem" gave the first mathematical formulation of the problem. EIT has interesting applications in fields such as medical imaging (to detect air and fluid flows in the heart and lungs and imaging of the breast and brain) and geophysics (detection of conductive mineral ores and the presence of ground water). It is well known that this problem is severely ill-posed, and thus this chapter is devoted to the study of the uniqueness, stability, and reconstruction of the conductivity from boundary measurements. A detailed distinction between the isotropic and anisotropic case is made, pointing out the major difficulties with the anisotropic case. The issues of global and local measurements are studied, noting that local measurements are more appropriate for practical applications such as screening for breast cancer.

Andy Adler, Romina Gaburro, William Lionheart
Synthetic Aperture Radar Imaging

The purpose of this chapter is to explain the basics of radar imaging and to list a variety of associated open problems. After a short section on the historical background, the chapter includes a derivation of an approximate scalar model for radar data. The basics in inverse synthetic aperture radar (ISAR) are discussed, and a connection is made with the Radon transform. Two types of synthetic aperture radar (SAR), namely, spotlight SAR and stripmap SAR, are outlined. Resolution analysis is included for ISAR and spotlight SAR. Some numerical algorithms are discussed. Finally, the chapter ends with a listing of open problems and a bibliography for further reading.

Margaret Cheney, Brett Borden
Tomography

We define tomography as the process of producing an image of a distribution (of some physical property) from estimates of its line integrals along a finite number of lines of known locations. We touch upon the computational and mathematical procedures underlying the data collection, image reconstruction, and image display in the practice of tomography. The emphasis is on reconstruction methods, especially the so-called series expansion reconstruction algorithms. We illustrate the use of tomography (including three-dimensional displays based on reconstructions) both in electron microscopy and in X-ray computerized tomography (CT), but concentrate on the latter. This is followed by a classification and discussion of reconstruction algorithms. In particular, we discuss how to evaluate and compare the practical efficacy of such algorithms.

Gabor T. Herman
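The series expansion formulation named in this abstract can be reduced to a toy sketch: represent the image by its pixel values, and each measured line integral becomes one linear equation in those values. The 2×2 image and the set of six lines below are hypothetical, chosen only so that the resulting system is uniquely solvable.

```python
import numpy as np

# Unknowns: the 4 pixel values (top-left, top-right, bottom-left, bottom-right).
# Data: sums along 6 lines of known locations (2 rows, 2 columns, 2 diagonals).
R = np.array([
    [1, 1, 0, 0],  # top row
    [0, 0, 1, 1],  # bottom row
    [1, 0, 1, 0],  # left column
    [0, 1, 0, 1],  # right column
    [1, 0, 0, 1],  # main diagonal
    [0, 1, 1, 0],  # anti-diagonal
], dtype=float)

x_true = np.array([3.0, 1.0, 0.0, 2.0])  # hypothetical pixel values
p = R @ x_true                            # the measured line integrals
x_rec = np.linalg.lstsq(R, p, rcond=None)[0]
```

Real CT systems lead to the same kind of linear system, only with millions of pixels and rays, which is why the iterative series expansion algorithms the chapter discusses replace the direct solve used here.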
Microlocal Analysis in Tomography

Several limited data problems in tomography will be presented in this chapter, including ones for X-ray tomography, electron microscopy, and radar imaging. First, reconstructions from limited data will be evaluated to observe their strengths and weaknesses. Then, the basic analytic properties of the transforms will be presented. The concept of microlocal analysis will be introduced to make the notion of singularity precise. Finally, the microlocal properties of the tomographic transforms are given and then used to explain the observed strengths and limitations of the reconstructions. This will show that these limitations are intrinsic to these limited data problems themselves.

Venkateswaran P. Krishnan, Eric Todd Quinto
Mathematical Methods in PET and SPECT Imaging

In this chapter, we present the mathematical formulation of the inverse Radon transform and of the inverse attenuated Radon transform (IART), which are used in PET and SPECT image reconstruction, respectively. Using a new method for deriving transform pairs in one and two dimensions, we derive the inverse Radon transform and the IART. Furthermore, we discuss an alternative approach for computing the Hilbert transform using cubic splines. This new approach, referred to as the spline reconstruction technique (SRT), is formulated in the physical space, in contrast to the well-known filtered backprojection (FBP) algorithm, which is formulated in the Fourier space. Finally, we present the results of several rigorous studies comparing FBP with SRT for PET. These studies, which use both simulated and real data and which employ a variety of image quality measures including contrast and bias, indicate that SRT has certain advantages in comparison with FBP.

Athanasios S. Fokas, George A. Kastis
Mathematics of Electron Tomography

This survey starts with a brief description of the scientific relevance of electron tomography in life sciences, followed by a survey of image formation models. In the latter, the scattering of electrons against a specimen is modeled by the Schrödinger equation, and the image formation model is completed by adding a description of the transmission electron microscope optics and detector. Electron tomography can then be phrased as an inverse scattering problem, and attention is turned to mathematical approaches for solving that reconstruction problem. This part starts out by explaining challenges associated with the aforementioned inverse problem, such as the extremely low signal-to-noise ratio in the data and the severe ill-posedness due to incomplete data, which naturally brings up the issue of choosing a regularization method for reconstruction. Here, the review surveys methods that have been developed as well as pointing to new promising approaches. Some of the regularization methods are also tested on simulated and experimental data. As a final note, this is not a traditional mathematical review in the sense that the focus here is on the application to electron tomography rather than on describing the mathematical techniques that underlie proofs of key theorems.

Ozan Öktem
Optical Imaging

This chapter discusses diffuse optical tomography. We present the origins of this method in terms of spectroscopic analysis of tissue using near-infrared light and its extension to an imaging modality. Models for light propagation at the macroscopic and mesoscopic scale are developed from the radiative transfer equation (RTE). Both time- and frequency-domain systems are discussed. Some formal results based on Green’s function models are presented, and numerical methods are described based on discrete finite element method (FEM) models and a Bayesian framework for image reconstruction. Finally, some open questions are discussed.

Simon R. Arridge, Jari P. Kaipio, Ville Kolehmainen, Tanja Tarvainen
Photoacoustic and Thermoacoustic Tomography: Image Formation Principles

Photoacoustic tomography (PAT), also known as thermoacoustic or optoacoustic tomography, is a rapidly emerging imaging technique that holds great promise for biomedical imaging. PAT is a hybrid imaging technique and can be viewed either as an ultrasound-mediated electromagnetic modality or as an ultrasound modality that exploits electromagnetically enhanced image contrast. In this chapter, we provide a review of the underlying imaging physics and contrast mechanisms in PAT. Additionally, the imaging models that relate the measured photoacoustic wavefields to the sought-after optical absorption distribution are described in their continuous and discrete forms. The basic principles of image reconstruction from discrete measurement data are presented, including a review of methods for modeling the measurement system response.

Kun Wang, Mark A. Anastasio
Mathematics of Photoacoustic and Thermoacoustic Tomography

The chapter surveys the mathematical models, problems, and algorithms of thermoacoustic tomography (TAT) and photoacoustic tomography (PAT). TAT and PAT represent probably the most developed of the several novel "hybrid" methods of medical imaging. These new modalities combine different physical types of waves (electromagnetic and acoustic, in the case of TAT and PAT) in such a way that the resolution and contrast of the resulting method are much higher than those achievable using only acoustic or electromagnetic measurements.

Peter Kuchment, Leonid Kunyansky
Mathematical Methods of Optical Coherence Tomography

In this chapter a general mathematical model of Optical Coherence Tomography (OCT) is presented on the basis of the electromagnetic theory. OCT produces high-resolution images of the inner structure of biological tissues. Images are obtained by measuring the time delay and the intensity of the backscattered light from the sample considering also the coherence properties of light. The scattering problem is considered for a weakly scattering medium located far enough from the detector. The inverse problem is to reconstruct the susceptibility of the medium given the measurements for different positions of the mirror. Different approaches are addressed depending on the different assumptions made about the optical properties of the sample. This procedure is applied to a full field OCT system and an extension to standard (time and frequency domain) OCT is briefly presented.

Peter Elbau, Leonidas Mindrinos, Otmar Scherzer
Wave Phenomena

This chapter discusses imaging methods related to wave phenomena, and in particular, inverse problems for the wave equation will be considered. The first part of the chapter explains the boundary control method for determining a wave speed of a medium from the response operator, which models boundary measurements. The second part discusses the scattering relation and travel times, which are different types of boundary data contained in the response operator. The third part gives a brief introduction to curvelets in wave imaging for media with nonsmooth wave speeds. The focus will be on theoretical results and methods.

Matti Lassas, Mikko Salo, Gunther Uhlmann
Sonic Imaging

This paper deals with the inverse problem of the wave equation, which is of relevance in fields such as ultrasound tomography, seismic imaging, and nondestructive testing. We study the linearized problem by Fourier analysis, and we describe an iterative reconstruction method for the fully nonlinear problem in the time domain. We discuss practical problems such as the spectral incompleteness in reflection imaging and finding a good initial approximation. We demonstrate by numerical reconstructions from synthetic data what can be achieved.

Frank Natterer
Imaging in Random Media

We give a self-contained presentation of coherent array imaging in random media, which are mathematical models of media with uncertain small-scale features (inhomogeneities). We describe the challenges of imaging in random media and discuss the coherent interferometric (CINT) imaging approach. It is designed to image with partially coherent waves, so it works at distances that do not exceed a transport mean-free path. The waves are incoherent when they travel longer distances, due to strong cumulative scattering by the inhomogeneities, and coherent imaging becomes impossible. In this article we base the presentation of coherent imaging on a simple geometrical optics model of wave propagation with randomly perturbed travel time. The model captures the canonical form of the second statistical moments of the wave field, which describe the loss of coherence and decorrelation of the waves due to scattering in random media. We use it to give an explicit resolution analysis of CINT which includes the assessment of statistical stability of the images.

Liliana Borcea

Image Restoration and Analysis

Frontmatter
Statistical Methods in Imaging

The theme of this chapter is statistical methods in imaging, with a marked emphasis on the Bayesian perspective. The application of statistical notions and techniques in imaging requires that images and the available data are redefined in terms of random variables, the genesis and interpretation of randomness playing a major role in deciding whether the approach will be along frequentist or Bayesian guidelines. The discussion on image formation from indirect information, which may come from non-imaging modalities, is coupled with an overview of how statistics can be used to overcome the hurdles posed by the inherent ill-posedness of the problem. The statistical counterpart to classical inverse problems and regularization approaches to contain the potentially disastrous effects of ill-posedness is the extraction and implementation of complementary information in imaging algorithms. The difficulty of expressing qualitative and uncertain notions about the imaging problem at hand in quantitative terms, which is a major challenge in a deterministic context, can be more easily overcome once the problem is expressed in probabilistic terms. An outline of how to translate some typical qualitative traits into a format which can be utilized by statistical imaging algorithms is presented. In line with the Bayesian paradigm favored in this chapter, basic principles for the construction of priors and likelihoods are presented, together with a discussion of numerous computational statistics algorithms, including maximum likelihood estimators, maximum a posteriori and conditional mean estimators, expectation maximization, Markov chain Monte Carlo, and hierarchical Bayesian models. Rather than aiming to be a comprehensive survey, the present chapter hopes to convey a wide and opinionated overview of statistical methods in imaging.

Daniela Calvetti, Erkki Somersalo
Supervised Learning by Support Vector Machines

During the last two decades, support vector machine learning has become a very active field of research with a large amount of both sophisticated theoretical results and exciting real-world applications. This paper gives a brief introduction to the basic concepts of supervised support vector learning and touches on some recent developments in this broad field.

Gabriele Steidl
Total Variation in Imaging

The use of total variation as a regularization term in imaging problems was motivated by its ability to recover image discontinuities. This ability underlies its numerous applications to denoising, optical flow, stereo imaging and 3D surface reconstruction, segmentation, and interpolation, to mention a few. On one hand, we review here the main theoretical arguments that have been given to support this idea. On the other hand, we review the main numerical approaches to solve different models where total variation appears. We describe both the main iterative schemes and the global optimization methods based on the use of max-flow algorithms. Then we review the use of anisotropic total variation models to solve different geometric problems and their use in finding a convex formulation of some non-convex total variation problems. Finally, we study the total variation formulation of image restoration.

V. Caselles, A. Chambolle, M. Novaga
Numerical Methods and Applications in Total Variation Image Restoration

Since their introduction in a classic paper by Rudin, Osher, and Fatemi (Physica D 60:259–268, 1992), total variation minimizing models have become one of the most popular and successful methodologies for image restoration. New developments continue to expand the capability of the basic method in various aspects. Many faster numerical algorithms and more sophisticated applications have been proposed. This chapter reviews some of these recent developments.

Raymond Chan, Tony F. Chan, Andy Yip
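A minimal sketch of the basic idea behind the model of Rudin, Osher, and Fatemi, and not of any of the faster algorithms this chapter reviews: gradient descent on a smoothed version of the ROF energy for a 1D signal. The smoothing parameter eps, step size, regularization weight, and test signal are all illustrative assumptions.

```python
import numpy as np

def tv_denoise_1d(f, lam, eps=1e-2, tau=0.05, steps=3000):
    """Gradient descent on the smoothed ROF energy
    E(u) = 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps)."""
    u = f.copy()
    for _ in range(steps):
        d = np.diff(u)
        w = d / np.sqrt(d**2 + eps)                 # derivative of smoothed |d|
        div = np.diff(w, prepend=0.0, append=0.0)   # discrete divergence of w
        u = u - tau * ((u - f) - lam * div)         # descend the energy gradient
    return u

rng = np.random.default_rng(3)
x_true = np.concatenate([np.zeros(50), 2.0 * np.ones(50)])  # piecewise constant
f = x_true + 0.3 * rng.normal(size=100)                     # noisy observation
u = tv_denoise_1d(f, lam=0.5)
```

The key property motivating total variation is visible even here: the flat regions are smoothed while the jump survives, which a quadratic penalty on the gradient would blur away.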
Mumford and Shah Model and Its Applications to Image Segmentation and Image Restoration

This chapter presents an overview of the Mumford and Shah model for image segmentation. It discusses its various formulations, some of its properties, the mathematical framework, and several approximations. It also presents numerical algorithms and segmentation results using the Ambrosio-Tortorelli phase-field approximations on one hand and level set formulations on the other hand. Several applications of the Mumford-Shah problem to image restoration are also presented.

Leah Bar, Tony F. Chan, Ginmo Chung, Miyoun Jung, Luminita A. Vese, Nahum Kiryati, Nir Sochen
Local Smoothing Neighborhood Filters

Denoising images can be achieved by spatially averaging nearby pixels. However, although this method removes noise, it creates blur. Hence, neighborhood filters are usually preferred. These filters average neighboring pixels, but only under the condition that their gray level is close enough to that of the pixel being restored. This very popular method unfortunately creates shocks and staircasing effects. It also excessively blurs texture and fine structures when noise dominates the signal. In this chapter, we perform an asymptotic analysis of neighborhood filters as the size of the neighborhood shrinks to zero. We prove that these filters are asymptotically equivalent to the Perona-Malik equation, one of the first nonlinear PDEs proposed for image restoration. As a solution to the shock effect, we propose an extremely simple variant of the neighborhood filter using a linear regression instead of an average. By analyzing its underlying PDE, we prove that this variant does not create shocks: it is actually related to the mean curvature motion. We also present a generalization of neighborhood filters, the nonlocal means (NL-means) algorithm, addressing the preservation of structure in a digital image. The NL-means algorithm tries to take advantage of the high degree of redundancy of any natural image. By this, we simply mean that every small window in a natural image has many similar windows in the same image. In a very general sense inspired by the neighborhood filters, one can then define as the "neighborhood of a pixel" any set of pixels with a similar window around them. All pixels in that neighborhood can be used for predicting its denoised value. We finally analyze the recently introduced variational formulations of neighborhood filters and their application to segmentation and seed diffusion.

Jean-Michel Morel, Antoni Buades, Tomeu Coll
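The neighborhood filter described in this abstract admits a very short sketch in 1D: average nearby samples, weighted by how close their gray level is to that of the sample being restored. The window radius, similarity scale h, and test signal below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def neighborhood_filter(f, radius=4, h=0.15):
    """Average spatial neighbors whose gray level is close to that of
    the sample being restored (a 1D Yaroslavsky-type neighborhood filter)."""
    out = np.empty_like(f)
    for i in range(f.size):
        lo, hi = max(0, i - radius), min(f.size, i + radius + 1)
        patch = f[lo:hi]
        w = np.exp(-((patch - f[i]) / h) ** 2)  # similar gray levels get weight ~1
        out[i] = (w * patch).sum() / w.sum()
    return out

rng = np.random.default_rng(4)
x_true = np.concatenate([np.zeros(60), np.ones(60)])  # an ideal edge
f = x_true + 0.05 * rng.normal(size=120)              # low-amplitude noise
u = neighborhood_filter(f)
```

Because samples across the edge differ by roughly 1 gray level while h is small, they receive negligible weight, so the noise is averaged out without blurring the discontinuity; the shocks and staircasing discussed above appear when this selection mechanism sharpens gradual transitions.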
Neighborhood Filters and the Recovery of 3D Information

Following their success in image processing (see chapter "Local Smoothing Neighborhood Filters"), neighborhood filters have been extended to 3D surface processing. This adaptation is not straightforward. It has led to several variants for surfaces, depending on whether the surface is defined as a mesh or as a raw data point set. The image gray level in the bilateral similarity measure is replaced by geometric information such as the normal or the curvature. The first section of this chapter reviews the variants of 3D mesh bilateral filters and compares them to the simplest possible isotropic filter, the mean curvature motion. In a second part, this chapter reviews applications of the bilateral filter to data composed of a sparse depth map (or of depth cues) and of the image on which they have been computed. Such sparse depth cues can be obtained by stereovision or by psychophysical techniques. The assumption underlying these applications is that pixels with similar intensity around a region are likely to have similar depths. Therefore, when diffusing depth information with a bilateral filter based on locality and color similarity, the discontinuities in depth are kept consistent with the color discontinuities, which is generally a desirable property. In the reviewed applications, this ends up with the reconstruction of a dense perceptual depth map from the joint data of an image and of depth cues.

Julie Digne, Mariella Dimiccoli, Neus Sabater, Philippe Salembier
Splines and Multiresolution Analysis

Splines and multiresolution are two independent concepts which, considered together, yield a vast variety of bases for image processing and image analysis. The idea of a multiresolution analysis is to construct a ladder of nested spaces that operate as a sort of mathematical looking glass. It allows one to separate the coarse parts of a signal or an image from details of various sizes. Spline functions are piecewise or domainwise polynomials in one dimension (1D) or, respectively, in n dimensions (nD). There is a variety of spline functions that generate multiresolution analyses. The viewpoint in this chapter is the modeling of such spline functions in the frequency domain via Fourier decay, to generate functions with specified smoothness in the time domain or, respectively, the space domain. The mathematical foundations are presented and illustrated using the example of cardinal B-splines as generators of multiresolution analyses. Other spline models, such as complex B-splines, polyharmonic splines, hexagonal splines, and others, are considered as well. For all these spline families, there exist fast and stable multiresolution algorithms which can be elegantly implemented in the frequency domain. The chapter closes with a look at open problems in the field.

Brigitte Forster
Gabor Analysis for Imaging
Ole Christensen, Hans G. Feichtinger, Stephan Paukner
Shape Spaces

This chapter describes a selection of models that have been used to build Riemannian spaces of shapes. It starts with a discussion of the finite-dimensional space of point sets (or landmarks) and then provides an introduction to the more challenging issue of building spaces of shapes represented as plane curves. Special attention is devoted to constructions involving quotient spaces, since they are involved in the definition of shape spaces via the action of groups of diffeomorphisms and in the process of identifying shapes that can be related by a Euclidean transformation. The resulting structure is first described via the geometric concept of a Riemannian submersion and then reinterpreted in a Hamiltonian and optimal control framework, via momentum maps. These developments are followed by the description of algorithms and illustrated by numerical experiments.

Alain Trouvé, Laurent Younes
Variational Methods in Shape Analysis

The concept of a shape space is linked both to concepts from geometry and from physics. On the one hand, a path-based viscous flow approach leads to Riemannian distances between shapes, where shapes are boundaries of objects that mainly behave like fluids. On the other hand, a state-based elasticity approach induces a dissimilarity measure between shapes that is non-Riemannian by construction and is given by the stored elastic energy of deformations matching the corresponding objects. Both approaches are based on variational principles. They are analyzed with regard to different applications, and a detailed comparison is given.

Martin Rumpf, Benedikt Wirth
Manifold Intrinsic Similarity

Nonrigid shapes are ubiquitous in nature and are encountered at all levels of life, from macro to nano. The need to model such shapes and understand their behavior arises in many applications in imaging sciences, pattern recognition, computer vision, and computer graphics. Of particular importance is understanding which properties of the shape are attributed to deformations and which are invariant, i.e., remain unchanged. This chapter presents an approach to nonrigid shapes from the point of view of metric geometry. Modeling shapes as metric spaces, one can pose the problem of shape similarity as the similarity of metric spaces and harness tools from theoretical metric geometry for the computation of such a similarity.

Alexander M. Bronstein, Michael M. Bronstein
Image Segmentation with Shape Priors: Explicit Versus Implicit Representations

Image segmentation is among the most studied problems in image understanding and computer vision. The goal of image segmentation is to partition the image plane into a set of meaningful regions. Here, meaningful typically refers to a semantic partitioning where the computed regions correspond to individual objects in the observed scene. Unfortunately, generic purely low-level segmentation algorithms often do not provide the desired segmentation results, because the traditional low-level assumptions, like intensity or texture homogeneity and strong edge contrast, are not sufficient to separate objects in a scene. To overcome these limitations, researchers have proposed to incorporate prior knowledge into low-level segmentation methods. In the following, we review methods which allow knowledge about the shape of objects of interest to be imposed on segmentation processes.

Daniel Cremers
Optical Flow

Motions of physical objects relative to a camera as observer naturally occur in everyday life and in many scientific applications. Optical flow represents the corresponding motion induced on the image plane. This chapter describes the basic problems and concepts related to optical flow estimation together with mathematical models and computational approaches to solve them. Emphasis is placed on shared and differing modeling aspects and on relevant research directions from a broader perspective. The state of the art and its deficiencies are reported along with directions for future research. The presentation aims at providing an accessible guide for practitioners as well as stimulating research work in relevant fields of mathematics and computer vision.
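As one concrete instance of the variational models surveyed in this chapter, the classical Horn and Schunck approach combines a linearized brightness-constancy data term, I_x u + I_y v + I_t = 0, with a quadratic smoothness term weighted by α². The sketch below is a minimal Jacobi-style iteration for this model; the simple finite differences and periodic treatment of borders are simplifying assumptions for illustration only.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Horn-Schunck optical flow (minimal sketch): brightness constancy
    I_x u + I_y v + I_t = 0 regularized by a quadratic smoothness term
    with weight alpha^2, solved by Jacobi-style fixed-point iterations."""
    Ix = np.gradient(I1, axis=1)   # spatial derivatives of the first frame
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1, dtype=float)
    v = np.zeros_like(I1, dtype=float)

    def avg(f):
        # Horn & Schunck's weighted neighbor average, periodic borders
        edges = sum(np.roll(f, s, a) for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
        diags = sum(np.roll(np.roll(f, sy, 0), sx, 1)
                    for sy in (1, -1) for sx in (1, -1))
        return edges / 6 + diags / 12

    for _ in range(n_iter):
        ub, vb = avg(u), avg(v)
        t = (Ix * ub + Iy * vb + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ub - Ix * t
        v = vb - Iy * t
    return u, v
```

On a linear intensity ramp translated by one pixel, the iteration converges to the constant flow field (u, v) = (1, 0), as the brightness-constancy equation dictates.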

Florian Becker, Stefania Petra, Christoph Schnörr
Non-linear Image Registration

Image registration aims to automatically establish geometrical correspondences between two images. It is an essential task in almost all areas involving imaging. This chapter reviews mathematical techniques for nonlinear image registration and presents a general, unified, and flexible approach. Taking into account that image registration is an ill-posed problem, the presented approach is based on a variational formulation, and particular emphasis is given to regularization functionals motivated by mathematical elasticity. Starting from one of the most commonly used linear elastic models, its limitations and extensions to nonlinear regularization functionals based on the theory of hyperelastic materials are considered. A detailed existence proof for hyperelastic image registration problems illustrates key concepts of polyconvex variational calculus. Numerical challenges in solving hyperelastic registration problems are discussed, and a stable discretization that guarantees meaningful solutions is derived. Finally, two case studies highlight the potential of hyperelastic image registration for medical imaging applications.

Lars Ruthotto, Jan Modersitzki
Starlet Transform in Astronomical Data Processing

We begin with traditional source detection algorithms in astronomy. We then introduce the sparsity data model. The starlet wavelet transform serves as our main focus in this chapter. Sparse modeling and noise modeling are described. Applications to object detection and characterization, and to image filtering and deconvolution, are discussed. The multiscale vision model is a further development of this work, which can allow for image reconstruction when the point spread function is not known or only poorly known. Bayesian and other algorithms are described for image restoration. A range of examples is used to illustrate the algorithms.
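The starlet transform is the isotropic undecimated ("à trous") wavelet transform built from the B3-spline kernel [1, 4, 6, 4, 1]/16: at each scale the hole spacing of the kernel doubles, the wavelet coefficients are the difference between successive smoothings, and reconstruction is a plain sum. The 1D sketch below illustrates this structure under simplifying assumptions (periodic boundaries, illustrative function name); it is not the chapter's implementation.

```python
import numpy as np

def starlet_1d(signal, n_scales=3):
    """Isotropic undecimated ('a trous') wavelet transform with the
    B3-spline kernel [1, 4, 6, 4, 1] / 16.  Returns (details, smooth)
    such that signal == smooth + sum(details).  Periodic boundaries."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16
    c = signal.astype(float)
    details = []
    for j in range(n_scales):
        step = 2 ** j  # hole spacing doubles at each scale
        smooth = np.zeros_like(c)
        for i, hk in enumerate(h):
            smooth += hk * np.roll(c, (i - 2) * step)
        details.append(c - smooth)  # wavelet coefficients at scale j
        c = smooth
    return details, c
```

Because each detail band is defined as the difference of consecutive smoothings, perfect reconstruction by summation holds by construction, which is one reason the starlet transform is convenient for filtering and deconvolution.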

Jean-Luc Starck, Fionn Murtagh, Mario Bertero
Differential Methods for Multi-dimensional Visual Data Analysis

Images in scientific visualization are the end product of data processing. Starting from higher-dimensional data sets such as scalar, vector, and tensor fields given on 2D, 3D, and 4D domains, the objective is to reduce this complexity to two-dimensional images comprehensible to the human visual system. Various mathematical fields, in particular differential geometry, topology (the theory of discretized manifolds), differential topology, linear algebra, geometric algebra, vector field and tensor analysis, and partial differential equations, contribute to the data filtering and transformation algorithms used in scientific visualization. The application of differential methods is core to all these fields. This chapter provides examples from current research on the application of these mathematical domains to scientific visualization. Ultimately, the use of these methods allows a systematic approach to image generation resulting from the analysis of multidimensional datasets.

Werner Benger, René Heinzl, Dietmar Hildenbrand, Tino Weinkauf, Holger Theisel, David Tschumperlé
Backmatter
Metadata
Title
Handbook of Mathematical Methods in Imaging
Edited by
Otmar Scherzer
Copyright year
2015
Publisher
Springer New York
Electronic ISBN
978-1-4939-0790-8
Print ISBN
978-1-4939-0789-2
DOI
https://doi.org/10.1007/978-1-4939-0790-8