
About this Book

This book constitutes the refereed proceedings of the 4th International Conference on Scale Space and Variational Methods in Computer Vision, SSVM 2013, held in Schloss Seggau near Graz, Austria, in June 2013. The 42 revised full papers presented were carefully reviewed and selected from 69 submissions. The papers are organized in topical sections on image denoising and restoration, image enhancement and texture synthesis, optical flow and 3D reconstruction, scale space and partial differential equations, image and shape analysis, and segmentation.

Table of Contents

Frontmatter

Image Denoising and Restoration

Targeted Iterative Filtering

The assessment of image denoising results depends on the respective application area, i.e. image compression, still-image acquisition, and medical images require entirely different behaviour of the applied denoising method. In this paper we propose a novel nonlinear diffusion scheme that is derived from a linear diffusion process in a value space determined by the application. We show that application-driven linear diffusion in the transformed space compares favorably with existing nonlinear diffusion techniques.
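As a toy illustration of this value-space viewpoint (a minimal sketch under our own assumptions; the square-root transform below is a stand-in for the application-determined value space, not the authors' choice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def value_space_diffusion(img, T, T_inv, sigma=2.0):
    """Run linear (Gaussian) diffusion in a transformed value space:
    map grey values with a monotone transform T, smooth linearly,
    and map back. Seen in the original value space, this acts as a
    nonlinear filter whose behaviour is steered by the choice of T."""
    return T_inv(gaussian_filter(T(img), sigma))

# Hypothetical example: a square-root value space (our assumption).
img = np.random.rand(64, 64)
out = value_space_diffusion(img, np.sqrt, np.square)
```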

Freddie Åström, Michael Felsberg, George Baravdish, Claes Lundström

Generalized Gradient on Vector Bundle – Application to Image Denoising

We introduce a gradient operator that generalizes the Euclidean and Riemannian gradients. This operator acts on sections of vector bundles and is determined by three geometric data: a Riemannian metric on the base manifold, a Riemannian metric and a covariant derivative on the vector bundle. Under the assumption that the covariant derivative is compatible with the metric of the vector bundle, we consider the problems of minimizing the L2 and L1 norms of the gradient. In the L2 case, the gradient descent for reaching the solutions is a heat equation of a second-order differential operator called the connection Laplacian. We present an application to color image denoising by replacing the regularizing term in the Rudin-Osher-Fatemi (ROF) denoising model by the L1 norm of a generalized gradient associated with a well-chosen covariant derivative. Experiments are validated by computations of the PSNR and Q-index.

Thomas Batard, Marcelo Bertalmío

Expert Regularizers for Task Specific Processing

This study is concerned with constructing expert regularizers for specific tasks. We discuss the general problem of what is desired from a regularizer, when one knows the type of images to be processed. The aim is to improve the processing quality and to reduce artifacts created by standard, general-purpose, regularizers, such as total-variation or nonlocal functionals.

Fundamental requirements for the theoretic expert regularizer are formulated. A simplistic regularizer is then presented, which approximates in some sense the ideal requirements.

Guy Gilboa

A Spectral Approach to Total Variation

The total variation (TV) functional is explored from a spectral perspective. We formulate a TV transform based on the second time derivative of the total variation flow, scaled by time. In the transform domain, disks yield impulse responses. This domain can be viewed as a spectral domain, with intuition somewhat similar to that of classical Fourier analysis. A simple reconstruction formula from the TV spectral domain to the spatial domain is given. We can then design low-pass, high-pass and band-pass TV filters and obtain a TV spectrum of signals and images.
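In the notation we adopt here, with $u(t)$ the solution of the TV flow started from the image $f$, the transform and its reconstruction formula read

$\phi(t; x) = t\,\partial_{tt} u(t; x), \qquad f(x) = \int_0^{\infty} \phi(t; x)\,dt + \bar{f},$

where $\bar{f}$ is the mean value of $f$; TV filters are then obtained by reweighting $\phi$ over $t$ before integrating.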

Guy Gilboa

Convex Generalizations of Total Variation Based on the Structure Tensor with Applications to Inverse Problems

We introduce a generic convex energy functional that is suitable for both grayscale and vector-valued images. Our functional is based on the eigenvalues of the structure tensor, and therefore penalizes image variation at every point by taking into account the information from its neighborhood. It generalizes several existing variational penalties, such as Total Variation and vectorial extensions of it. By introducing the concept of a patch-based Jacobian operator, we derive an equivalent formulation of the proposed regularizer that is based on the Schatten norm of this operator. Using this new formulation, we prove convexity and develop a dual definition for the proposed energy, which gives rise to an efficient and parallelizable minimization algorithm. Moreover, we establish a connection between the minimization of the proposed convex regularizer and a generic type of nonlinear anisotropic diffusion that is driven by a spatially regularized and adaptive diffusion tensor. Finally, we perform extensive experiments with image denoising and deblurring for grayscale and color images. The results show the effectiveness of the proposed approach as well as its improved performance compared to Total Variation and existing vectorial extensions of it.
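As background, a minimal numpy sketch of the classical smoothed structure tensor and its eigenvalues (our own helper for intuition; the paper's patch-based Jacobian goes beyond this):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_eigs(img, sigma=1.0, rho=2.0):
    """Pointwise eigenvalues of the smoothed structure tensor
    J_rho = G_rho * (grad u_sigma grad u_sigma^T), with presmoothing
    scale sigma and integration (neighbourhood) scale rho."""
    u = gaussian_filter(img, sigma)
    ux, uy = np.gradient(u)
    j11 = gaussian_filter(ux * ux, rho)
    j12 = gaussian_filter(ux * uy, rho)
    j22 = gaussian_filter(uy * uy, rho)
    tr, det = j11 + j22, j11 * j22 - j12 ** 2
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    return tr / 2 + disc, tr / 2 - disc   # lambda1 >= lambda2 at every pixel
```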

Stamatios Lefkimmiatis, Anastasios Roussos, Michael Unser, Petros Maragos

Adaptive Second-Order Total Variation: An Approach Aware of Slope Discontinuities

Total variation (TV) regularization, originally introduced by Rudin, Osher and Fatemi in the context of image denoising, has become widely used in the field of inverse problems. Two major directions of modifications of the original approach were proposed later on: the first concerns adaptive variants of TV regularization, the second focuses on higher-order TV models. In the present paper, we combine the ideas of both directions by proposing adaptive second-order TV models, including one anisotropic model. Experiments demonstrate that introducing adaptivity results in an improvement of the reconstruction error.

Frank Lenzen, Florian Becker, Jan Lellmann

Variational Methods for Motion Deblurring with Still Background

We consider motion deblurring problems with the additional difficulty that the motion occurs in front of a still background. First we propose a model for the formation of this kind of partly blurred image, which involves four unknown quantities: the object, the background, the blur kernel, and a mask that encodes the shape of the object. Then we propose variational methods to solve the deblurring problem. We show that the method performs well if three of the four sought-after quantities are known. Finally we show that the method even works for real-world examples as soon as the user makes a crude selection of the blurred region in the image.
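A formation model consistent with these four unknowns composes the blurred object over the sharp background; as an illustration (our assumption, not necessarily the authors' exact formula) one may write

$f = k * (m\,u) + (1 - k * m)\,b,$

where $u$ is the object, $b$ the still background, $k$ the blur kernel, $m \in [0,1]$ the object mask, and $*$ denotes convolution.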

Eileen Laue, Dirk A. Lorenz

Blind Deblurring Using a Simplified Sharpness Index

It was shown recently that the phase of the Fourier transform of an image could lead to interesting no-reference image quality measures. The Global Phase Coherence, and its recent Gaussian variant called the Sharpness Index, rate the sharpness of an image in contrast not only with blur, but also with noise, ringing, etc. In this work, we introduce a new variant of these indices that can be computed with one Fourier transform only, and hence four times quicker than the Sharpness Index. We use this new index S to build an image restoration algorithm that, in a stochastic framework, selects a radial-unimodal deconvolution kernel for which the S-value of the restored image is optimal. Experiments are discussed, and a comparison is made with a radial oracle deconvolution filter and the recent blind deconvolution algorithm of Levin et al.

Arthur Leclaire, Lionel Moisan

A Cascadic Alternating Krylov Subspace Image Restoration Method

This paper describes a cascadic image restoration method which at each level applies a two-way alternating denoising and deblurring procedure. Denoising is carried out with a wavelet transform, which also provides an estimate of the noise level. The latter is used to determine a suitable regularization parameter for the Krylov subspace iterative deblurring method. The cascadic multilevel method proceeds from coarse to fine image resolution, using suitable restriction and prolongation operators. The choice of the latter is critical for the performance of the multilevel method. We introduce a special deblurring prolongation procedure based on TV regularization. Computed examples demonstrate the effectiveness of the proposed method for determining image restorations of high quality.

Serena Morigi, Lothar Reichel, Fiorella Sgallari

B-SMART: Bregman-Based First-Order Algorithms for Non-negative Compressed Sensing Problems

We introduce and study Bregman functions as objectives for non-negative sparse compressed sensing problems, together with a related first-order iterative scheme employing non-quadratic proximal terms. This scheme yields closed-form multiplicative updates and handles constraints implicitly. In contrast to established state-of-the-art gradient-based methods, its analysis does not rely on global Lipschitz continuity, hence it is attractive for dealing with very large systems. Convergence and an $O(k^{-1})$ rate are proved. We also introduce an iterative two-step extension of the update scheme that accelerates convergence. Comparative numerical experiments for non-negativity and box constraints provide evidence for an $O(k^{-2})$ rate and reveal competitive and also superior performance.
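To convey the flavour of such multiplicative, implicitly constrained updates, here is a generic exponentiated-gradient (entropic mirror descent) sketch for non-negative least squares; this is our own illustration, not the paper's exact Bregman iteration:

```python
import numpy as np

def exp_grad_nnls(A, b, x0, tau=1e-3, iters=500):
    """Entropic mirror descent (exponentiated gradient) for
    min 0.5*||Ax - b||^2 over x >= 0. The multiplicative update keeps
    strictly positive iterates positive, so the non-negativity
    constraint is handled implicitly."""
    x = x0.copy()
    for _ in range(iters):
        x = x * np.exp(-tau * (A.T @ (A @ x - b)))
    return x
```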

Stefania Petra, Christoph Schnörr, Florian Becker, Frank Lenzen

Epigraphical Projection for Solving Least Squares Anscombe Transformed Constrained Optimization Problems

This paper deals with the restoration of images corrupted by a non-invertible or ill-conditioned linear transform and Poisson noise. Poisson data typically occur in imaging processes where the images are obtained by counting particles, e.g., photons, that hit the image support. By using the Anscombe transform, the Poisson noise can be approximated by an additive Gaussian noise with zero mean and unit variance. Then, the least squares difference between the Anscombe transformed corrupted image and the original image can be estimated by the number of observations. We use this information by considering an Anscombe transformed constrained model to restore the image. The advantage with respect to corresponding penalized approaches lies in the existence of a simple model for parameter estimation. We solve the constrained minimization problem by applying a primal-dual algorithm together with a projection onto the epigraph of a convex function related to the Anscombe transform. We show that this epigraphical projection can be efficiently computed by Newton's method with an appropriate initialization. Numerical examples demonstrate the good performance of our approach, in particular, its close behaviour with respect to the I-divergence constrained model.
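For reference, the Anscombe transform in question is

$\mathcal{A}(v) = 2\sqrt{v + 3/8};$

if $v$ follows a Poisson distribution with mean $\lambda$, then $\mathcal{A}(v)$ has approximately unit variance, which is what makes the number of observations a natural bound for the least squares difference above.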

Stanislav Harizanov, Jean-Christophe Pesquet, Gabriele Steidl

Image Enhancement and Texture Synthesis

Static and Dynamic Texture Mixing Using Optimal Transport

This paper tackles the problem of mixing static and dynamic textures by combining the statistical properties of an input set of images or videos. We focus on Spot Noise textures that follow a stationary Gaussian model which can be learned from the given exemplars. From this model we define, using Optimal Transport, the distance between texture models, derive the geodesic path, and define the barycenter between several texture models. These derivations are useful because they allow the user to navigate inside the set of texture models, interpolating new ones between the elements of the set. From these new interpolated models, new textures of arbitrary size in space and time can be synthesized. Numerical results obtained from a library of exemplars show the ability of our method to generate new complex and realistic static and dynamic textures.

Sira Ferradans, Gui-Song Xia, Gabriel Peyré, Jean-François Aujol

A TGV Regularized Wavelet Based Zooming Model

We propose a novel scheme for image magnification, formulated as a minimization problem which incorporates a data fidelity and a regularization term. Data fidelity is modeled using a wavelet transformation operator, while the Total Generalized Variation functional of second order is applied for regularization. Well-posedness is obtained in a function space setting and an efficient numerical algorithm is developed. Numerical experiments confirm the high quality of the magnified images. In particular, with an appropriate choice of wavelets, geometrical information is preserved.
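For reference, the second-order Total Generalized Variation functional used for regularization here can be written in its standard minimum form (Bredies, Kunisch and Pock) as

$\mathrm{TGV}^2_{\alpha}(u) = \min_{w}\; \alpha_1 \int_\Omega |\nabla u - w|\,dx + \alpha_0 \int_\Omega |\mathcal{E}(w)|\,dx,$

where $\mathcal{E}(w) = \frac{1}{2}(\nabla w + \nabla w^{\top})$ is the symmetrized derivative; it balances first- and second-order smoothness and thus avoids staircasing.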

Kristian Bredies, Martin Holler

Anisotropic Third-Order Regularization for Sparse Digital Elevation Models

We consider the problem of interpolating a surface based on sparse data such as individual points or level lines. We derive interpolators satisfying a list of desirable properties with an emphasis on preserving the geometry and characteristic features of the contours while ensuring smoothness across level lines. We propose an anisotropic third-order model and an efficient method to adaptively estimate both the surface and the anisotropy. Our experiments show that the approach outperforms AMLE and higher-order total variation methods qualitatively and quantitatively on real-world digital elevation data.

Jan Lellmann, Jean-Michel Morel, Carola-Bibiane Schönlieb

A Fast Algorithm for Exact Histogram Specification. Simple Extension to Colour Images

In [12] a variational method using ${\mathcal C}^2$-smoothed $\ell_1$-TV functionals was proposed to process digital (quantized) images so that the obtained minimizer is quite close to the input image but its pixels are all different from each other. These minimizers were shown to enable exact histogram specification outperforming the state-of-the-art methods [6], [19] in terms of faithful total strict ordering. They need to be computed with high numerical precision; however, the relevant functionals are difficult to minimize using standard tools because their gradient is nearly flat over vast regions.

Here we present a specially designed fixed-point algorithm that attains the minimizer with remarkable speed and precision. The variational method, applied with the newly proposed algorithm, is currently the best way (in terms of quality and speed) to order the pixels in digital images. This assertion is corroborated by exhaustive numerical tests.

We extend the method to colour images where the luminance channel is exactly fitted to a prescribed histogram. We propose a new fast algorithm to compute the modified colour values that preserves the hue and does not yield gamut problems. Numerical tests confirm the performance of the latter algorithm.
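Once a strict total order of the pixels is available, the exact histogram specification step itself is a simple rank-and-assign procedure; a minimal sketch (our illustration, independent of the fixed-point algorithm):

```python
import numpy as np

def specify_histogram(order_values, target_hist):
    """Exact histogram specification given a strict total order of the pixels.
    order_values: real-valued image whose entries are pairwise distinct
    (e.g. the variational minimizer described above).
    target_hist: integer counts per grey level; must sum to the pixel count."""
    flat = order_values.ravel()
    ranks = np.argsort(flat)   # pixel indices from darkest to brightest
    levels = np.repeat(np.arange(len(target_hist)), target_hist)
    out = np.empty(flat.size, dtype=int)
    out[ranks] = levels
    return out.reshape(order_values.shape)
```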

Mila Nikolova

Constrained Sparse Texture Synthesis

This paper presents a novel texture synthesis algorithm that performs a sparse expansion of the patches of the image in a dictionary learned from an input exemplar. The synthesized texture is computed through the minimization of a non-convex energy that takes into account several constraints. Our first contribution is the computation of a sparse expansion of the patches imposing that the dictionary atoms are used in the same proportions as in the exemplar. This is crucial to enable a fair representation of the features of the input image during the synthesis process. Our second contribution is the use of additional penalty terms in the variational formulation to maintain the histogram and the low frequency content of the input. Lastly we introduce a non-linear reconstruction process that stitches together patches without introducing blur. Numerical results illustrate the importance of each of these contributions to achieve state of the art texture synthesis.

Guillaume Tartavel, Yann Gousseau, Gabriel Peyré

Outlier Removal Power of the L1-Norm Super-Resolution

Super-resolution combines several low-resolution images having different samplings into a high-resolution image. L1-norm data-fit minimization has been proposed to solve this problem in a robust way. The outlier rejection capability of this method has been shown experimentally for super-resolution. However, existing approaches add a regularization term to perform the minimization while it may not be necessary. In this paper, we recall the link between robustness to outliers and the sparse recovery framework. We use a slightly weaker Null Space Property to characterize this capability. Then, we apply these results to super-resolution and show both theoretically and experimentally that we can quantify the robustness to outliers with respect to the number of images.

Yann Traonmilin, Saïd Ladjal, Andrés Almansa

Optical Flow and 3D Reconstruction

Why Is the Census Transform Good for Robust Optic Flow Computation?

The census transform is becoming increasingly popular in the context of optic flow computation in image sequences. Since it is invariant under monotonically increasing grey value transformations, it forms the basis of an illumination-robust constancy assumption. However, its underlying mathematical concepts have not been studied so far. The goal of our paper is to provide this missing theoretical foundation. We study the continuous limit of the inherently discrete census transform and embed it into a variational setting. Our analysis shows two surprising results: The census-based technique enforces matchings of extrema, and it induces an anisotropy in the data term by acting along level lines. Last but not least, we establish links to the widely-used gradient constancy assumption and present experiments that confirm our findings.
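For concreteness, a minimal sketch of the discrete census transform whose continuous limit the paper studies (3×3 neighbourhood; borders wrap for brevity):

```python
import numpy as np

def census(img):
    """3x3 census transform: each pixel receives an 8-bit signature of
    sign comparisons with its neighbours. The signature is unchanged by
    any monotonically increasing grey value transformation, which is
    the illumination robustness analysed in the paper."""
    sig = np.zeros(img.shape, dtype=int)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            sig |= (neigh < img).astype(int) << bit
            bit += 1
    return sig
```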

David Hafner, Oliver Demetz, Joachim Weickert

Generalised Perspective Shape from Shading in Spherical Coordinates

In the last four decades there has been enormous progress in Shape from Shading (SfS) with respect to both modelling and numerics. In particular, approaches based on advanced model assumptions such as perspective cameras and non-Lambertian surfaces have become very popular. However, regarding the positioning of the light source, almost all recent approaches still follow the simplest geometric configuration one can think of: the light source is assumed to be located exactly at the optical centre of the camera. In our paper, we refrain from this unrealistic and severe restriction. Instead we consider a much more general SfS scenario based on a perspective camera, where the light source can be positioned anywhere in the scene. To this end, we propose a novel SfS model based on a Hamilton-Jacobi equation (HJE) that is formulated in terms of spherical coordinates. This particular choice of modelling framework and coordinate system comes along with two fundamental contributions. On the modelling side, the spherical coordinate system allows us to derive a generalised brightness equation – a compact and elegant generalisation of the standard image irradiance equation to arbitrary configurations of the light source. On the numerical side, the formulation as a Hamilton-Jacobi equation enables us to develop a specifically tailored variant of the fast marching (FM) method – one of the most efficient numerical solvers in the entire SfS literature. Results on synthetic and real-world data confirm our theoretical considerations. They clearly demonstrate the feasibility and efficiency of the generalised SfS approach.

Silvano Galliani, Yong Chul Ju, Michael Breuß, Andrés Bruhn

Weighted Patch-Based Reconstruction: Linking (Multi-view) Stereo to Scale Space

Surface reconstruction using patch-based multi-view stereo commonly assumes that the underlying surface is locally planar. This is typically not true so that least-squares fitting of a planar patch leads to systematic errors which are of particular importance for multi-scale surface reconstruction. In a recent paper [12], we determined the modulation transfer function of a classical patch-based stereo system. Our key insight was that the reconstructed surface is a box-filtered version of the original surface. Since the box filter is not a true low-pass filter this causes high-frequency artifacts. In this paper, we propose an extended reconstruction model by weighting the least-squares fit of the 3D patch. We show that if the weighting function meets specified criteria the reconstructed surface is the convolution of the original surface with that weighting function. A choice of particular interest is the Gaussian which is commonly used in image and signal processing but left unexploited by many multi-view stereo algorithms. Finally, we demonstrate the effects of our theoretic findings using experiments on synthetic and real-world data sets.

Ronny Klowsky, Arjan Kuijper, Michael Goesele

Optical Flow on Evolving Surfaces with an Application to the Analysis of 4D Microscopy Data

We extend the concept of optical flow to a dynamic non-Euclidean setting. Optical flow is traditionally computed from a sequence of flat images. It is the purpose of this paper to introduce variational motion estimation for images that are defined on an evolving surface. Volumetric microscopy images depicting a live zebrafish embryo serve as both biological motivation and test data.

Clemens Kirisits, Lukas F. Lang, Otmar Scherzer

Perspective Photometric Stereo with Shadows

High resolution reconstruction of 3D surfaces from images remains an active area of research, since most of the methods in use are based on practical assumptions that limit their applicability. Furthermore, an additional complication in all active illumination 3D reconstruction methods is the presence of shadows, which cause loss of information in the image data. We present an approach for the reconstruction of surfaces via Photometric Stereo, based on the perspective formulation of the Shape from Shading problem, solved via partial differential equations. Unlike many photometric stereo solvers that use computationally costly variational methods or a two-step approach, we use a novel, well-posed, differential formulation of the problem that enables us to solve a first order partial differential equation directly via an alternating directions raster scanning scheme. The resulting formulation enables surface computation for very large images and allows reconstruction in the presence of shadows.

Roberto Mecca, Guy Rosman, Ron Kimmel, Alfred M. Bruckstein

Solving the Uncalibrated Photometric Stereo Problem Using Total Variation

In this paper we propose a new method to solve the problem of uncalibrated photometric stereo, making very weak assumptions on the properties of the scene to be reconstructed. Our goal is to solve the generalized bas-relief ambiguity (GBR) by performing a total variation regularization of both the estimated normal field and albedo. Unlike most previous attempts to solve this ambiguity, our approach does not rely on any prior information about the shape or the albedo, apart from piecewise smoothness. We test our method on real images and obtain results comparable to the state-of-the-art algorithms.

Yvain Quéau, François Lauze, Jean-Denis Durou

Minimizing TGV-Based Variational Models with Non-convex Data Terms

We introduce a method to approximately minimize variational models with Total Generalized Variation (TGV) regularization and non-convex data terms. Our approach is based on a decomposition of the functional into two subproblems, both of which can be solved to global optimality. Based on this decomposition we derive an iterative algorithm for the approximate minimization of the original non-convex problem. We apply the proposed algorithm to a state-of-the-art stereo model that was previously solved using coarse-to-fine warping, and we are able to show significant improvements in terms of accuracy.

Rene Ranftl, Thomas Pock, Horst Bischof

A Mathematically Justified Algorithm for Shape from Texture

In this paper we propose a new continuous Shape from Texture (SfT) model for piecewise planar surfaces. It is based on the assumptions of texture homogeneity and perspective camera projection. We show that in this setting a unidirectional texture analysis suffices for performing SfT. With carefully chosen approximations and a separable representation, novel closed-form formulas for the surface orientation in terms of texture gradients are derived. On top of this model, we propose an SfT algorithm based on spatial derivatives of the dominant local spatial frequency in the source image. The method is motivated geometrically and justified rigorously by error estimates. The reliability of the algorithm is evaluated in synthetic and real-world experiments.

Helge Rhodin, Michael Breuß

Scale Space and Partial Differential Equations

Multi Scale Shape Index for 3D Object Recognition

We present the Multi Scale Shape Index (MSSI), a novel feature for 3D object recognition. Inspired by scale space filtering theory and the Shape Index measure proposed by Koenderink & van Doorn [6], this feature associates different shape forms, such as umbilics, saddle regions and parabolic regions, with a real-valued index. This association is useful for representing an object based on its constituent shape forms. We derive closed-form scale space equations which compute a characteristic scale at each 3D point in a point cloud without an explicit mesh structure. This characteristic scale is then used to estimate the Shape Index. We quantitatively evaluate the robustness and repeatability of the MSSI feature under varying object scales and changing point cloud density. We also quantify the performance of MSSI for object category recognition on a publicly available dataset.
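For reference, the Shape Index of Koenderink and van Doorn that MSSI evaluates at the characteristic scale is, up to the choice of sign convention,

$S = \frac{2}{\pi}\,\arctan\!\left(\frac{\kappa_1 + \kappa_2}{\kappa_1 - \kappa_2}\right), \qquad \kappa_1 \ge \kappa_2,$

where $\kappa_1, \kappa_2$ are the principal curvatures; it maps cups, ruts, saddles, ridges and caps to distinct values in $[-1, 1]$.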

Ujwal Bonde, Vijay Badrinarayanan, Roberto Cipolla

Compression of Depth Maps with Segment-Based Homogeneous Diffusion

The efficient compression of depth maps is becoming more and more important. We present a novel codec specifically suited for this task. In the encoding step we segment the image and extract between-pixel contours. Subsequently we optimise the grey values at carefully selected mask points, including both hexagonal grid locations as well as freely chosen points. We use a chain code to store the contours. For the decoding we apply a segment-based homogeneous diffusion inpainting. The segmentation allows parallel processing of the individual segments. Experiments show that our compression algorithm outperforms comparable methods such as JPEG or JPEG2000, while being competitive with HEVC (High Efficiency Video Coding).
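The decoder's core operation is homogeneous diffusion inpainting; the following minimal sketch shows the global (non-segment-wise) version of the idea, our simplification rather than the codec itself:

```python
import numpy as np

def inpaint(f, mask, iters=2000):
    """Homogeneous diffusion inpainting: keep the stored pixels (mask True)
    fixed and fill the rest with the discrete harmonic extension, here by
    simple Jacobi sweeps with wrapping boundaries for brevity."""
    u = np.where(mask, f, f[mask].mean())
    for _ in range(iters):
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u = np.where(mask, f, avg)
    return u
```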

Sebastian Hoffmann, Markus Mainberger, Joachim Weickert, Michael Puhl

Scale Space Operators on Hierarchies of Segmentations

A hierarchy of segmentations (partitions) is a multiscale set representation of the image. This paper introduces a new set of scale space operators or transformations on the space of hierarchies of partitions. An ordering of hierarchies is proposed, endowed by an ω-ordering based on a global energy over the classes of the hierarchy. A class of Matheron semigroups is shown to exist in this ordering of hierarchies. A second contribution is the saliency transformation, which fuses a saliency function corresponding to a hierarchy with an external function, rendering a new or transformed saliency function. The results are demonstrated on the Berkeley dataset.

B. Ravi Kiran, Jean Serra

Discrete Deep Structure

The discrete scale space representation $L$ of $f$ is continuous in scale $t$. A computational investigation of $L$, however, must rely on a finite number of sampled scales. There are multiple approaches to sampling $L$, differing in accuracy, runtime complexity and memory usage. One apparent approach is given by the definition of $L$ via discrete convolution with a scale space kernel. The scale space kernel has infinite domain and must be truncated in order to compute an individual scale, thus introducing truncation errors. A periodic boundary condition for $f$ further complicates the computation. In this case, circular convolution with a Laplacian kernel provides an elegant but still computationally complex solution. Applied in its eigenspace, however, the circular convolution operator reduces to a simple and much less complex scaling transformation. This paper details how to efficiently decompose a scale of $L$ and its derivative $\partial_t L$ into a sum of eigenimages of the Laplacian circular convolution operator and provides a simple solution of the discretized diffusion equation, enabling fast and accurate sampling of $L$.
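In one dimension with periodic boundaries this eigenspace solution takes only a few lines; a sketch of the principle (our illustration, not the paper's implementation):

```python
import numpy as np

def discrete_scale_space(f, t):
    """Sample L(.; t) = exp(t * Laplacian) f for a periodic 1-D signal f.
    The DFT diagonalizes the circulant Laplacian stencil [1, -2, 1]; its
    eigenvalues are 2*cos(2*pi*k/N) - 2, so one scale and its t-derivative
    are obtained by scaling Fourier coefficients, with no truncation error."""
    N = len(f)
    lam = 2 * np.cos(2 * np.pi * np.arange(N) / N) - 2
    F = np.fft.fft(f)
    L = np.real(np.fft.ifft(np.exp(t * lam) * F))
    dL = np.real(np.fft.ifft(lam * np.exp(t * lam) * F))  # dL/dt = Laplacian L
    return L, dL
```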

Martin Tschirsich, Arjan Kuijper

Image Matching Using Generalized Scale-Space Interest Points

The performance of matching and object recognition methods based on interest points depends on both the properties of the underlying interest points and the associated image descriptors. This paper demonstrates the advantages of using generalized scale-space interest point detectors when computing image descriptors for image-based matching. These generalized scale-space interest points are based on linking image features over scale, with scale selection by weighted averaging along feature trajectories over scale; they allow for a higher ratio of correct matches and a lower ratio of false matches than previously known interest point detectors within the same class. Specifically, it is shown how a significant increase in matching performance can be obtained in relation to the underlying interest point detectors in the SIFT and SURF operators. We propose that these generalized scale-space interest points, when accompanied by associated scale-invariant image descriptors, should allow for better performance of interest-point-based methods for image-based matching, object recognition and related vision tasks.

Tony Lindeberg

A Fully Discrete Theory for Linear Osmosis Filtering

Osmosis filters are based on drift–diffusion processes. They offer nontrivial steady states with a number of interesting applications. In this paper we present a fully discrete theory for linear osmosis filtering that follows the structure of Weickert’s discrete framework for diffusion filters. It regards the positive initial image as a vector and expresses its evolution in terms of iterative matrix–vector multiplications. The matrix differs from its diffusion counterpart by the fact that it is unsymmetric. We assume that it satisfies four properties: vanishing column sums, nonnegativity, irreducibility, and positive diagonal elements. Then the resulting filter class preserves the average grey value and the positivity of the solution. Using the Perron–Frobenius theory we prove that the process converges to the unique eigenvector of the iteration matrix that is positive and has the same average grey value as the initial image. We show that our theory is directly applicable to explicit and implicit finite difference discretisations. We establish a stability condition for the explicit scheme, and we prove that the implicit scheme is absolutely stable. Both schemes converge to a steady state that solves the discrete elliptic equation. This steady state can be reached efficiently when the implicit scheme is equipped with a BiCGStab solver.
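A minimal 1-D sketch, following the abstract's requirements rather than the paper's exact stencil, of how vanishing column sums of the osmosis matrix make the explicit scheme preserve the average grey value:

```python
import numpy as np

def osmosis_matrix(d, h=1.0):
    """1-D linear osmosis operator A (periodic boundaries), discretizing
    A u = u_xx - (d u)_x with the drift d living on half-grid edges
    (d[i] sits between pixels i and i+1). All column sums of A vanish,
    so ones @ A = 0 and one explicit step u <- u + tau*(A @ u)
    preserves the mean grey value."""
    n = len(d)
    A = np.zeros((n, n))
    for i in range(n):
        ip, im = (i + 1) % n, (i - 1) % n
        A[i, im] += 1 / h**2; A[i, i] -= 2 / h**2; A[i, ip] += 1 / h**2  # diffusion
        A[i, i] -= d[i] / (2 * h); A[i, ip] -= d[i] / (2 * h)            # flux over edge i+1/2
        A[i, im] += d[im] / (2 * h); A[i, i] += d[im] / (2 * h)          # flux over edge i-1/2
    return A

d = 0.1 * np.random.randn(64)             # arbitrary drift field
A = osmosis_matrix(d)
assert np.allclose(A.sum(axis=0), 0.0)    # vanishing column sums
u = np.random.rand(64)
u_next = u + 0.2 * (A @ u)                # one explicit osmosis step
assert np.isclose(u_next.mean(), u.mean())
```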

Oliver Vogel, Kai Hagenburg, Joachim Weickert, Simon Setzer

L2-Stable Nonstandard Finite Differences for Anisotropic Diffusion

Anisotropic diffusion filters with a diffusion tensor are successfully used in many image processing and computer vision applications, ranging from image denoising over compression to optic flow computation. However, finding adequate numerical schemes is difficult: implementations may suffer from dissipative artifacts, poor approximation of rotation invariance, and they may lack provable stability guarantees. In our paper we propose a general framework for finite difference discretisations of anisotropic diffusion filters on a 3×3 stencil. It is based on a gradient descent of a discrete quadratic energy where the occurring derivatives are replaced by classical as well as the widely unknown nonstandard finite differences in the sense of Mickens. This allows a large class of space discretisations with two free parameters. Combining it with an explicit or semi-implicit time discretisation, we establish a general and easily applicable stability theory in terms of a decreasing Euclidean norm. Our framework comprises as many as seven existing space discretisations from the literature. However, we show that also novel schemes are possible that offer a better performance than existing ones. Our experimental evaluation confirms that the space discretisation can have a very substantial and often underestimated impact on the quality of anisotropic diffusion filters.

Joachim Weickert, Martin Welk, Marco Wickert

Relations between Amoeba Median Algorithms and Curvature-Based PDEs

This paper is concerned with the theoretical analysis of structure-adaptive median filter algorithms that approximate curvature-based PDEs for image filtering and segmentation. These so-called morphological amoeba filters, introduced by Lerallut et al. and further developed by Welk et al., achieve results similar to those of the well-known geodesic active contour and self-snakes PDEs. In the present work, the PDE approximated by amoeba active contours is derived in the general case. This PDE is structurally similar but not identical to the geodesic active contour equation. Implications for the qualitative behaviour of amoeba active contours as well as for the approximation of the pre-smoothed self-snakes equation are investigated.

Martin Welk

Image and Shape Analysis, Segmentation

Scale and Edge Detection with Topological Derivatives

A typical task of image segmentation is to partition a given image into regions of homogeneous property. In this paper we focus on the problem of additionally detecting the scales of the discontinuities of the image. The approach uses a recently developed iterative numerical algorithm for minimizing the Mumford-Shah functional which is based on topological derivatives. For the scale selection we use the squared norm of the gradient at edge points. During the iteration, this squared norm, viewed as a function of the iteration number, provides information about the different scales of the discontinuity sets. For realistic image data, the graph of the norm function is regularized using total variation minimization to provide a stable separation. We present the details of the algorithm and document various numerical experiments.

Guozhi Dong, Markus Grasmair, Sung Ha Kang, Otmar Scherzer

Active Contours for Multi-region Image Segmentation with a Single Level Set Function

Segmenting the image into an arbitrary number of parts is at the core of image understanding. Many formulations of the task have been suggested over the years. Among these are axiomatic functionals, which are hard to implement and analyze, while graph-based alternatives impose a non-geometric metric on the problem.

We propose a novel approach to tackle the problem of multiple-region segmentation for an arbitrary number of regions. The proposed framework allows generic region appearance models while avoiding metrication errors. Updating the segmentation in this framework is done by level set evolution. Yet, unlike most existing methods, the evolution is executed using a single non-negative level set function, through the Voronoi Implicit Interface Method for multi-phase interface evolution. We apply the proposed framework to synthetic and real images, with various numbers of regions, and compare it to state-of-the-art image segmentation algorithms.

Anastasia Dubrovina, Guy Rosman, Ron Kimmel

Regularized Discrete Optimal Transport

This article introduces a generalization of discrete Optimal Transport that includes a regularity penalty and a relaxation of the bijectivity constraint. The corresponding transport plan is solved by minimizing an energy which is a convexification of an integer optimization problem. We propose to use a proximal splitting scheme to perform the minimization on large scale imaging problems. For un-regularized relaxed transport, we show that the relaxation is tight and that the transport plan is an assignment. In the general case, the regularization prevents the solution from being an assignment, but we show that the corresponding map can be used to solve imaging problems. We show an illustrative application of this discrete regularized transport to color transfer between images. This imaging problem cannot be solved in a satisfying manner without relaxing the bijective assignment constraint because of mass variation across image color palettes. Furthermore, the regularization of the transport plan helps remove colorization artifacts due to noise amplification.
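For intuition, the unregularized discrete transport baseline that is relaxed and regularized here is a linear program; a minimal scipy sketch (not the authors' proximal splitting scheme):

```python
import numpy as np
from scipy.optimize import linprog

def ot_plan(mu, nu, C):
    """Unregularized discrete optimal transport between histograms mu (length n)
    and nu (length m) with ground cost C (n x m), solved as a linear program:
    minimize <C, X> subject to X >= 0, row sums mu, column sums nu.
    mu and nu must have equal total mass."""
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1   # row-sum constraints
    for j in range(m):
        A_eq[n + j, j::m] = 1            # column-sum constraints
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m)
```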

Sira Ferradans, Nicolas Papadakis, Julien Rabin, Gabriel Peyré, Jean-François Aujol

Variational Method for Computing Average Images of Biological Organs

In this paper, we develop a variational method for the computation of average images of biological organs in three-dimensional Euclidean space. The average of three-dimensional biological organs is an essential feature for discriminating abnormal organs from normal ones. We combine the diffusion registration technique and optical flow computation to compute the spatial deformation field between the average and each input organ. We define the average as the shape which minimises the total deformation.

Shun Inagaki, Atsushi Imiya, Hidekata Hontani, Shouhei Hanaoka, Yoshitaka Masutani

A Hierarchical Approach to Optimal Transport

A significant class of variational models, in connection with matching general data structures and the comparison of metric measure spaces, leads to computationally intensive dense linear assignment and mass transportation problems. To accelerate the computation we present an extension of the auction algorithm that exploits the regularity of the otherwise arbitrary cost function. The algorithm only takes into account a sparse subset of possible assignment pairs while still guaranteeing global optimality of the solution. These subsets are determined by a multiscale approach together with a hierarchical consistency check in order to solve problems at successively finer scales. While the gain in theoretical worst-case complexity is limited, the average-case complexity observed for a variety of realistic experimental scenarios yields a significant gain in computation time that increases with the problem size.

Bernhard Schmitzer, Christoph Schnörr

Layered Mean Shift Methods

Segmentation is one of the most discussed problems in image processing, and many different methods for it exist. The mean-shift method is one of them; it has been widely developed in recent years and is still being developed. In this paper, we propose a new method called Layered Mean Shift that stacks multiple mean-shift segmentations with different bandwidths to eliminate the over-segmentation problem and to find the most appropriate segment boundaries. This method effectively reduces the need for large kernels in the mean-shift method and therefore also significantly reduces the computational complexity.
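For context, a plain sketch of the Gaussian-kernel mean-shift iteration that Layered Mean Shift stacks at several bandwidths (the basic procedure only, not the layered variant):

```python
import numpy as np

def mean_shift_modes(points, bandwidth, iters=50):
    """Plain Gaussian-kernel mean shift: every point iteratively moves to
    the weighted mean of all points; points that converge to the same
    density mode form one segment. Large bandwidths merge segments but
    are costly, which is what the layered approach avoids."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i in range(len(modes)):
            w = np.exp(-np.sum((points - modes[i]) ** 2, axis=1)
                       / (2.0 * bandwidth ** 2))
            modes[i] = w @ points / w.sum()
    return modes
```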

Milan Šurkala, Karel Mozdřeň, Radovan Fusek, Eduard Sojka

Partial Optimality via Iterative Pruning for the Potts Model

We propose a novel method to obtain a part of an optimal non-relaxed integral solution for energy minimization problems with Potts interactions, also known as the minimal partition problem. The method empirically outperforms previous approaches like MQPBO and Kovtun's method in most of our test instances, and especially in hard ones. As a starting point our approach uses the solution of a commonly accepted convex relaxation of the problem. This solution is then iteratively pruned until our criterion for partial optimality is satisfied. Due to its generality, our method can employ any solver for the considered relaxed problem.

Paul Swoboda, Bogdan Savchynskyy, Jörg Kappes, Christoph Schnörr

Wimmelbild Analysis with Approximate Curvature Coding Distance Images

We consider the task of tracing out target figures hidden in teeming figure pictures that have come to be known as Wimmelbild(er). The Wimmelbild is a popular genre of visual puzzles; a timeless classic for children, artists and cognitive scientists.

Particularly suited to the considered task, we propose a diffuse representation which serves as a heuristic approximation mimicking curvature coding distance images. Curvature coding distance images have received increased attention in recent years. Typically, they are computed as solutions to variants of the Poisson PDE. The proposed approximation is based on erosion of the white space (background) followed by isotropic averaging, and hence does not require solving a PDE.

Julia Bergbauer, Sibel Tari

Defect Classification on Specular Surfaces Using Wavelets

In many practical problems, wavelet theory offers methods to handle data at different scales. It is highly adaptable for representing data in a compact and sparse way without loss of information. We present an approach to find and classify defects on specular surfaces using pointwise extracted features in scale space. Our results confirm the presumption that the stationary wavelet transform is better suited to localizing surface defects than the classical decimated transform. The classification is based on a support vector machine (SVM); the approach is furthermore applicable to empirically evaluating given wavelets for specific classification tasks and can therefore be used as a quality measure.

Andreas Hahn, Mathias Ziebarth, Michael Heizmann, Andreas Rieder

Backmatter
