
About This Book

This book constitutes the proceedings of the 8th International Conference on Scale Space and Variational Methods in Computer Vision, SSVM 2021, which took place during May 16-20, 2021. The conference was planned to take place in Cabourg, France, but changed to an online format due to the COVID-19 pandemic.
The 45 papers included in this volume were carefully reviewed and selected from a total of 64 submissions. They were organized in topical sections named as follows: scale space and partial differential equations methods; flow, motion and registration; optimization theory and methods in imaging; machine learning in imaging; segmentation and labelling; restoration, reconstruction and interpolation; and inverse problems in imaging.

Table of Contents

Frontmatter

Scale Space and Partial Differential Equations Methods

Frontmatter

Scale-Covariant and Scale-Invariant Gaussian Derivative Networks

This paper presents a hybrid approach between scale-space theory and deep learning, where a deep learning architecture is constructed by coupling parameterized scale-space operations in cascade. By sharing the learnt parameters between multiple scale channels, and by using the transformation properties of the scale-space primitives under scaling transformations, the resulting network becomes provably scale covariant. By additionally performing max pooling over the multiple scale channels, the resulting network architecture for image classification also becomes provably scale invariant. We investigate the performance of such networks on the MNISTLargeScale dataset, which contains rescaled images from the original MNIST dataset spanning a factor of 4 in scale for the training data and a factor of 16 for the testing data. It is demonstrated that the resulting approach allows for scale generalization, enabling good performance when classifying patterns at scales not present in the training data.

Tony Lindeberg
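
As a concrete illustration of the scale-channel construction described in the abstract, the following minimal Python sketch (not the author's code; the read-out weights, rectified pooling, and scale grid are placeholder assumptions) applies the same shared weights to scale-normalised Gaussian derivative responses in several scale channels and max-pools over the channels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivatives(image, sigma):
    """Scale-normalised first-order Gaussian derivatives at scale sigma."""
    Lx = sigma * gaussian_filter(image, sigma, order=(0, 1))
    Ly = sigma * gaussian_filter(image, sigma, order=(1, 0))
    return Lx, Ly

def scale_channel_score(image, weights, sigmas=(1, 2, 4, 8)):
    """Apply the same weights in every scale channel, then max-pool over scales."""
    scores = []
    for sigma in sigmas:
        responses = gaussian_derivatives(image, sigma)
        scores.append(sum(w * np.abs(r).mean() for w, r in zip(weights, responses)))
    return max(scores)  # max pooling over scale channels -> scale invariance

img = np.random.default_rng(0).random((64, 64))
print(scale_channel_score(img, weights=[0.5, -0.3]))
```

Because the weights are shared across channels and the Gaussian derivatives are scale-normalised, rescaling the input mainly shifts the responses between channels, which the final max absorbs.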

Quantisation Scale-Spaces

Recently, sparsification scale-spaces have been obtained as a sequence of inpainted images by gradually removing known image data. Thus, these scale-spaces rely on spatial sparsity. In the present paper, we show that sparsification of the co-domain, the set of admissible grey values, also constitutes scale-spaces with induced hierarchical quantisation techniques. These quantisation scale-spaces are closely tied to information theoretical measures for coding cost, and therefore particularly interesting for inpainting-based compression. Based on this observation, we propose a sparsification algorithm for the grey-value domain that outperforms uniform quantisation as well as classical clustering approaches.

Pascal Peter
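
The contrast between uniform quantisation and grey-value sparsification can be sketched in a few lines of Python. This is our illustration, not the paper's algorithm: the greedy criterion below uses plain MSE, whereas the paper ties the selection to information-theoretic coding cost.

```python
import numpy as np

def uniform_quantise(f, levels):
    centers = np.linspace(f.min(), f.max(), levels)
    return centers[np.abs(f[..., None] - centers).argmin(axis=-1)]

def greedy_sparsify(f, levels):
    """Drop, one at a time, the grey value whose removal raises the MSE least."""
    values = list(np.unique(f))
    while len(values) > levels:
        errs = []
        for v in values:
            rest = np.array([u for u in values if u != v])
            q = rest[np.abs(f[..., None] - rest).argmin(axis=-1)]
            errs.append(np.mean((f - q) ** 2))
        values.pop(int(np.argmin(errs)))
    rest = np.array(values)
    return rest[np.abs(f[..., None] - rest).argmin(axis=-1)]

f = (np.random.default_rng(0).integers(0, 32, (32, 32)) * 8).astype(float)
for q in (uniform_quantise(f, 8), greedy_sparsify(f, 8)):
    print("MSE:", np.mean((f - q) ** 2))
```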

Equivariant Deep Learning via Morphological and Linear Scale Space PDEs on the Space of Positions and Orientations

We present PDE-based Group Convolutional Neural Networks (PDE-G-CNNs) that generalize Group equivariant Convolutional Neural Networks (G-CNNs). In PDE-G-CNNs a network layer is a set of PDE-solvers where geometrically meaningful PDE-coefficients become trainable weights. The underlying PDEs are morphological and linear scale space PDEs on the homogeneous space $$\mathbb{M}_d$$ of positions and orientations. They provide an equivariant, geometrical PDE-design and model interpretability of the network. The network is implemented by morphological convolutions with approximations to kernels solving morphological $$\alpha$$-scale-space PDEs, and by linear convolutions with kernels solving linear $$\alpha$$-scale-space PDEs. In the morphological setting, the parameter $$\alpha$$ regulates soft max-pooling over balls, whereas in the linear setting the cases $$\alpha = 1/2$$ and $$\alpha = 1$$ correspond to Poisson and Gaussian scale spaces respectively. We show that our analytic approximation kernels are accurate and practical. We build on techniques introduced by Weickert and Burgeth, who revealed a key isomorphism between linear and morphological scale spaces via the Fourier-Cramér transform. It maps linear $$\alpha$$-stable Lévy processes to Bellman processes. We generalize this to $$\mathbb{M}_d$$ and exploit this relation between linear and morphological scale-space kernels. We present blood vessel segmentation experiments that show the benefits of PDE-G-CNNs compared to state-of-the-art G-CNNs: an increase in performance along with a huge reduction in network parameters.

Remco Duits, Bart Smets, Erik Bekkers, Jim Portegies

Nonlinear Spectral Processing of Shapes via Zero-Homogeneous Flows

In this work we extend the spectral total-variation framework and use it to analyze and process 2D manifolds embedded in 3D. Analysis is performed in the embedding space, so that “spectral arithmetics” manipulate the shape directly. This makes our approach highly versatile and accurate for feature control. We propose three such methods, based on non-Euclidean zero-homogeneous p-Laplace operators. Each method satisfies distinct characteristics, demonstrated through smoothing, enhancing and exaggerating filters.

Jonathan Brokman, Guy Gilboa

Total-Variation Mode Decomposition

In this work we analyze the Total Variation (TV) flow applied to one-dimensional signals. We formulate a relation between Dynamic Mode Decomposition (DMD), a dimensionality reduction method based on the Koopman operator, and the spectral TV decomposition. DMD is adapted by time rescaling to fit linearly decaying processes, such as the TV flow. For the flow with finite subgradient transitions, a closed-form solution of the rescaled DMD is formulated. In addition, a solution to the TV flow is presented which relies only on the initial condition and its corresponding subgradient. A very fast numerical algorithm is obtained which solves the entire flow by elementary subgradient updates.

Ido Cohen, Tom Berkov, Guy Gilboa
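
For readers unfamiliar with the TV flow itself, the following Python sketch evolves a 1D signal by explicit, eps-smoothed subgradient steps. This is our baseline illustration of the flow, not the authors' fast algorithm, which exploits the closed-form structure of the subgradient transitions instead of time stepping:

```python
import numpy as np

def tv_flow_step(u, tau=0.1, eps=1e-3):
    """Explicit step of u_t = (u_x / |u_x|)_x with eps-smoothed subgradient."""
    du = np.diff(u)
    p = du / np.sqrt(du ** 2 + eps ** 2)          # smoothed subgradient element
    div = np.concatenate(([p[0]], np.diff(p), [-p[-1]]))   # divergence, Neumann BC
    return u + tau * div

u = np.sign(np.linspace(-1, 1, 100)) + 0.1 * np.random.default_rng(0).standard_normal(100)
for _ in range(200):
    u = tv_flow_step(u)          # flat regions merge as the flow evolves
```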

Fast Morphological Dilation and Erosion for Grey Scale Images Using the Fourier Transform

The basic filters in mathematical morphology are dilation and erosion. They are defined by a flat or non-flat structuring element that is usually shifted pixel-wise over an image and a comparison process that takes place within the corresponding mask. Existing fast algorithms that realise dilation and erosion for grey value images are often limited with respect to size or shape of the structuring element. Usually their algorithmic complexity depends on these aspects. Many fast methods only address flat morphology. In this paper we propose a novel way to make use of the fast Fourier transform for the computation of dilation and erosion. Our method is by design highly flexible, as it can be used with flat and non-flat structuring elements of any size and shape. Moreover, its complexity does not depend on size or shape of the structuring element, but only on the number of pixels in the filtered images. We show experimentally that we obtain results of very reasonable quality with the proposed method.

Marvin Kahra, Vivek Sridhar, Michael Breuß
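
One plausible way to realise dilation via the FFT, shown in the hedged Python sketch below (our reconstruction of the idea, not necessarily the authors' exact scheme), is a log-sum-exp relaxation: the max-plus convolution max_y f(y)+b(x-y) is approximated by (1/p) log(exp(pf) * exp(pb)), where * is an ordinary convolution evaluated with one FFT. The parameter p controls the sharpness of the approximation; erosion follows analogously from -soft_dilation(-f, ...).

```python
import numpy as np
from scipy.signal import fftconvolve

def soft_dilation(f, b, p=50.0):
    """Approximate grey-value dilation by one FFT convolution.

    Shifting by the maxima keeps exp(p * .) in a numerically safe range.
    """
    fmax, bmax = f.max(), b.max()
    conv = fftconvolve(np.exp(p * (f - fmax)), np.exp(p * (b - bmax)), mode="same")
    return fmax + bmax + np.log(np.maximum(conv, 1e-300)) / p

f = np.random.default_rng(0).random((128, 128))
b = np.zeros((5, 5))           # flat 5x5 structuring element; non-flat works too
d = soft_dilation(f, b)        # close to the local 5x5 maximum for large p
```

Note that the cost of the FFT convolution depends only on the image size, matching the complexity claim in the abstract.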

Diffusion, Pre-smoothing and Gradient Descent

Nonlinear diffusion of images, both isotropic and anisotropic, has become a well-established and well-understood denoising tool during the last three decades. Moreover, it is a component of partial differential equation methods for various further tasks in image analysis. For the analysis of such methods, their understanding as gradient descents of energy functionals often plays an important role. Often the diffusivity or diffusion tensor field for nonlinear diffusion is computed from pre-smoothed image gradients. What was not clear so far was whether nonlinear diffusion with this pre-smoothing step still is the gradient descent for some energy functional. This question is answered to the negative in the present paper. Triggered by this result, possible modifications of the pre-smoothing step to retain the gradient descent property of diffusion are discussed.

Martin Welk

Local Culprits of Shape Complexity

Quantifying shape complexity is useful in several practical problems in addition to being interesting from a theoretical point of view. In this paper, instead of assigning a single global measure of complexity, we propose a distributed coding where to each point on the shape domain a measure of its contribution to complexity is assigned. We define the shape simplicity as the expressibility of the shape via a prototype shape. To keep discussions concrete we focus on a case where the prototype is a rectangle. Nevertheless, the constructions in the paper are valid in higher dimensions, where the prototype is a hyper-cuboid. Thanks to the connection between differential operators and mathematical morphology, the proposed construction naturally extends to the case where diamonds serve as the prototypes.

Mazlum Ferhat Arslan, Sibel Tari

Extension of Mathematical Morphology in Riemannian Spaces

Mathematical morphology remains an efficient image analysis tool due to its morphological scale-spaces capability. It can be formulated using partial differential equations that are in fact a particular case of the first order Hamilton-Jacobi equations for which viscosity solutions exist and are given by Hopf-Lax-Oleinik (HLO) formulas. In this study, we propose an extension of HLO formulas to Riemannian manifolds by considering a general Cauchy problem, and prove the existence of a unique viscosity solution. Some properties are derived, and an example on the hyperbolic ball is also provided.

El Hadji S. Diop, Alioune Mbengue, Bakary Manga, Diaraf Seck

Flow, Motion and Registration

Frontmatter

Multiscale Registration

In the seminal paper E. Tadmor, S. Nezzar and L. Vese, A multiscale image representation using hierarchical $$(BV, L^2)$$ decompositions, Multiscale Model. Simul., 2(4), 554–579, (2004), the authors introduce a multiscale image decomposition model providing a hierarchical decomposition of a given image into the sum of scale-varying components. In line with this framework, we extend the approach to the case of registration, a task which consists of mapping salient features of one image onto the corresponding ones in another, the underlying goal being to obtain a hierarchical decomposition of the deformation relating the two images, from a coarse deformation that encodes the main structural/geometrical deformation to more refined ones. To achieve this goal, we introduce a functional minimisation problem in a hyperelasticity setting by viewing the shapes to be matched as Ogden materials. This approach is complemented by hard constraints on the $$L^{\infty}$$-norm of both the Jacobian and its inverse, ensuring that the deformation is a bi-Lipschitz homeomorphism. Theoretical results emphasising the mathematical soundness of the model are provided, among which the existence of minimisers, a $$\varGamma$$-convergence result and an analysis of a suitable numerical algorithm, along with numerical simulations demonstrating the ability of the model to produce accurate hierarchical representations of deformations.

Noémie Debroux, Carole Le Guyader, Luminita A. Vese

Challenges for Optical Flow Estimates in Elastography

In this paper, we consider visualization of displacement fields via optical flow methods in elastographic experiments consisting of a static compression of a sample. We propose an elastographic optical flow method (EOFM) which takes into account experimental constraints, such as appropriate boundary conditions, the use of speckle information, as well as the inclusion of structural information derived from knowledge of the background material. We present numerical results based on both simulated and experimental data from an elastography experiment in order to demonstrate the relevance of our proposed approach.

Ekaterina Sherina, Lisa Krainz, Simon Hubmer, Wolfgang Drexler, Otmar Scherzer

An Anisotropic Selection Scheme for Variational Optical Flow Methods with Order-Adaptive Regularisation

Approaches based on order-adaptive regularisation belong to the most accurate variational methods for computing the optical flow. By locally deciding between first- and second-order regularisation, they are applicable to scenes with both fronto-parallel and ego-motion. So far, however, existing order-adaptive methods have a decisive drawback. While the involved first- and second-order smoothness terms already make use of anisotropic concepts, the underlying selection process itself is still isotropic in that sense that it locally chooses the same regularisation order for all directions. In our paper, we address this shortcoming. We propose a generalised order-adaptive approach that allows to select the local regularisation order for each direction individually. To this end, we split the order-adaptive regularisation across and along the locally dominant direction and perform an energy competition for each direction separately. This in turn offers another advantage. Since the parameters can be chosen differently for both directions, the approach allows for a better adaption to the underlying scene. Experiments for MPI Sintel and KITTI 2015 demonstrate the usefulness of our approach. They not only show improvements compared to an isotropic selection scheme. They also make explicit that our approach is able to improve the results from state-of-the-art learning-based approaches, if applied as a final refinement step – thereby achieving top results in both benchmarks.

Lukas Mehl, Cedric Beschle, Andrea Barth, Andrés Bruhn

Low-Rank Registration of Images Captured Under Unknown, Varying Lighting

Photometric stereo infers the 3D-shape of a surface from a sequence of images captured under moving lighting and a static camera. However, in real-world scenarios the viewing angle may slightly vary, due to vibrations induced by the camera shutter, or when the camera is hand-held. In this paper, we put forward a low-rank affine registration technique for images captured under unknown, varying lighting. Optimization is carried out using convex relaxation and the alternating direction method of multipliers. The proposed method is shown to significantly improve 3D-reconstruction by photometric stereo on unaligned real-world data, and an open-source implementation is made available.

Matthieu Pizenberg, Yvain Quéau, Abderrahim Elmoataz

Towards Efficient Time Stepping for Numerical Shape Correspondence

The computation of correspondences between shapes is a principal task in shape analysis. To this end, methods based on partial differential equations (PDEs) have been established, encompassing e.g. the classic heat kernel signature as well as numerical solution schemes for geometric PDEs. In this work we focus on the latter approach. We consider several time stepping schemes. The goal of this investigation is to assess whether one may identify a useful property of time integration methods for the shape analysis context. Thereby we investigate the dependence on the time step size, since the class of implicit schemes that are useful candidates in this context should ideally yield invariant behaviour with respect to this parameter. To this end we study the integration of the heat and wave equation on a manifold. In order to facilitate this study, we propose an efficient, unified model order reduction framework for these models. We show that specific $$l_0$$-stable schemes are favourable for numerical shape analysis. We give an experimental evaluation of the methods on classical TOSCA data sets.

Alexander Köhler, Michael Breuß

First Order Locally Orderless Registration

First Order Locally Orderless Registration (FLOR) is a scale-space framework for image density estimation used for defining image similarity, mainly for Image Registration. The Locally Orderless Registration framework was designed in principle to use zeroth-order information, providing image density estimates over three scales: image scale, intensity scale, and integration scale. We extend it to take first-order information into account and hint at higher-order information. We show how standard similarity measures extend into the framework. We study especially Sum of Squared Differences (SSD) and Normalized Cross-Correlation (NCC) but present the theory of how Normalised Mutual Information (NMI) can be included.

Sune Darkner, José D. T. Vidarte, François Lauze

Optimization Theory and Methods in Imaging

Frontmatter

First-Order Geometric Multilevel Optimization for Discrete Tomography

Discrete tomography (DT) naturally leads to a hierarchy of models of varying discretization levels. We employ multilevel optimization (MLO) to take advantage of this hierarchy: while working at the fine level we compute the search direction based on a coarse model. Importing concepts from information geometry to the n-orthotope, we propose a smoothing operator that only uses first-order information and incorporates constraints smoothly. We show that the proposed algorithm is well suited to the ill-posed reconstruction problem in DT, compare it to a recent MLO method that nonsmoothly incorporates box constraints and demonstrate its efficiency on several large-scale examples.

Jan Plier, Fabrizio Savarino, Michal Kočvara, Stefania Petra

Bregman Proximal Gradient Algorithms for Deep Matrix Factorization

A typical assumption for the convergence of first order optimization methods is the Lipschitz continuity of the gradient of the objective function. However, for many practical applications this assumption is violated. To overcome this issue extensions based on generalized proximity measures, known as Bregman distances, were introduced. This initiated the development of the Bregman Proximal Gradient (BPG) algorithms, which, however, rely on problem dependent Bregman distances. In this paper, we develop Bregman distances for deep matrix factorization problems, which yields a BPG algorithm with theoretical convergence guarantees, while allowing for a constant step size strategy. Moreover, we demonstrate that the algorithms based on the developed Bregman distance outperform their Euclidean counterparts as well as alternating minimization based approaches.

Mahesh Chandra Mukkamala, Felix Westerkamp, Emanuel Laude, Daniel Cremers, Peter Ochs
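
To make the BPG update concrete, here is a minimal generic instance in Python (our illustration under simplifying assumptions, not the paper's deep matrix factorization kernel): with the Shannon entropy as Bregman kernel on the probability simplex, the Bregman proximal gradient step reduces to a multiplicative mirror-descent update.

```python
import numpy as np

def bpg_simplex(grad, x0, tau=0.5, iters=100):
    """BPG with entropy kernel h(x) = sum x_i log x_i on the simplex."""
    x = x0.copy()
    for _ in range(iters):
        x = x * np.exp(-tau * grad(x))   # mirror step induced by the entropy kernel
        x /= x.sum()                     # Bregman projection back onto the simplex
    return x

Q = np.diag([3.0, 1.0, 0.5])
grad = lambda x: Q @ x                   # gradient of f(x) = 0.5 x^T Q x
print(bpg_simplex(grad, np.ones(3) / 3)) # mass moves toward low-curvature coordinates
```

The paper's contribution is designing kernels of this kind that match the geometry of deep matrix factorization objectives, so that a constant step size tau is provably admissible even without a Lipschitz gradient.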

Hessian Initialization Strategies for $$\ell$$-BFGS Solving Non-linear Inverse Problems

$$\ell$$-BFGS is the state-of-the-art optimization method for many large scale inverse problems. It has a small memory footprint and achieves superlinear convergence. The method approximates the Hessian based on an initial approximation and an update rule that models the current local curvature information. The initial approximation greatly affects the scaling of a search direction and the overall convergence of the method. We propose a novel, simple, and effective way to initialize the Hessian. Typically, the objective function is a sum of a data-fidelity term and a regularizer. Often, the Hessian of the data-fidelity is computationally challenging, but the regularizer's Hessian is easy to compute. We replace the Hessian of the data-fidelity with a scalar and keep the Hessian of the regularizer to initialize the Hessian approximation at every iteration. The scalar satisfies the secant equation in the sense of ordinary and total least squares and geometric mean regression. Our new strategy not only leads to faster convergence, but the quality of the numerical solutions is generally superior to simple scaling based strategies. Specifically, the proposed schemes based on the ordinary least squares formulation and geometric mean regression outperform the state-of-the-art schemes. The implementation of our strategy requires only a small change to a standard $$\ell$$-BFGS code. Our experiments on convex quadratic problems and non-convex image registration problems confirm the effectiveness of the proposed approach.

Hari Om Aggrawal, Jan Modersitzki
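
The three scalar fits named in the abstract can be written down directly. The Python sketch below (our reconstruction; the TLS slope uses one standard through-the-origin formula, which may differ in detail from the paper) fits gamma in the scalar secant model y ≈ gamma * s for a secant pair s = x_{k+1} - x_k, y = grad_{k+1} - grad_k:

```python
import numpy as np

def secant_scalars(s, y):
    """Ordinary LS, total LS, and geometric-mean-regression fits of y ~ gamma*s."""
    sy, ss, yy = s @ y, s @ s, y @ y
    ols = sy / ss                                            # ordinary least squares
    tls = (yy - ss + np.sqrt((yy - ss) ** 2 + 4 * sy ** 2)) / (2 * sy)  # total LS
    gmr = np.sign(sy) * np.sqrt(yy / ss)                     # geometric mean regression
    return ols, tls, gmr

rng = np.random.default_rng(0)
s = rng.standard_normal(100)
y = 3.0 * s + 0.1 * rng.standard_normal(100)
print(secant_scalars(s, y))   # all three estimates are close to 3
```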

Inverse Scale Space Iterations for Non-convex Variational Problems Using Functional Lifting

Non-linear filtering approaches allow to obtain decompositions of images with respect to a non-classical notion of scale. The associated inverse scale space flow can be obtained using the classical Bregman iteration applied to a convex, absolutely one-homogeneous regularizer. In order to extend these approaches to general energies with non-convex data term, we apply the Bregman iteration to a lifted version of the functional with sublabel-accurate discretization. We provide a condition for the subgradients of the regularizer under which this lifted iteration reduces to the standard Bregman iteration. We show experimental results for the convex and non-convex case.

Danielle Bednarski, Jan Lellmann

A Scaled and Adaptive FISTA Algorithm for Signal-Dependent Sparse Image Super-Resolution Problems

We propose a scaled adaptive version of the Fast Iterative Soft-Thresholding Algorithm, named S-FISTA, for the efficient solution of convex optimization problems with sparsity-enforcing regularization. S-FISTA couples a non-monotone backtracking procedure with a scaling strategy for the proximal–gradient step, which is particularly effective in situations where signal-dependent noise is present in the data. The proposed algorithm is tested on some image super-resolution problems where a sparsity-promoting regularization term is coupled with a weighted-$$\ell_2$$ data fidelity. Our numerical experiments show that S-FISTA allows for faster convergence in function values with respect to standard FISTA, as well as being an efficient inner solver for iteratively reweighted $$\ell_1$$ algorithms, thus reducing the overall computational times.

Marta Lazzaretti, Simone Rebegoldi, Luca Calatroni, Claudio Estatico
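
As a reference point for the S-FISTA variant, here is standard FISTA for the lasso-type problem min_x 0.5||Ax-b||^2 + lam||x||_1 in Python; the scaling of the proximal-gradient step and the non-monotone backtracking that distinguish S-FISTA are deliberately omitted from this baseline sketch:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, iters=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)  # prox-grad step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)                   # extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
x_hat = fista(A, A @ x_true, lam=0.1)        # recovers a sparse solution
```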

Convergence Properties of a Randomized Primal-Dual Algorithm with Applications to Parallel MRI

The Stochastic Primal-Dual Hybrid Gradient (SPDHG) was proposed by Chambolle et al. (2018) and is an efficient algorithm to solve some nonsmooth large-scale optimization problems. In this paper we prove its almost sure convergence for convex but not necessarily strongly convex functionals. We also look into its application to parallel Magnetic Resonance Imaging reconstruction in order to test performance of SPDHG. Our numerical results show that for a range of settings SPDHG converges significantly faster than its deterministic counterpart.

Eric B. Gutiérrez, Claire Delplancke, Matthias J. Ehrhardt

Machine Learning in Imaging

Frontmatter

Wasserstein Generative Models for Patch-Based Texture Synthesis

This work addresses texture synthesis by relying on the local representation of images through their patch distributions. The main contribution is a framework that imposes the patch distributions at several scales using optimal transport. This leads to two formulations. First, a pixel-based optimization method is proposed, based on discrete optimal transport. We show that it generalizes a well-known texture optimization method that uses iterated patch nearest-neighbor projections, while avoiding some of its shortcomings. Second, in a semi-discrete setting, we exploit differential properties of Wasserstein distances to learn a fully convolutional network for texture generation. Once estimated, this network produces realistic and arbitrarily large texture samples in real time. By directly dealing with the patch distribution of synthesized images, we also overcome limitations of state-of-the-art techniques, such as patch aggregation issues that usually lead to low frequency artifacts (e.g. blurring) in traditional patch-based approaches, or statistical inconsistencies (e.g. color or patterns) in machine learning approaches.

Antoine Houdard, Arthur Leclaire, Nicolas Papadakis, Julien Rabin

Sketched Learning for Image Denoising

The Expected Patch Log-Likelihood algorithm (EPLL) and its extensions have shown good performances for image denoising. It estimates a Gaussian mixture model (GMM) from a training database of image patches and it uses the GMM as a prior for denoising. In this work, we adapt the sketching framework to carry out the compressive estimation of Gaussian mixture models with low rank covariances for image patches. With this method, we estimate models from a compressive representation of the training data with a learning cost that does not depend on the number of items in the database. Our method adds another dimension reduction technique (low-rank modeling of covariances) to the existing sketching methods in order to reduce the dimension of model parameters and to add flexibility to the modeling. We test our model on synthetic data and real large-scale data for patch-based image denoising. We show that we can produce denoising performance close to the models estimated from the original training database, opening the way for the study of denoising strategies using huge patch databases.

Hui Shi, Yann Traonmilin, Jean-François Aujol

Translating Numerical Concepts for PDEs into Neural Architectures

We investigate what can be learned from translating numerical algorithms into neural networks. On the numerical side, we consider explicit, accelerated explicit, and implicit schemes for a general higher order nonlinear diffusion equation in 1D, as well as linear multigrid methods. On the neural network side, we identify corresponding concepts in terms of residual networks (ResNets), recurrent networks, and U-nets. These connections guarantee Euclidean stability of specific ResNets with a transposed convolution layer structure in each block. We present three numerical justifications for skip connections: as time discretisations in explicit schemes, as extrapolation mechanisms for accelerating those methods, and as recurrent connections in fixed point solvers for implicit schemes. Last but not least, we also motivate uncommon design choices such as nonmonotone activation functions. Our findings give a numerical perspective on the success of modern neural network architectures, and they provide design criteria for stable networks.

Tobias Alt, Pascal Peter, Joachim Weickert, Karl Schrader
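
The first of the paper's three justifications for skip connections is easy to make tangible: one explicit step of a nonlinear diffusion scheme already has the residual-block form x + F(x). The Python sketch below (our 1D illustration with a Perona-Malik-type flux, not the paper's general higher order setting) makes the correspondence explicit:

```python
import numpy as np

def diffusion_block(u, tau=0.2, lam=0.1):
    """One explicit step u + tau * div(g(u_x)), i.e. a residual block x + F(x)."""
    du = np.diff(u)                              # inner "convolution" K: forward difference
    g = du / (1.0 + (du / lam) ** 2)             # nonlinearity as activation function
    div = np.concatenate(([g[0]], np.diff(g), [-g[-1]]))   # transposed convolution -K^T
    return u + tau * div                         # the skip connection

u = np.sign(np.linspace(-1, 1, 200)) + 0.2 * np.random.default_rng(0).standard_normal(200)
for _ in range(50):                              # stacking blocks = marching in time
    u = diffusion_block(u)
```

The transposed-convolution structure inside each block is exactly the pattern for which the paper guarantees Euclidean stability.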

CLIP: Cheap Lipschitz Training of Neural Networks

Despite the large success of deep neural networks (DNN) in recent years, most neural networks still lack mathematical guarantees in terms of stability. For instance, DNNs are vulnerable to small or even imperceptible input perturbations, so-called adversarial examples, that can cause false predictions. This instability can have severe consequences in applications which influence the health and safety of humans, e.g., biomedical imaging or autonomous driving. While bounding the Lipschitz constant of a neural network improves stability, most methods rely on restricting the Lipschitz constants of each layer, which gives a poor bound for the actual Lipschitz constant. In this paper we investigate a variational regularization method named CLIP for controlling the Lipschitz constant of a neural network, which can easily be integrated into the training procedure. We mathematically analyze the proposed model, in particular discussing the impact of the chosen regularization parameter on the output of the network. Finally, we numerically evaluate our method on both a nonlinear regression problem and the MNIST and Fashion-MNIST classification databases, and compare our results with a weight regularization approach.

Leon Bungert, René Raab, Tim Roith, Leo Schwinn, Daniel Tenbrinck

Variational Models for Signal Processing with Graph Neural Networks

This paper is devoted to signal processing on point-clouds by means of neural networks. Nowadays, state-of-the-art in image processing and computer vision is mostly based on training deep convolutional neural networks on large datasets. While it is also the case for the processing of point-clouds with Graph Neural Networks (GNN), the focus has been largely given to high-level tasks such as classification and segmentation using supervised learning on labeled datasets such as ShapeNet. Yet, such datasets are scarce and time-consuming to build depending on the target application. In this work, we investigate the use of variational models for such GNN to process signals on graphs for unsupervised learning. Our contributions are two-fold. We first show that some existing variational-based algorithms for signals on graphs can be formulated as Message Passing Networks (MPN), a particular instance of GNN, making them computationally efficient in practice when compared to standard gradient-based machine learning algorithms. Secondly, we investigate the unsupervised learning of feed-forward GNN, either by direct optimization of an inverse problem or by model distillation from variational-based MPN.

Amitoz Azad, Julien Rabin, Abderrahim Elmoataz

Synthetic Images as a Regularity Prior for Image Restoration Neural Networks

Deep neural networks have recently surpassed other image restoration methods which rely on hand-crafted priors. However, such networks usually require large databases and need to be retrained for each new modality. In this paper, we show that we can reach near-optimal performances by training them on a synthetic dataset made of realizations of a dead leaves model, both for image denoising and super-resolution. The simplicity of this model makes it possible to create large databases with only a few parameters. We also show that training a network with a mix of natural and synthetic images does not affect results on natural images while improving the results on dead leaves images, which are classically used for evaluating the preservation of textures. We thoroughly describe the image model and its implementation, before giving experimental results.

Raphaël Achddou, Yann Gousseau, Saïd Ladjal

Geometric Deformation on Objects: Unsupervised Image Manipulation via Conjugation

A novel two-stage approach is proposed for image manipulation and generation. User-interactive image deformation is performed through contour editing, which takes place in the latent edge space with both color and gradient information. The output of the editing is then fed into a multi-scale representation of the image to recover a high-quality output. The model is flexible in terms of transferability and training efficiency.

Changqing Fu, Laurent D. Cohen

Learning Local Regularization for Variational Image Restoration

In this work, we propose a framework to learn a local regularization model for solving general image restoration problems. This regularizer is defined with a fully convolutional neural network that sees the image through a receptive field corresponding to small image patches. The regularizer is then learned as a critic between unpaired distributions of clean and degraded patches using a Wasserstein generative adversarial networks based energy. This yields a regularization function that can be incorporated in any image restoration problem. The efficiency of the framework is finally shown on denoising and deblurring applications.

Jean Prost, Antoine Houdard, Andrés Almansa, Nicolas Papadakis

Segmentation and Labelling

Frontmatter

On the Correspondence Between Replicator Dynamics and Assignment Flows

Assignment flows are smooth dynamical systems for data labeling on graphs. Although they exhibit structural similarities with the well-studied class of replicator dynamics, it is nontrivial to apply existing tools to their analysis. We propose an embedding of the underlying assignment manifold into the interior of a single probability simplex. Under this embedding, a large class of assignment flows are pushed to much higher-dimensional replicator dynamics. We demonstrate the applicability of this result by transferring a spectral decomposition of replicator dynamics to assignment flows.

Bastian Boll, Jonathan Schwarz, Christoph Schnörr

Learning Linear Assignment Flows for Image Labeling via Exponential Integration

We introduce a novel algorithm for estimating optimal parameters of linear assignment flows for image labeling. This flow is determined by the solution of a linear ODE in terms of a high-dimensional integral. A formula for the gradient of the solution with respect to the flow parameters is derived and approximated using Krylov subspace techniques. Riemannian descent in the parameter space makes it possible to determine optimal parameters for a $$512 \times 512$$ image in less than 10 s, without the need to backpropagate errors or to solve an adjoint equation. Numerical experiments demonstrate a high generative model expressivity despite the linearity of the assignment flow parametrization.

Alexander Zeilmann, Stefania Petra, Christoph Schnörr
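
The core numerical primitive here is evaluating the action of a matrix exponential on a vector without ever forming exp(A). In Python this is available off the shelf, as the hedged sketch below shows; note that scipy's expm_multiply uses a scaling-based algorithm of Al-Mohy and Higham rather than the Krylov subspace techniques of the paper, but it serves the same purpose:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import expm_multiply

n = 2000
A = sparse_random(n, n, density=1e-3, format="csr", random_state=0)  # sparse flow matrix
v = np.ones(n)
u_T = expm_multiply(A, v)   # u(1) = exp(A) v, computed matrix-free
```

For the large, sparse systems arising from a 512 x 512 image, this matrix-free evaluation is what makes the gradient approximation tractable.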

On the Geometric Mechanics of Assignment Flows for Metric Data Labeling

Assignment flows are a general class of dynamical models for context dependent data classification on graphs. These flows evolve on the product manifold of probability simplices, called assignment manifold, and are governed by a system of coupled replicator equations. In this paper, we adopt the general viewpoint of Lagrangian mechanics on manifolds and show that assignment flows satisfy the Euler-Lagrange equations associated with an action functional. Besides providing a novel interpretation of assignment flows, our result rectifies the analogous statement of a recent paper devoted to uncoupled replicator equations evolving on a single simplex, and generalizes it to coupled replicator equations and assignment flows.

Fabrizio Savarino, Peter Albers, Christoph Schnörr

A Deep Image Prior Learning Algorithm for Joint Selective Segmentation and Registration

Effective variational models exist for either image segmentation or image registration for a given class of problems, though robustness is a longstanding issue. This paper proposes a new and effective variational model that aims to segment a pair of images through a joint model with registration, with the advantage of only requiring geometric prior information on one image (instead of two images) and obtaining selective segmentation on both images. Moreover we develop a deep image prior based learning algorithm to achieve the same segmentation and registration results by dropping the regularisation terms from the loss function. Numerical experiments show the quality of the results obtained with the new approach.

Liam Burrows, Ke Chen, Francesco Torella

Restoration, Reconstruction and Interpolation

Frontmatter

Inpainting-Based Video Compression in FullHD

Compression methods based on inpainting are an evolving alternative to classical transform-based codecs for still images. Attempts to apply these ideas to video compression are rare, since reaching real-time performance is very challenging. Therefore, current approaches focus on simplified frame-by-frame reconstructions that ignore temporal redundancies. As a remedy, we propose a highly efficient, real-time capable prediction and correction approach that fully relies on partial differential equations (PDEs) in all steps of the codec: Dense variational optic flow fields yield accurate motion-compensated predictions, while homogeneous diffusion inpainting is applied for intra prediction. To compress residuals, we introduce a new highly efficient block-based variant of pseudodifferential inpainting. Our novel architecture outperforms other inpainting-based video codecs in terms of both quality and speed. For the first time in inpainting-based video compression, we can decompress FullHD (1080p) videos in real-time with a fully CPU-based implementation, outperforming previous approaches by roughly one order of magnitude.

Sarah Andris, Pascal Peter, Rahul Mohideen Kaja Mohideen, Joachim Weickert, Sebastian Hoffmann
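
Homogeneous diffusion inpainting, used here for intra prediction, amounts to solving the Laplace equation on the unknown pixels while keeping the stored pixels fixed. The following minimal Python sketch (our simplification: plain Jacobi iterations and periodic boundaries via np.roll, far from the codec's real-time solver) shows the principle:

```python
import numpy as np

def inpaint(data, mask, iters=2000):
    """Homogeneous diffusion inpainting; mask == True marks known, fixed pixels."""
    u = data.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))   # Laplace stencil
        u = np.where(mask, data, avg)        # relax unknowns, keep known data
    return u

rng = np.random.default_rng(0)
img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
mask = rng.random(img.shape) < 0.05          # store only 5% of the pixels
print(np.abs(inpaint(img, mask) - img).mean())
```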

Sparsity-Aided Variational Mesh Restoration

We propose a variational method for recovering discrete surfaces from noisy observations which promotes sparsity in the normal variation more accurately than $$\ell_1$$ norm (total variation) and $$\ell_0$$ pseudo-norm regularization methods by incorporating a parameterized non-convex penalty function. This results in denoised surfaces with enhanced flat regions and maximally preserved sharp features, including edges and corners. Unlike classical two-step mesh denoising approaches, we propose a single, effective optimization model which is efficiently solved by an instance of the Alternating Direction Method of Multipliers. Experiments are presented which strongly indicate that the sparsity-aided formulation holds the potential for accurate restorations even in the presence of high noise.

Martin Huska, Serena Morigi, Giuseppe Antonio Recupero

Lossless PDE-based Compression of 3D Medical Images

Inpainting with Partial Differential Equations (PDEs) has previously been used as a basis for lossy image compression. For medical images, lossless compression is often considered to be safer, given that even subtle details could be diagnostically relevant. In this work, we introduce a PDE-based codec that achieves competitive compression rates for lossless image compression. It is based on coding the differences between the original image and its PDE-based reconstruction. These differences often have lower entropy than the original image, and can therefore be coded more efficiently. We optimize this idea via an iterative reconstruction scheme, and a separate coding of empty space, which takes up a considerable fraction of the field of view in many 3D medical images. We demonstrate that our PDE-based codec compares favorably to previously established lossless codecs. We also investigate the individual benefit from each ingredient of our codec on multiple examples, explore the effect of using homogeneous, edge enhancing, and fourth-order anisotropic diffusion, and discuss the choice of contrast parameters.

Ikram Jumakulyyev, Thomas Schultz
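
The codec's central observation, that the residual between an image and its PDE-based reconstruction typically has lower entropy than the image itself, is easy to verify on toy data. The Python sketch below (our illustration; the ramp image and integer noise are stand-ins for real medical data and the actual PDE reconstruction) compares empirical entropies:

```python
import numpy as np

def entropy(x):
    """Empirical zeroth-order entropy in bits per symbol."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
smooth = np.outer(np.arange(64), np.arange(64)) // 32    # smooth "image" content
img = (smooth + rng.integers(-2, 3, smooth.shape)).astype(np.int32)
reconstruction = smooth                                  # stand-in for PDE inpainting
residual = img - reconstruction
print(entropy(img), entropy(residual))                   # residual codes more cheaply
```

Coding the low-entropy residual alongside the inpainting data is what makes the scheme lossless while keeping competitive rates.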

Splines for Image Metamorphosis

Cubic splines are a classical tool for higher order interpolation of points in Euclidean space, known to minimize the integral of the squared acceleration along the interpolation path. This paper transfers this method to the smooth interpolation of key frames in the space of images. To this end, the metamorphosis model based on a simultaneous transport of image intensities and a modulation of intensities along motion trajectories is generalized. The proposed spline energy combines quadratic functionals of the Eulerian motion acceleration and of the second material derivative representing an acceleration in the change of intensities along motion paths. A variational time discretization of this spline model is proposed and the convergence to a suitably relaxed time continuous model is discussed using the tool of $$\varGamma$$-convergence. In particular, this also allows us to establish the existence of an optimal spline path interpolating given key frame images. The spatial discretization is based on a finite difference scheme and a stable spline interpolation. A variety of numerical examples demonstrates the robustness and versatility of the proposed method on real images, using a variant of the iPALM algorithm for the minimization of the fully discrete energy functional.

Jorge Justiniano, Marko Rajković, Martin Rumpf

Residual Whiteness Principle for Automatic Parameter Selection in $$\ell_2$$-$$\ell_2$$ Image Super-Resolution Problems

We propose an automatic parameter selection strategy for variational image super-resolution of blurred and down-sampled images corrupted by additive white Gaussian noise (AWGN) with unknown standard deviation. By exploiting particular properties of the operators describing the problem in the frequency domain, our strategy selects the optimal parameter as the one optimising a suitable residual whiteness measure. Numerical tests show the effectiveness of the proposed strategy for generalised $$\ell_2$$-$$\ell_2$$ Tikhonov problems.

Monica Pragliola, Luca Calatroni, Alessandro Lanza, Fiorella Sgallari
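
A hedged 1D Python sketch of the principle: for a Fourier-diagonal Tikhonov deblurring problem, scan the regularization parameter and keep the one whose residual looks most like white noise. The whiteness measure below (off-zero-lag autocorrelation energy) is one common formulation and not necessarily the paper's exact functional, and the deconvolution setup is our toy example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.repeat(rng.random(16), 16)                        # piecewise-constant signal
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
H = np.fft.fft(np.fft.ifftshift(h / h.sum()))            # diagonalised blur operator
b = np.fft.ifft(H * np.fft.fft(x)).real + 0.01 * rng.standard_normal(n)

def whiteness(r):
    ac = np.fft.ifft(np.abs(np.fft.fft(r)) ** 2).real    # circular autocorrelation
    return np.sum(ac[1:] ** 2) / ac[0] ** 2              # energy away from lag zero

def tikhonov(b, H, lam):
    return np.fft.ifft(np.conj(H) * np.fft.fft(b) / (np.abs(H) ** 2 + lam)).real

lams = np.logspace(-6, 0, 25)
scores = [whiteness(b - np.fft.ifft(H * np.fft.fft(tikhonov(b, H, lam))).real)
          for lam in lams]
print("selected lambda:", lams[int(np.argmin(scores))])
```

The appeal of this criterion, as the abstract stresses, is that it needs no knowledge of the noise standard deviation.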

Inverse Problems in Imaging

Frontmatter

Total Deep Variation for Noisy Exit Wave Reconstruction in Transmission Electron Microscopy

Transmission electron microscopes (TEMs) are ubiquitous devices for high-resolution imaging on an atomic level. A key problem related to TEMs is the reconstruction of the exit wave, which is the electron signal at the exit plane of the examined specimen. Frequently, this reconstruction is cast as an ill-posed nonlinear inverse problem. In this work, we integrate the data-driven total deep variation regularizer to reconstruct the exit wave in this inverse problem. In several numerical experiments, the applicability of the proposed method is demonstrated for different materials.

Thomas Pinetz, Erich Kobler, Christian Doberstein, Benjamin Berkels, Alexander Effland

GMM Based Simultaneous Reconstruction and Segmentation in X-Ray CT Application

In this paper, we propose a new simultaneous reconstruction and segmentation (SRS) model in X-ray computed tomography (CT). The new SRS model is based on the Gaussian mixture model (GMM). In order to transform the non-separable log-sum term in the GMM into a form that can be solved easily, we introduce an auxiliary variable, which in fact plays a segmentation role. The new SRS model is much simpler compared with models derived from the hidden Markov measure field model (HMMFM). Numerical results show that the proposed model achieves better results than other methods, while the CPU time is greatly reduced.

Shi Yan, Yiqiu Dong

Phase Retrieval via Polarization in Dynamical Sampling

In this paper we consider the nonlinear inverse problem of phase retrieval in the context of dynamical sampling. Whereas phase retrieval deals with the recovery of signals and images from phaseless measurements, dynamical sampling was introduced by Aldroubi et al. in 2015 as a tool to recover diffusion fields from spatiotemporal samples. Considering finite-dimensional signals evolving in time under the action of a known matrix, our aim is to recover the signal up to global phase in a stable way from the absolute value of certain space-time measurements. First, we state necessary conditions for the dynamical system of sampling vectors to make the recovery of the unknown signal possible. The conditions deal with the spectrum of the given matrix and the initial sampling vector. Then, assuming that we have access to a specific set of further measurements related to aligned sampling vectors, we provide a feasible procedure to recover almost every signal up to global phase using polarization techniques. Moreover, we show that by adding extra conditions such as full spark, the recovery of all signals is possible without exceptions.

Robert Beinert, Marzieh Hasannasab

Invertible Neural Networks Versus MCMC for Posterior Reconstruction in Grazing Incidence X-Ray Fluorescence

Grazing incidence X-ray fluorescence is a non-destructive technique for analyzing the geometry and compositional parameters of nanostructures appearing e.g. in computer chips. In this paper, we propose to reconstruct the posterior parameter distribution given a noisy measurement generated by the forward model by an appropriately learned invertible neural network. This network resembles the transport map from a reference distribution to the posterior. We demonstrate by numerical comparisons that our method can compete with established Markov Chain Monte Carlo approaches, while being more efficient and flexible in applications.

Anna Andrle, Nando Farchmin, Paul Hagemann, Sebastian Heidenreich, Victor Soltwisch, Gabriele Steidl

Adversarially Learned Iterative Reconstruction for Imaging Inverse Problems

In numerous practical applications, especially in medical image reconstruction, it is often infeasible to obtain a large ensemble of ground-truth/measurement pairs for supervised learning. Therefore, it is imperative to develop unsupervised learning protocols that are competitive with supervised approaches in performance. Motivated by the maximum-likelihood principle, we propose an unsupervised learning framework for solving ill-posed inverse problems. Instead of seeking pixel-wise proximity between the reconstructed and the ground-truth images, the proposed approach learns an iterative reconstruction network whose output matches the ground-truth in distribution. Considering tomographic reconstruction as an application, we demonstrate that the proposed unsupervised approach not only performs on par with its supervised variant in terms of objective quality measures, but also successfully circumvents the issue of over-smoothing that supervised approaches tend to suffer from. The improvement in reconstruction quality comes at the expense of higher training complexity, but, once trained, the reconstruction time remains the same as its supervised counterpart.

Subhadip Mukherjee, Ozan Öktem, Carola-Bibiane Schönlieb

Towards Off-the-grid Algorithms for Total Variation Regularized Inverse Problems

We introduce an algorithm to solve linear inverse problems regularized with the total (gradient) variation in a gridless manner. Contrary to most existing methods, that produce an approximate solution which is piecewise constant on a fixed mesh, our approach exploits the structure of the solutions and consists in iteratively constructing a linear combination of indicator functions of simple polygons.

Yohann De Castro, Vincent Duval, Romain Petit

Multi-frame Super-Resolution from Noisy Data

Obtaining high resolution images from low resolution data with clipped noise is algorithmically challenging due to the ill-posed nature of the problem. So far such problems have hardly been tackled, and the few existing approaches use simplistic regularisers. We show the usefulness of two adaptive regularisers based on anisotropic diffusion ideas: Apart from evaluating the classical edge-enhancing anisotropic diffusion regulariser, we introduce a novel non-local one with one-sided differences and superior performance. It is termed sector diffusion. We combine it with all six variants of the classical super-resolution observational model that arise from permutations of its three operators for warping, blurring, and downsampling. Surprisingly, the evaluation in a practically relevant noisy scenario produces a different ranking than the one in the noise-free setting in our previous work (SSVM 2017).

Kireeti Bodduna, Joachim Weickert, Marcelo Cárdenas

Backmatter
