
About This Book

This book constitutes the thoroughly refereed proceedings of the 18th International Conference on Discrete Geometry for Computer Imagery, DGCI 2014, held in Siena, Italy, in September 2014. The 34 revised full papers presented were carefully selected from 60 submissions. The papers are organized in topical sections on Models for Discrete Geometry; Discrete and Combinatorial Topology; Geometric Transforms; Discrete Shape Representation, Recognition and Analysis; Discrete Tomography; Morphological Analysis; Discrete Modelling and Visualization; and Discrete and Combinatorial Tools for Image Segmentation and Analysis.



Models for Discrete Geometry

Facet Connectedness of Discrete Hyperplanes with Zero Intercept: The General Case

A digital discrete hyperplane in ℤᵈ is defined by a normal vector v, a shift μ, and a thickness θ. The set of thicknesses θ for which the hyperplane is connected is a right unbounded interval of ℝ. Its lower bound, called the connecting thickness of v with shift μ, may be computed by means of the fully subtractive algorithm. A careful study of the behaviour of this algorithm allows us to give exhaustive results about the connectedness of the hyperplane at the connecting thickness in the case μ = 0. We show that it is connected if and only if the sequence of vectors computed by the algorithm reaches in finite time a specific set of vectors which has been shown to be Lebesgue negligible by Kraaikamp & Meester.

Eric Domenjoud, Xavier Provençal, Laurent Vuillon

About the Maximum Cardinality of the Digital Cover of a Curve with a Given Length

We prove that the number of pixels (with pixels taken as unit lattice squares) of the digitization of a curve Γ of Euclidean length l is less than $ 3 \lfloor \frac{l}{\sqrt 2} \rfloor +4$, which improves by a ratio of $ \frac{ 4 \sqrt 2 }{3} $ the previously known bound of $4 \lfloor l \rfloor$ [3]. This new bound is the exact maximum that can be reached. Moreover, we prove that for a given number of squares, the Minimal Length Covering Curves are polygonal curves with integer vertices, an appropriate number of diagonal steps, and 0, 1 or 2 vertical or horizontal steps. This allows us to express both the maximum number of squares that can be crossed by a curve of given length, and the minimal length necessary to cross a given number of squares. Extensions of these results are discussed with other distances, in higher dimensions, and with other digitization schemes.

Yan Gérard, Antoine Vacavant
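As a sanity check on the magnitude of the improvement, the two bounds stated in the abstract can be compared numerically; a minimal Python sketch (the function names are ours, not the paper's):

```python
import math

def new_bound(length):
    # Bound from this paper: 3 * floor(l / sqrt(2)) + 4 pixels.
    return 3 * math.floor(length / math.sqrt(2)) + 4

def old_bound(length):
    # Previously known bound from [3]: 4 * floor(l) pixels.
    return 4 * math.floor(length)

# Asymptotically old_bound / new_bound tends to 4*sqrt(2)/3 ≈ 1.886.
for l in (10.0, 100.0, 1000.0):
    print(l, new_bound(l), old_bound(l))
```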

Binary Pictures with Excluded Patterns

The notion of a pattern within a binary picture (polyomino) has been introduced and studied in [3], and resembles the notion of pattern containment within permutations. The main goal of this paper is to extend the studies of [3] by adopting a more geometrical approach: we use the notion of pattern avoidance in order to recognize or describe families of polyominoes defined by means of geometrical constraints or combinatorial properties. Moreover, we extend the notion of pattern in a polyomino by introducing generalized polyomino patterns, so as to be able to describe more families of polyominoes known in the literature.

Daniela Battaglino, Andrea Frosini, Veronica Guerrini, Simone Rinaldi, Samanta Socci

Discrete and Combinatorial Topology

2D Topological Map Isomorphism for Multi-Label Simple Transformation Definition

A 2D topological map allows one to fully describe the topology of a labeled image. In this paper we introduce new tools for comparing the topology of two labeled images. First we define 2D topological map isomorphism. We show that isomorphic topological maps correspond to homeomorphic embeddings in the plane, and we give a polynomial-time algorithm for deciding topological map isomorphism. Then we use this notion to give a generic definition of multi-label simple transformation as a set of transformations of labels of pixels which does not modify the topology of the labeled image. We illustrate the interest of multi-label simple transformation by generating look-up tables of small transformations preserving the topology.

Guillaume Damiand, Tristan Roussillon, Christine Solnon

Isthmus-Based Parallel and Asymmetric 3D Thinning Algorithms

Critical kernels constitute a general framework settled in the context of abstract complexes for the study of parallel thinning in any dimension. We take advantage of the properties of this framework, to propose a generic thinning scheme for obtaining “thin” skeletons from objects made of voxels. From this scheme, we derive algorithms that produce curvilinear or surface skeletons, based on the notion of 1D or 2D isthmus.

Michel Couprie, Gilles Bertrand

Completions and Simple Homotopy

We propose an extension of simple homotopy by considering homotopic pairs. Intuitively, a homotopic pair is a couple of objects (X, Y) such that Y is included in X and (X, Y) may be transformed to a trivial couple by simple homotopic deformations that keep Y inside X. Thus, these objects are linked by a “relative homotopy relation”.

We formalize these notions by means of completions, which are inductive properties expressed in a declarative way. In a previous work, through the notion of a dyad, we showed that completions were able to handle couples of objects that are linked by a certain “relative homology relation”.

The main result of the paper is a theorem that makes clear the link between homotopic pairs and dyads. Thus, we prove that, in the unified framework of completions, it is possible to handle notions relative to both homotopy and homology.

Gilles Bertrand

Geometric Transforms

2D Subquadratic Separable Distance Transformation for Path-Based Norms

In many applications, separable algorithms have demonstrated their efficiency in performing high-performance and parallel volumetric computations, such as distance transformation or medial axis extraction. In the literature, several authors have discussed conditions on the metric to be considered in a separable approach. In this article, we present generic separable algorithms to efficiently compute Voronoi maps and distance transformations for a large class of metrics. Focusing on path-based norms (chamfer masks, neighborhood sequences, ...), we detail a subquadratic algorithm to compute such volumetric transformations in dimension 2, for shapes in an N × N domain with a chamfer norm of size m.



David Coeurjolly
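The separable principle mentioned above — processing the volume one dimension at a time — can be illustrated for the ordinary (squared) Euclidean metric. This sketch follows the classic Saito–Toriwaki two-pass scheme with a naive column pass; it is not the subquadratic path-based-norm algorithm of the paper:

```python
def sedt(mask):
    """Squared Euclidean distance transform of a binary image.

    mask[y][x] == 1 marks foreground; the distance is to the nearest
    background pixel. Two separable passes: a row pass (1D distance
    along x), then a column pass combining squared y offsets. The
    column pass is the naive O(N)-per-pixel version for clarity.
    """
    H, W = len(mask), len(mask[0])
    INF = (H + W) ** 2
    # Pass 1: per row, distance along x to the nearest background pixel.
    g = [[INF] * W for _ in range(H)]
    for y in range(H):
        d = INF
        for x in range(W):                  # forward scan
            d = 0 if mask[y][x] == 0 else min(d + 1, INF)
            g[y][x] = d
        d = INF
        for x in range(W - 1, -1, -1):      # backward scan
            d = 0 if mask[y][x] == 0 else min(d + 1, INF)
            g[y][x] = min(g[y][x], d)
    # Pass 2: per column, minimise g(v)^2 + (y - v)^2 over rows v.
    out = [[0] * W for _ in range(H)]
    for x in range(W):
        for y in range(H):
            out[y][x] = min(g[v][x] ** 2 + (y - v) ** 2 for v in range(H))
    return out
```

The two passes never interact across dimensions, which is exactly what makes the approach parallelisable per row and per column.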

Anti-Aliased Euclidean Distance Transform on 3D Sampling Lattices

The Euclidean distance transform (EDT) is used in many essential operations in image processing, such as basic morphology, level sets, registration and path finding. The anti-aliased Euclidean distance transform (AAEDT), previously presented for two-dimensional images, uses the gray-level information in, for example, area sampled images to calculate distances with sub-pixel precision. Here, we extend the studies of AAEDT to three dimensions, and to the Body-Centered Cubic (BCC) and Face-Centered Cubic (FCC) lattices, which are, in many respects, considered the optimal three-dimensional sampling lattices. We compare different ways of converting gray-level information to distance values, and find that the lesser directional dependencies of optimal sampling lattices lead to better approximations of the true Euclidean distance.

Elisabeth Linnér, Robin Strand

Efficient Neighbourhood Computing for Discrete Rigid Transformation Graph Search

Rigid transformations are involved in a wide variety of image processing applications, including image registration. In this context, we recently proposed to deal with the associated optimization problem from a purely discrete point of view, using the notion of discrete rigid transformation (DRT) graph. In particular, a local search scheme within the DRT graph to compute a locally optimal solution without any numerical approximation was formerly proposed. In this article, we extend this study, with the purpose of reducing the algorithmic complexity of the proposed optimization scheme. To this end, we propose a novel algorithmic framework for just-in-time computation of sub-graphs of interest within the DRT graph. Experimental results illustrate the potential usefulness of our approach for image registration.

Yukiko Kenmochi, Phuc Ngo, Hugues Talbot, Nicolas Passat

The Minimum Barrier Distance – Stability to Seed Point Position

Distance and path-cost functions have been used for image segmentation in various forms, e.g., region growing or live-wire boundary tracing using interactive user input. Different approaches are associated with different fundamental advantages as well as difficulties. In this paper, we investigate the stability of segmentation with respect to perturbations in seed point position for a recently introduced pseudo-distance method referred to as the minimum barrier distance. Conditions are sought for which segmentation results are invariant with respect to the position of seed points, and a proof of their correctness is presented. A notion of interface is introduced, defining the object-background interface at various gradations, and its relation to the stability of segmentation is examined. Finally, experimental results are presented examining different aspects of the stability of segmentation results to seed point position.

Robin Strand, Filip Malmberg, Punam K. Saha, Elisabeth Linnér
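The minimum barrier distance referred to above takes, as the cost of a path, the difference between the maximum and minimum intensity along it. A common Dijkstra-style approximation, propagating (max, min) pairs per pixel, can be sketched as follows (an illustrative sketch of the distance, not the authors' stability analysis):

```python
import heapq

def mbd(grid, seed):
    """Approximate minimum barrier distance on a 2D grid from one seed.

    The barrier of a path is max(values) - min(values) along it; this
    greedy propagation of (barrier, path_max, path_min) states is the
    usual Dijkstra-like approximation, not an exact computation.
    """
    H, W = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * W for _ in range(H)]
    sy, sx = seed
    v = grid[sy][sx]
    dist[sy][sx] = 0
    heap = [(0, v, v, sy, sx)]  # (barrier, path_max, path_min, y, x)
    while heap:
        d, hi, lo, y, x = heapq.heappop(heap)
        if d > dist[y][x]:
            continue                      # stale entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                nhi = max(hi, grid[ny][nx])
                nlo = min(lo, grid[ny][nx])
                nd = nhi - nlo            # barrier of the extended path
                if nd < dist[ny][nx]:
                    dist[ny][nx] = nd
                    heapq.heappush(heap, (nd, nhi, nlo, ny, nx))
    return dist
```

A pixel separated from the seed only by a high-intensity "barrier" gets a large distance, while any low-variation detour keeps the distance small — the property that the seed-position stability results above are concerned with.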

Discrete Shape Representation, Recognition and Analysis

Efficient Computation of the Outer Hull of a Discrete Path

We present here a linear time and space algorithm for computing the outer hull of any discrete path encoded by its Freeman chain code. The basic data structure is an enriched version of the one introduced by Brlek, Koskas and Provençal: using quadtrees for representing points in the discrete plane ℤ×ℤ with neighborhood links, deciding path intersection is achievable in linear time and space. Combining this with the well-known wall-follower algorithm for traversing mazes, we obtain the desired result in two passes, yielding a globally linear time and space algorithm. As a byproduct, the convex hull is obtained as well.

Srecko Brlek, Hugo Tremblay, Jérôme Tremblay, Romaine Weber
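For readers unfamiliar with the encoding, a Freeman chain code stores a discrete path as one digit per step, each digit selecting one of the eight neighbour directions. A minimal decoder (just the encoding itself — not the quadtree structure or the wall-follower pass of the paper):

```python
# Moves for the 8-connected Freeman chain code: digit k = direction k*45°.
MOVES = {0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (-1, 1),
         4: (-1, 0), 5: (-1, -1), 6: (0, -1), 7: (1, -1)}

def decode(chain, start=(0, 0)):
    """Return the lattice points visited by a Freeman chain code."""
    pts = [start]
    x, y = start
    for c in chain:
        dx, dy = MOVES[int(c)]
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

# "0246" walks right, up, left, down: a closed unit square.
```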

Voronoi-Based Geometry Estimator for 3D Digital Surfaces

We propose a robust estimator of geometric quantities such as normals, curvature directions and sharp features for 3D digital surfaces. This estimator only depends on the digitisation gridstep and is defined using a digital version of the Voronoi Covariance Measure, which exploits the robust geometric information contained in the Voronoi cells. It has been proved in [1] that the Voronoi Covariance Measure is resilient to Hausdorff noise. Our main theorem makes explicit the conditions under which this estimator is multigrid convergent for digital data. Moreover, we determine the parameters that maximise the convergence speed of this estimator when the normal vector is sought. Numerical experiments show that the digital VCM estimator reliably estimates normals, curvature directions and sharp features of 3D noisy digital shapes.

Louis Cuel, Jacques-Olivier Lachaud, Boris Thibert

An Arithmetical Characterization of the Convex Hull of Digital Straight Segments

In this paper, we arithmetically describe the convex hull of a digital straight segment by three recurrence relations. This characterization gives new insights into the combinatorial structure of digital straight segments of arbitrary length and intercept. It also leads to two on-line algorithms that compute a part of the convex hull of a given digital straight segment. They both run in constant space and constant time per vertex. Due to symmetries, they are enough to reconstruct the whole convex hull. Moreover, these two algorithms provide efficient solutions to the subsegment problem, which consists in computing the minimal parameters of a segment of a digital straight line of known parameters.

Tristan Roussillon

Parameter-Free and Multigrid Convergent Digital Curvature Estimators

In many geometry processing applications, the estimation of differential geometric quantities such as curvature or normal vector field is an essential step. Focusing on multigrid convergent estimators, most of them require a user specified parameter to define the scale at which the analysis is performed (size of a convolution kernel, size of local patches for polynomial fitting, etc). In a previous work, we have proposed a new class of estimators on digital shape boundaries based on Integral Invariants. In this paper, we propose new variants of these estimators which are parameter-free and ensure multigrid convergence in 2D. As far as we know, these are the first parameter-free multigrid convergent curvature estimators.

Jérémy Levallois, David Coeurjolly, Jacques-Olivier Lachaud

Freeman Digitization and Tangent Word Based Estimators

This paper deals with the digitization of smooth or regular curves (beyond algebraic, analytic or locally convex ones). The first part explains why the Freeman square box quantization is not well-defined for such curves, and discusses possible workarounds to deal with them. In the second part, we prove that first-order differential estimators (tangent, normal, length) based on tangent words are multi-grid convergent for any such regular curve, without assuming any form of convexity.

Thierry Monteil

Determination of Length and Width of a Line-Segment by Using a Hough Transform

The standard Hough transform does not provide the length and width of a line segment detected in an image; it just detects the normal parameters of the line. We present a novel method for determining also the length and width of a line segment by using the Hough transform. Our method uses statistical analysis of voting cells around a peak in the Hough space. In image space, the voting cells and voting values are analysed. The functional relationship between voting variance and voting angle is deduced. We approximate this relationship by a quadratic polynomial curve. In Hough space, the statistical variances of columns around a peak are computed and used to fit a quadratic polynomial function. The length and width of a line segment are determined simultaneously by solving the equations generated by comparing the corresponding coefficients of the two functions. We tested and verified the proposed method on simulated and real-world images. The experimental results demonstrate the accuracy of our novel method for determining the length and width of detected line segments.

Zezhong Xu, Bok-Suk Shin, Reinhard Klette
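The standard voting step that the method builds on can be sketched as follows, using the normal parameterization ρ = x·cos θ + y·sin θ. The accumulator sizes and resolutions here are arbitrary illustrative choices, and the statistical length/width analysis of the paper is not included:

```python
import math

def hough_peak(points, n_theta=180, rho_res=1.0, max_rho=100.0):
    """Vote edge points into a (theta, rho) accumulator and return the
    peak cell — the standard Hough transform, recovering only the
    normal parameters of the dominant line."""
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + max_rho) / rho_res))
            if 0 <= r < n_rho:
                acc[t][r] += 1
    # Peak of the accumulator (first maximum wins on ties).
    t, r = max(((t, r) for t in range(n_theta) for r in range(n_rho)),
               key=lambda tr: acc[tr[0]][tr[1]])
    return math.pi * t / n_theta, r * rho_res - max_rho
```

The paper's contribution starts where this sketch stops: instead of reading off only the peak, it fits the variance profile of the accumulator columns around the peak to recover length and width.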

Stable Shape Comparison of Surfaces via Reeb Graphs

Reeb graphs are combinatorial signatures that capture shape properties from the perspective of a chosen function. One of the most important questions is whether Reeb graphs are robust against function perturbations that may occur because of noise and approximation errors in the data acquisition process. In this work we tackle the problem of stability by providing an editing distance between Reeb graphs of orientable surfaces in terms of the cost necessary to transform one graph into another by edit operations. Our main result is that the editing distance between two Reeb graphs is upper bounded by the extent of the difference of the associated functions, measured by the maximum norm. This yields the stability property under function perturbations.

Barbara Di Fabio, Claudia Landi

About Multigrid Convergence of Some Length Estimators

An interesting property of digital curve length estimators is convergence toward the continuous length, and the associated convergence speed, as the digitization step h tends to 0. On the one hand, it has been proved that local estimators do not satisfy this convergence. On the other hand, DSS- and MLP-based estimators have been proved to converge, but only under some convexity and smoothness or polygonal assumptions. In this frame, a new estimator class, the so-called semi-local estimators, has been introduced by Daurat et al. in [4]. For this class, the pattern size depends on the resolution but not on the digitized function. The semi-local estimator convergence has been proved for smooth functions, with an optimal convergence speed that is a $\mathcal{O}(h^{\frac 1 2})$ without convexity assumption (here, optimal means with the best estimation parameter setting). A semi-local estimator subclass, which we call sparse estimators, is exhibited here. The sparse estimators are proved to have the same convergence speed as the semi-local estimators under weaker assumptions. Besides, if the continuous function that is digitized is concave, the sparse estimators are proved to have an improved optimal convergence speed. Furthermore, assuming a sequence of functions $G_h\colon h\mspace{1.0mu}\mathbb{Z} \to h\mspace{1.0mu}\mathbb{Z}$ discretizing a given Euclidean function as h tends to 0, the computational complexity of sparse length estimation in the optimal setting is also analysed.

Loïc Mazo, Étienne Baudrier

Discrete Tomography

Non-additive Bounded Sets of Uniqueness in ℤⁿ

A main problem in discrete tomography consists in looking for theoretical models which ensure uniqueness of reconstruction. To this end, lattice sets of points, contained in a multidimensional grid $\mathcal{A}=[m_1]\times[m_2]\times\dots \times[m_n]$ (where, for m ∈ ℕ, [m] = {0, 1, ..., m − 1}), are investigated by means of X-rays in a given set S of lattice directions. Without introducing any noise effect, one aims at finding the minimal cardinality of S which guarantees a solution to the uniqueness problem.

In a previous work the matter has been completely settled in dimension two, and later extended to higher dimensions. It turns out that n + 1 represents the minimal number of directions one needs in ℤⁿ (n ≥ 3), under the requirement that such directions span an n-dimensional subspace of ℤⁿ. Also, those sets of n + 1 directions have been explicitly characterized.

However, in view of applications, it might be quite difficult to decide whether the uniqueness problem has a solution when X-rays are taken in a set of more than two lattice directions. In order to get computationally simpler approaches, some prior knowledge is usually required on the object to be reconstructed. Powerful information is provided by additivity, since additive sets are reconstructible in polynomial time by using linear programming.

In this paper we compute the proportion of non-additive sets of uniqueness with respect to additive sets in a given grid $\mathcal{A}\subset \mathbb{Z}^n$, in the important case when coordinate directions are employed.

Sara Brunetti, Paolo Dulio, Carla Peri

Back-Projection Filtration Inversion of Discrete Projections

We present a new, robust discrete back-projection filtration algorithm to reconstruct digital images from close-to-minimal sets of arbitrarily oriented discrete projected views. The discrete projections are in the Mojette format, with either Dirac or Haar pixel sampling. The strong aliasing in the raw image reconstructed by direct back-projection is corrected via a de-convolution using the Fourier transform of the discrete point-spread function (PSF) that was used for the forward projection. The de-convolution is regularised by applying an image-sized digital weighting function to the raw PSF. These weights are obtained from the set of back-projected points that partially tile the image area to be reconstructed. This algorithm produces high quality reconstructions at and even below the Katz sufficiency limit, which defines a minimal criterion for projection sets that permit a unique discrete reconstruction for noise-free data. As the number of input discrete projected views increases, the PSF more fully tiles the discrete region to be reconstructed, and the de-convolution and its weighting mask become progressively less important. The algorithm then merges asymptotically with the perfect reconstruction method found by Servières et al. in 2004. The Servières approach, however, for which the PSF must exactly tile the full area of the reconstructed image, requires a much larger set of uniformly distributed projection angles for the same image data. The independence of each (back-)projected view makes our algorithm robust to random, symmetrically distributed noise. We present, as results, images reconstructed from close-to-minimal sets of projected view angles that are either uniformly distributed, randomly selected, or clustered about orthogonal axes.

Imants Svalbe, Andrew Kingston, Nicolas Normand, Henri Der Sarkissian

Discrete Tomography Reconstruction Algorithms for Images with a Blocking Component

We study a problem involving the reconstruction of an image from its horizontal and vertical projections in the case where some parts of these projections are unavailable. The goal is to model applications where part of the x-rays used for the analysis of an object are blocked by particularly dense components that do not allow the rays to pass through the material. This is a common issue in many tomographic scans, and while there are several heuristics that handle the problem quite efficiently in applications, the underlying theory has not been extensively developed. In this paper, we study the properties of consistency and uniqueness of this problem, and we propose an efficient reconstruction algorithm. We also show how this task can be reduced to a network flow problem, similarly to the standard reconstruction algorithm, allowing the determination of a solution even in the case where some pixels of the output image must have some prescribed values.

Stefano Bilotta, Stefano Brocchi

An Entropic Perturbation Approach to TV-Minimization for Limited-Data Tomography

The reconstruction problem of discrete tomography is studied using novel techniques from compressive sensing. Recent theoretical results of the authors make it possible to predict the number of measurements required for the unique reconstruction of a class of cosparse dense 2D and 3D signals in severely undersampled scenarios by convex programming. These results extend established ℓ₁-related theory, based on sparsity of the signal itself, to novel scenarios not covered so far, including tomographic projections of 3D solid bodies composed of few different materials. As a consequence, the large-scale optimization task based on total-variation minimization subject to tomographic projection constraints is considerably more complex than basic ℓ₁-programming for sparse regularization. We propose an entropic perturbation of the objective that makes it possible to apply efficient methodologies from unconstrained optimization to the perturbed dual program. Numerical results validate the theory for large-scale recovery problems of integer-valued functions that exceed the capacity of the commercial MOSEK software.

Andreea Deniţiu, Stefania Petra, Claudius Schnörr, Christoph Schnörr

Fourier Inversion of the Mojette Transform

The Mojette transform is a form of discrete Radon transform that maps a 2D image to a set of 1D projections. Several fast inversion methods exist, but those methods are ill-conditioned. Several robust (or well-conditioned) inversion methods exist, but they are slow. Ideally, we require an inversion scheme that is both fast and robust, to deal with noisy projections. Noisy projection data can arise from data that is corrupted in storage or by errors in data transmission, from quantisation errors in image compression, or through noisy acquisition of physical projections, such as in X-ray computed tomography. This paper presents a robust reconstruction method, performed in the Fourier domain, that is also fast.

Andrew Kingston, Heyang Li, Nicolas Normand, Imants Svalbe
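The forward Mojette projection underlying all of these inversion schemes is simple to state: for a direction (p, q) with gcd(p, q) = 1, each pixel adds its value to a bin indexed linearly by its coordinates. A minimal Dirac-sampling sketch (the bin convention b = −q·x + p·y is one common choice; others differ by sign):

```python
def mojette_projection(img, p, q):
    """Dirac-Mojette projection of a 2D array along direction (p, q):
    pixel (x, y) (column x, row y) adds its value to bin b = -q*x + p*y.
    """
    bins = {}
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            b = -q * x + p * y
            bins[b] = bins.get(b, 0) + v
    return bins
```

Direction (1, 0) sums the rows and (0, 1) sums the columns; oblique (p, q) directions produce more, shorter bins, which is why the Katz criterion counts the |p| and |q| values of the chosen direction set.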

Uniqueness Regions under Sets of Generic Projections in Discrete Tomography

In Discrete Tomography, objects are reconstructed by means of their projections along certain directions. It is known that, for any given lattice grid, special sets of four valid projections exist that ensure uniqueness of reconstruction in the whole grid. However, in real applications, some physical or mechanical constraints could prevent the use of such theoretical uniqueness results, and one must employ projections satisfying further constraints. It turns out that global uniqueness can then no longer be guaranteed, even though uniqueness might still be preserved in some special areas included in the grid.

In this paper we address this question of local uniqueness. In particular, we focus on the problem of characterizing, in a sufficiently large rectangular lattice grid, the sub-region which is uniquely determined under a given set of generic projections. It turns out that the regions of local uniqueness consist of some curious twisting of rectangular areas. This deserves special interest even from a purely combinatorial point of view, and can be explained by means of numerical relations among the entries of the employed directions.

Paolo Dulio, Andrea Frosini, Silvia M. C. Pagani

Adaptive Grid Refinement for Discrete Tomography

Discrete tomography has proven itself as a powerful approach to image reconstruction from limited data. In recent years, algebraic reconstruction methods have been applied successfully to a range of experimental data sets. However, the computational cost of such reconstruction techniques currently prevents routine application to large data-sets. In this paper we investigate the use of adaptive refinement on QuadTree grids to reduce the number of pixels (or voxels) needed to represent an image. Such locally refined grids match well with the domain of discrete tomography as they are optimally suited for representing images containing large homogeneous regions. Reducing the number of pixels ultimately promises a reduction in both the computation time of discrete algebraic reconstruction techniques as well as reduced memory requirements. At the same time, a reduction of the number of unknowns can reduce the influence of noise on the reconstruction. The resulting refined grid can be used directly for further post-processing (such as segmentation, feature extraction or metrology). The proposed approach can also be used in a non-adaptive manner for region-of-interest tomography. We present a computational approach for automatic determination of the locations where the grid must be defined. We demonstrate how algebraic discrete tomography algorithms can be constructed based on the QuadTree data structure, resulting in reconstruction methods that are fast, accurate and memory efficient.

Tristan van Leeuwen, K. Joost Batenburg

Morphological Analysis

Exact Evaluation of Stochastic Watersheds: From Trees to General Graphs


The stochastic watershed is a method for identifying salient contours in an image, with applications to image segmentation. The method computes a probability density function (PDF), assigning to each piece of contour in the image the probability to appear as a segmentation boundary in seeded watershed segmentation with randomly selected seed points. Contours that appear with high probability are assumed to be more important. This paper concerns an efficient method for computing the stochastic watershed PDF exactly, without performing any actual seeded watershed computations. A method for exact evaluation of stochastic watersheds was proposed by Meyer and Stawiaski (2010). Their method does not operate directly on the image, but on a compact tree representation where each edge in the tree corresponds to a watershed partition of the image elements. The output of the exact evaluation algorithm is thus a PDF defined over the edges of the tree. While the compact tree representation is useful in its own right, it is in many cases desirable to convert the results from this abstract representation back to the image, e.g., for further processing. Here, we present an efficient linear time algorithm for performing this conversion.

Filip Malmberg, Bettina Selig, Cris L. Luengo Hendriks

On Making nD Images Well-Composed by a Self-dual Local Interpolation

Natural and synthetic discrete images are generally not well-composed, leading to many topological issues: connectivities in binary images are not equivalent, the Jordan separation theorem no longer holds, and so on. Conversely, making images well-composed solves those problems and gives access to many powerful tools already known in mathematical morphology, such as the Tree of Shapes, which is of principal interest to us. In this paper, we present two main results: a characterization of 3D well-composed gray-valued images, and a counter-example showing that no local self-dual interpolation satisfying a classical set of properties makes images well-composed with one subdivision in 3D, as soon as we choose the mean operator to interpolate in 1D. Then, we briefly discuss various constraints that could be interesting to change in order to make the problem solvable in nD.

Nicolas Boutry, Thierry Géraud, Laurent Najman

Discrete Modelling and Visualization

Implicit Digital Surfaces in Arbitrary Dimensions

In this paper we introduce a notion of digital implicit surface in arbitrary dimensions. The digital implicit surface is the result of a morphology-inspired digitization of an implicit surface {x ∈ ℝⁿ : f(x) = 0}, which is the boundary of a given closed subset of ℝⁿ, {x ∈ ℝⁿ : f(x) ≤ 0}. Under some constraints, the digital implicit surface has some interesting properties, such as tunnel-freeness. Furthermore, for a large class of digital implicit surfaces, there exists a very simple analytical characterization.

Jean-Luc Toutant, Eric Andres, Gaelle Largeteau-Skapin, Rita Zrour

Algorithms for Fast Digital Straight Segments Union

Given two Digital Straight Segments (DSS for short) of known minimal characteristics, we investigate the union of these DSSs: is it still a DSS? If yes, what are its minimal characteristics? We show that the problem is actually easy and can be solved in, at worst, logarithmic time using a state-of-the-art algorithm. We moreover propose a new algorithm of logarithmic worst-case complexity based on arithmetical properties. When the two DSSs are connected, the time complexity of this algorithm is lowered further still, and experiments show that it outperforms the state-of-the-art one in all cases.

Isabelle Sivignon

Digital Geometry from a Geometric Algebra Perspective

To model Euclidean spaces in computerized geometric calculations, the Geometric Algebra framework is becoming popular in computer vision, image analysis, etc. Focusing on the Conformal Geometric Algebra, the claim of the paper is that this framework is useful in digital geometry too. To illustrate this, the paper shows how the Conformal Geometric Algebra allows us to simplify the description of digital objects, such as k-dimensional circles in any n-dimensional discrete space. Moreover, the notion of duality is an inherent part of Geometric Algebra. This is particularly useful, since many algorithms in digital geometry are based on this notion. We illustrate this important aspect with the definition of k-dimensional spheres.

Lilian Aveneau, Laurent Fuchs, Eric Andres

Discrete and Combinatorial Tools for Image Segmentation and Analysis

Segmentation of 3D Articulated Components by Slice-Based Vertex-Weighted Reeb Graph

A fast and efficient algorithm for segmentation of the articulated components of 3D objects is proposed. The algorithm is marked by several novel features, such as DCEL-based fast orthogonal slicing, a weighted Reeb graph with slice areas as vertex weights, and graph cut by exponential averaging. Each of the three sets of orthogonal slices obtained from the object is represented by a vertex-weighted Reeb graph of low complexity, as the slicing is done with an appropriate grid resolution. Each linear subgraph in a Reeb graph is traversed from its leaf node up to an articulation node, or up to a node whose weight exceeds a dynamically set threshold based on exponential averaging of the predecessor weights in the concerned subgraph. The nodes visited in each linear subgraph are marked by a unique component number, thereby aiding the inverse mapping for marking the articulated regions during final segmentation. Theoretical analysis shows that the algorithm runs faster for objects with smaller surface area and for larger grid resolutions. The algorithm is stable, invariant to rotation, and leads to natural segmentation, as evidenced by experimentation with a varied dataset.

Nilanjana Karmakar, Partha Bhowmick, Arindam Biswas
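The "graph cut by exponential averaging" step described above can be sketched as follows; the smoothing factor, cut multiplier, and function name are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (parameters assumed): traverse a linear subgraph of the Reeb
# graph from a leaf, maintain an exponential moving average (EMA) of the
# predecessor vertex weights (slice areas), and cut when the current weight
# exceeds the dynamically set threshold c * EMA.

def cut_linear_subgraph(weights, alpha=0.5, c=2.0):
    """Return indices of vertices kept before the cut.

    weights: slice areas along a leaf-to-articulation chain.
    alpha:   EMA smoothing factor; c: cut multiplier (both illustrative).
    """
    if not weights:
        return []
    kept, ema = [0], float(weights[0])
    for i, w in enumerate(weights[1:], start=1):
        if w > c * ema:          # sudden weight jump -> cut the chain here
            break
        ema = alpha * w + (1 - alpha) * ema
        kept.append(i)
    return kept

# A thin arm (areas ~10) meeting a torso slice (area 80): the cut isolates the arm.
print(cut_linear_subgraph([10, 11, 9, 12, 80]))  # -> [0, 1, 2, 3]
```

The EMA makes the threshold adapt to the local slice areas, so thin and thick articulated parts are cut by the same rule.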

Taylor Optimal Kernel for Derivative Estimation

In many geometry processing applications, the estimation of differential geometric quantities such as curvature or the normal vector field is an essential step. In this paper, we investigate new estimators for the first and second order derivatives of a real continuous function, based on convolution of the values of noisy digitizations of that function. More precisely, we prove multigrid convergence of the estimators, with a maximal error bound in the noise-free case that depends on the derivative order (first or second) and on a parameter to be chosen ad libitum. We then use these derivative estimators to build estimators for the normal vectors and curvature of a planar curve, and give experimental evidence of the practical usefulness of all these estimators. Notice that these estimators have a better complexity than previously introduced estimators of the same type (cf. [4] and [8]).

Henri-Alex Esbelin, Remy Malgouyres
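The convolution-based estimation idea above can be sketched with a simple antisymmetric kernel; the weights below are an illustrative choice, not the Taylor-optimal kernel of the paper.

```python
# Minimal sketch: estimate f'(x_i) from digitized samples f(x_i) on a grid of
# step h by an antisymmetric convolution (1/h) * sum_k w_k (f_{i+k} - f_{i-k}).
# The weights w_k = k / (2 * sum k^2) are illustrative; they reproduce the
# derivative of affine functions exactly.

def first_derivative(samples, h, m):
    """Return derivative estimates at the interior indices m .. len-m-1."""
    norm = 2.0 * sum(k * k for k in range(1, m + 1))
    out = []
    for i in range(m, len(samples) - m):
        acc = sum(k * (samples[i + k] - samples[i - k]) for k in range(1, m + 1))
        out.append(acc / (norm * h))
    return out

h = 0.1
fs = [3.0 * (i * h) + 1.0 for i in range(20)]   # f(x) = 3x + 1, so f' = 3
print(first_derivative(fs, h, 2)[:3])           # each value ~ 3.0 (up to rounding)
```

Larger kernel supports (larger m) average out digitization noise at the cost of resolution; choosing the weights optimally is precisely the subject of the paper.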

On Finding Spherical Geodesic Paths and Circles in ℤ³


A discrete spherical geodesic path between two voxels lying on a discrete sphere is a 1-connected shortest path between them, comprising voxels of the discrete sphere intersected by the real plane passing through the two voxels and the center of the sphere. We show that the set of sphere voxels intersected by the aforesaid real plane always contains a 1-connected cycle passing through the two voxels, and that each voxel in this set lies within a bounded isothetic distance from the concerned plane. Hence, to compute the path, the algorithm starts from one endpoint and iteratively computes each voxel of the path from its predecessor. A novel number-theoretic property and the 48-symmetry of the discrete sphere are used for searching the candidate voxels comprising the path. The algorithm's time and space complexities are both linear in the length of the path. It can be extended to construct 1-connected discrete 3D circles of arbitrary orientations, specified by a few appropriate input parameters. Experimental results and related analysis demonstrate its efficiency and versatility.

Ranita Biswas, Partha Bhowmick
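The voxel-filtering step above admits a short sketch: with the sphere centered at the origin, the geodesic plane through two voxels s and t has normal s × t, and a sphere voxel is a path candidate when its isothetic distance from that plane is small. The helper names and the example bound are illustrative assumptions.

```python
# Minimal sketch (names assumed): isothetic distance of a voxel from the plane
# n . x = 0, where n = s x t is the normal of the plane through voxels s, t and
# the sphere center (taken as the origin).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def isothetic_distance(v, n):
    """Axis-parallel (L_inf-style) distance of voxel v from the plane n . x = 0."""
    return abs(sum(vi * ni for vi, ni in zip(v, n))) / max(abs(c) for c in n)

s, t = (5, 0, 0), (0, 5, 0)
n = cross(s, t)                            # normal (0, 0, 25): the plane z = 0
print(isothetic_distance((3, 4, 0), n))    # on the plane -> 0.0
print(isothetic_distance((3, 4, 1), n))    # one voxel layer off -> 1.0
```

Thresholding this distance selects the band of sphere voxels through which the 1-connected geodesic path is traced.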

Discrete Curve Evolution on Arbitrary Triangulated 3D Mesh

The Discrete Curve Evolution (DCE) algorithm is used to eliminate distortions in object contours while preserving the perceptual appearance at a level sufficient for object recognition. This method is widely used for shape similarity measures, skeleton pruning, and salient point detection of binary objects on a regular 2D grid. Our paper aims at describing a new DCE algorithm for an arbitrary triangulated 3D mesh. The difficulty lies in the calculation of a vertex cost function for an object contour, as the notion of Euclidean distance cannot be used on a 3D surface. It is also very difficult to compute a geodesic angle between lines connecting vertices. We introduce a new cost function for border vertices which is based only on geodesic distances. We apply the proposed algorithm on vertex sets to compute an approximation of original contours, extract salient points, and prune skeletons. The experimental results justify the robustness of our method with respect to noise distortions.

Sergii Poltaretskyi, Jean Chaoui, Chafiaa Hamitouche-Djabou
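The classical 2D DCE that the abstract starts from can be sketched as follows; this is the standard regular-grid formulation with relevance K(v) = β(v)·d₁·d₂/(d₁+d₂), not the geodesic cost function the paper introduces for meshes.

```python
# Minimal sketch of classical 2D Discrete Curve Evolution: each polygon vertex
# gets the standard relevance K(v) = beta(v) * d1 * d2 / (d1 + d2), where beta
# is the turn angle at v, and the least relevant vertex is removed until the
# requested number of salient points remains.

import math

def relevance(prev, v, nxt):
    d1 = math.dist(prev, v)
    d2 = math.dist(v, nxt)
    a1 = math.atan2(v[1] - prev[1], v[0] - prev[0])
    a2 = math.atan2(nxt[1] - v[1], nxt[0] - v[0])
    beta = abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)  # turn angle
    return beta * d1 * d2 / (d1 + d2)

def dce(polygon, keep):
    pts = list(polygon)
    while len(pts) > keep:
        costs = [relevance(pts[i - 1], pts[i], pts[(i + 1) % len(pts)])
                 for i in range(len(pts))]
        pts.pop(costs.index(min(costs)))   # drop the least relevant vertex
    return pts

# A square with one nearly-collinear midpoint: DCE removes the midpoint first.
square = [(0, 0), (2, 0.1), (4, 0), (4, 4), (0, 4)]
print(dce(square, 4))  # -> the four corners
```

On a triangulated 3D mesh neither the turn angle β nor the Euclidean lengths d₁, d₂ are directly available, which is exactly the difficulty the paper addresses by a cost based only on geodesic distances.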

