
2020 | Book

Combinatorial Image Analysis

20th International Workshop, IWCIA 2020, Novi Sad, Serbia, July 16–18, 2020, Proceedings

Edited by: Dr. Tibor Lukić, Prof. Dr. Reneta P. Barneva, Prof. Dr. Valentin E. Brimkov, Lidija Čomić, Dr. Nataša Sladoje

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 20th International Workshop on Combinatorial Image Analysis, IWCIA 2020, held in Novi Sad, Serbia, in July 2020.

The 20 full papers presented were carefully reviewed and selected from 23 submissions. The papers are grouped into two sections. The first one includes twelve papers devoted to theoretical foundations of combinatorial image analysis, including digital geometry and topology, array grammars, picture languages, digital tomography, and other technical tools for image analysis. The second part includes eight papers presenting application-driven research on topics such as image repairing, annotation of images, image reconstruction, forgery detection, and dealing with noise in images.

Table of Contents

Frontmatter

Theoretical Foundations

Frontmatter
Euler Well-Composedness
Abstract
In this paper, we define a new flavour of well-composedness, called Euler well-composedness, in the general setting of regular cell complexes: a regular cell complex is Euler well-composed if the Euler characteristic of the link of each boundary vertex is 1. A cell decomposition of a picture I is a pair of regular cell complexes \(\big (K(I),K(\bar{I})\big )\) such that K(I) (resp. \(K(\bar{I})\)) is a topological and geometrical model representing I (resp. its complement \(\bar{I}\)). A cell decomposition of a picture I is then self-dual Euler well-composed if both K(I) and \(K(\bar{I})\) are Euler well-composed. We prove that, first, self-dual Euler well-composedness is equivalent to digital well-composedness in dimensions 2 and 3, and second, in dimension 4, self-dual Euler well-composedness implies digital well-composedness, though the converse is not true.
Nicolas Boutry, Rocio Gonzalez-Diaz, Maria-Jose Jimenez, Eduardo Paluzo-Hildago
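The Euler characteristic used in this definition is the alternating sum of the numbers of cells per dimension. A minimal sketch of that computation, assuming a complex is given simply as lists of cells per dimension (a hypothetical representation, not the authors' code):

```python
# Hypothetical sketch: Euler characteristic of a regular cell complex
# represented as a list of cell lists, one list per dimension.
def euler_characteristic(cells_by_dim):
    """chi = sum_d (-1)^d * (number of d-cells)."""
    return sum((-1) ** d * len(cells) for d, cells in enumerate(cells_by_dim))

# A filled square as a regular cell complex: 4 vertices, 4 edges, 1 face.
square = [["v0", "v1", "v2", "v3"],
          ["e01", "e12", "e23", "e30"],
          ["f"]]
print(euler_characteristic(square))  # 4 - 4 + 1 = 1
```

Euler well-composedness then asks that this value equal 1 for the link of every boundary vertex.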
On Connectedness of Discretized Sets
Abstract
Constructing a discretization of a given set is a major problem in various theoretical and applied disciplines. An offset discretization of a set X is obtained by taking the integer points inside a closed neighborhood of X of a certain radius. In this note we determine a minimum threshold for the offset radius, beyond which the discretization of an arbitrary (possibly disconnected) set is always connected. The results hold for a broad class of disconnected subsets of \(\mathbb {R}^n\), and generalize several previous results.
Boris Brimkov, Valentin E. Brimkov
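For points in the plane, the offset discretization and a connectedness check can be sketched as follows. This is a hypothetical illustration, not the paper's construction; the use of 4-adjacency and the particular two-point set are assumptions:

```python
import math
from itertools import product

def offset_discretization(points, r):
    """Integer points within the closed r-neighborhood of the finite set X."""
    out = set()
    for (x, y) in points:
        for ix, iy in product(range(math.floor(x - r), math.ceil(x + r) + 1),
                              range(math.floor(y - r), math.ceil(y + r) + 1)):
            if (ix - x) ** 2 + (iy - y) ** 2 <= r * r:
                out.add((ix, iy))
    return out

def is_4_connected(cells):
    """Depth-first search over 4-adjacency."""
    if not cells:
        return True
    start = next(iter(cells))
    seen, stack = {start}, [start]
    while stack:
        cx, cy = stack.pop()
        for n in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if n in cells and n not in seen:
                seen.add(n)
                stack.append(n)
    return seen == cells

X = [(0.0, 0.0), (2.5, 0.0)]                          # a disconnected two-point set
print(is_4_connected(offset_discretization(X, 0.6)))  # small radius: False
print(is_4_connected(offset_discretization(X, 1.5)))  # large radius: True
```

The paper's contribution is a proven threshold on the radius beyond which such a discretization is guaranteed connected.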
Persistent Homology as Stopping-Criterion for Voronoi Interpolation
Abstract
In this study, Voronoi interpolation is used to interpolate a set of points drawn from a topological space with higher homology groups on its filtration. The technique is based on the Voronoi tessellation, which induces a natural dual map to the Delaunay triangulation. We take advantage of this fact by calculating the persistent homology after each iteration, to capture the changing topology of the data. The boundary points are identified as critical. The bottleneck and Wasserstein distances serve as measures of quality between the original point set and the interpolation. If the norm of the two distances exceeds a heuristically determined threshold, the algorithm terminates. We give the theoretical basis for this approach and justify its validity with numerical experiments.
Luciano Melodia, Richard Lenz
Atomic Super-Resolution Tomography
Abstract
We consider the problem of reconstructing a nanocrystal at atomic resolution from electron microscopy images taken at a few tilt angles. A popular reconstruction approach called discrete tomography confines the atom locations to a coarse spatial grid, which is inspired by the physical a priori knowledge that atoms in a crystalline solid tend to form regular lattices. Although this constraint has proven to be powerful for solving this very under-determined inverse problem in many cases, its key limitation is that, in practice, defects may occur that cause atoms to deviate from regular lattice positions. Here we propose a grid-free discrete tomography algorithm that allows for continuous deviations of the atom locations, similar to super-resolution approaches for microscopy. The new formulation allows us to define atomic interaction potentials explicitly, which results in both a meaningful and powerful incorporation of the available physical a priori knowledge about the crystal's properties. In computational experiments, we compare the proposed grid-free method to established grid-based approaches and show that our approach can indeed recover the atom positions more accurately for common lattice defects.
Poulami Somanya Ganguly, Felix Lucka, Hermen Jan Hupkes, Kees Joost Batenburg
Characterizations of Simple Points on the Body-Centered Cubic Grid
Abstract
A frequently investigated problem in various applications of binary image processing is to ensure the topology preservation of image operators, where the concept of simple points plays a key role. The literature primarily focuses on 2D and 3D images that are sampled on the conventional square and cubic grids, respectively. This paper presents some new characterizations of simple points on the body-centered cubic grid.
Péter Kardos
A 4D Counter-Example Showing that DWCness Does Not Imply CWCness in nD
Abstract
In this paper, we prove that the two flavours of well-composedness called Continuous Well-Composedness (shortly CWCness), stating that the boundary of the continuous analog of a discrete set is a manifold, and Digital Well-Composedness (shortly DWCness), stating that a discrete set does not contain any critical configuration, are not equivalent in dimension 4. To prove this, we exhibit the example of a configuration of 8 tesseracts (4D cubes) sharing a common corner (vertex), which is DWC but not CWC. This result is surprising since we know that CWCness and DWCness are equivalent in 2D and 3D. To reach our goal, we use local homology.
Nicolas Boutry, Rocio Gonzalez-Diaz, Laurent Najman, Thierry Géraud
3D-Array Token Petri Nets Generating Tetrahedral Picture Languages
Abstract
The study of two-dimensional picture languages has wide application in image analysis and pattern recognition [5, 7, 13, 17]. Various models for generating different picture languages are available in the literature, such as grammars, automata, P systems and Petri Nets [6, 8, 10–12, 14, 15, 18]. In this paper we consider Petri Nets generating tetrahedral picture languages. The generated patterns are new and applicable in floor design, wall design and tiling. We compare the generative power of these Petri Nets with that of other recent models [9, 16] developed by our group.
T. Kalyani, K. Sasikala, D. G. Thomas, Thamburaj Robinson, Atulya K. Nagar, Meenakshi Paramasivan
Simulating Parallel Internal Column Contextual Array Grammars Using Two-Dimensional Parallel Restarting Automata with Multiple Windows
Abstract
The connection between picture languages and restarting automata was established in Otto (2014). An interesting class of picture languages generated by parallel contextual array grammars, with applications in image generation and analysis, was studied in Subramanian et al. (2008). In this paper, we introduce a variant of two-dimensional restarting automata that accepts a subclass of parallel internal contextual array languages. We show that these automata can simulate parallel internal column contextual array grammars in reverse order.
Abhisek Midya, Frits Vaandrager, D. G. Thomas, Chandrima Ghosh
Grayscale Uncertainty of Projection Geometries and Projections Sets
Abstract
In some cases of tomography, the projection acquisition process has limits, and thus one cannot gain enough projections for an exact reconstruction. The low number of projections then leads to a lack of information, and to uncertainty in the reconstructions. In practice this means that the pixel values of the reconstruction are not uniquely determined by the measured data and thus can take variable values. In this paper, we provide a theoretically proven uncertainty measure that can be used for measuring the variability of pixel values in grayscale reconstructions. The uncertainty values are based on linear algebra and measure the slopes of the hyperplane of solutions in the algebraic formulation of tomography. The methods can also be applied to any linear equation system that satisfies a given set of conditions. Using the uncertainty measure, we also derive upper and lower limits on the possible pixel values in tomographic reconstructions.
László G. Varga, Gábor Lékó, Péter Balázs
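The idea of per-pixel variability over the solution set can be illustrated on a toy underdetermined system. The following is a hypothetical NumPy sketch, not the paper's measure; the 2×3 system and the null-space projection length as a variability proxy are assumptions:

```python
import numpy as np

# Toy underdetermined "tomography" system A x = b: three unknown pixels,
# two measurements; A is assumed to have full row rank.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[len(s):]            # rows of Vt spanning null(A)

# Length of the projection of each unit vector e_i onto null(A):
# 0 means pixel i is uniquely determined by the measurements,
# larger values mean the pixel can vary more among valid solutions.
variability = np.linalg.norm(null_basis, axis=0)
print(np.round(variability, 3))     # pixel 0 is determined, pixels 1 and 2 are not
```

Here the null space is spanned by (0, 1, -1)/sqrt(2), so only pixels 1 and 2 can trade value against each other while all measurements stay satisfied.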
Finding the Maximum Empty Axis-Parallel Rectangular Annulus
Abstract
An annulus is a ring-shaped region between two concentric disks in the same plane. Analogous regions can be defined for other geometric shapes; for example, a rectangular annulus is the area between two rectangles, one enclosing the other. An axis-parallel rectangular annulus is an annulus in which the sides of the rectangles are parallel to the coordinate axes. This paper presents a combinatorial technique that finds the largest empty axis-parallel rectangular annulus for a given set of n points in \(O(n\log n)\) time. It uses two balanced binary search trees to store the points and reduces the complexity of the existing algorithm in the literature.
Raina Paul, Apurba Sarkar, Arindam Biswas
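The emptiness condition itself is easy to state. A hypothetical brute-force helper (not the paper's algorithm, which avoids this linear scan with balanced search trees; rectangle and boundary conventions are assumptions):

```python
# Hypothetical helper: an axis-parallel rectangular annulus is the region
# inside the outer rectangle but outside the inner one; it is "empty" if no
# input point falls in that ring. Rectangles are (x1, y1, x2, y2), closed.
def in_rect(p, rect):
    x1, y1, x2, y2 = rect
    x, y = p
    return x1 <= x <= x2 and y1 <= y <= y2

def annulus_empty(points, outer, inner):
    return not any(in_rect(p, outer) and not in_rect(p, inner) for p in points)

pts = [(2, 2), (5, 5)]
print(annulus_empty(pts, (0, 0, 6, 6), (1, 1, 3, 3)))  # (5,5) lies in the ring: False
print(annulus_empty(pts, (0, 0, 6, 6), (1, 1, 5, 5)))  # both points inside inner: True
```

The algorithmic challenge addressed by the paper is maximizing the annulus area over all such empty configurations without testing each candidate against every point.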
Parallel Contextual Array Insertion Deletion Grammar and (Context-Free : Context-Free) Matrix Grammar
Abstract
Siromoney et al. (1973) introduced Siromoney matrix grammars, which are of sequential-parallel type in the sense that first a horizontal string of nonterminals is derived sequentially by applying the horizontal production rules, and then vertical productions are applied in parallel to obtain the intended two-dimensional picture. In 1999, Radhakrishnan et al. introduced and studied a variant of Siromoney matrix grammars called (X:Y)MG, where \(X, Y \in \{Context-Free (CF), Regular(R)\}\). James et al. (2018) introduced the Parallel Contextual Array Insertion Deletion Grammar (PCAIDG) to generate two-dimensional array languages using insertion and deletion operations and parallel contextual mappings. In this paper, we prove that the family of languages generated by PCAIDGs properly includes the family (CF : CF)ML.
S. Jayasankar, D. G. Thomas, S. James Immanuel, Meenakshi Paramasivan, T. Robinson, Atulya K. Nagar
Digital Hyperplane Fitting
Abstract
This paper addresses the problem of fitting a hyperplane to discrete points in any dimension (i.e. in \(\mathbb {Z}^d\)). For this purpose, we consider a digital model of a hyperplane, namely the digital hyperplane, and present a combinatorial approach to find the optimal solution of the fitting problem. The method consists in computing all possible digital hyperplanes from a set \(\mathbf {S}\) of n points; an exhaustive search then finds the optimal hyperplane that best fits \(\mathbf {S}\). The method has, however, a high complexity of \(O(n^d)\), and thus cannot be applied to big datasets. To overcome this limitation, we propose another method relying on the Delaunay triangulation of \(\mathbf {S}\). By generating and verifying only the digital hyperplanes arising from the elements of the triangulation, rather than all possible ones, we obtain a lower complexity of \(O(n^{\lceil \frac{d}{2} \rceil + 1})\). Experiments in 2D, 3D and 4D illustrate the efficiency of the proposed method.
Phuc Ngo
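In 2D, the exhaustive idea reduces to a brute-force digital line fit. A hypothetical toy sketch, not the paper's algorithm; the width convention, the bounded intercept scan, and the sample point set are all assumptions:

```python
from itertools import combinations

# A 2D digital line with integer normal (a, b), intercept c and width w is
# the set of integer points (x, y) satisfying 0 <= a*x + b*y + c < w.
def inliers(points, a, b, c, w):
    return sum(1 for (x, y) in points if 0 <= a * x + b * y + c < w)

def fit_digital_line(points, w=1):
    """Exhaustively try normals induced by point pairs (toy O(n^2) scan)."""
    best = (0, None)
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a, b = y1 - y2, x2 - x1          # normal of the line through the pair
        for c in range(-50, 50):         # coarse intercept scan (toy bound)
            score = inliers(points, a, b, c, w)
            if score > best[0]:
                best = (score, (a, b, c))
    return best

pts = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (0, 3)]
score, params = fit_digital_line(pts, w=2)
print(score, params)                     # 5 of the 6 points fit one digital line
```

The paper's Delaunay-based method prunes this search by only examining hyperplanes supported by triangulation elements.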

Methods and Applications

Frontmatter
Repairing Binary Images Through the 2D Diamond Grid
Abstract
A 2D binary image is well-composed if it does not contain a \(2\times 2\) configuration of two diagonal black and two diagonal white squares. We propose a simple repairing algorithm to construct two well-composed images \(I_4\) and \(I_8\) starting from an image I, and we prove that \(I_4\) and \(I_8\) are homotopy equivalent to I with 4- and 8-adjacency, respectively. This is achieved by passing from the original square grid to another one, rotated by \(\pi /4\), whose pixels correspond to the original pixels and to their vertices. The images \(I_4\) and \(I_8\) are twice the size of the image I. Experimental comparisons and applications are also shown.
Lidija Čomić, Paola Magillo
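The forbidden configuration is straightforward to test for. A hypothetical check (the well-composedness criterion from the abstract, not the paper's repairing algorithm itself):

```python
# A 2D binary image is well-composed iff no 2x2 block has its two diagonal
# pixels black and the other two white (a "checkerboard" configuration).
def is_well_composed(img):
    rows, cols = len(img), len(img[0])
    for i in range(rows - 1):
        for j in range(cols - 1):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            if a == d and b == c and a != b:   # checkerboard 2x2 block
                return False
    return True

good = [[1, 1, 0],
        [1, 1, 0]]
bad  = [[1, 0],
        [0, 1]]
print(is_well_composed(good), is_well_composed(bad))  # True False
```

The repairing algorithm of the paper eliminates every such configuration while preserving homotopy type.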
Transmission Based Adaptive Automatic Tube Voltage Selection for Computed Tomography
Abstract
Computed Tomography (CT) is a widely used x-ray based imaging modality in radiodiagnostics and non-destructive testing. For medical as well as economic and environmental reasons, it is desirable to reduce the radiation dose and to optimize the energy used for acquiring x-ray projection images. In particular, for elongated objects, radiation with a constant energy spectrum may not provide realistic information about the interior of the examined object. In this paper we provide an adaptive tube voltage selection method, which determines the proper amount of radiation energy on-the-fly during the acquisition, based on projection information. By experiments on software phantom images, we show that this adaptive approach can produce CT images of better quality while consuming less energy.
Gábor Lékó, Péter Balázs
MicrAnt: Towards Regression Task Oriented Annotation Tool for Microscopic Images
Abstract
Annotating a dataset for training a supervised machine learning algorithm is intensive in both time and annotator attention. Our goal was to create a tool that enables annotating a dataset with minimal demands on an expert's time. Inspired by applications such as Tinder, we have created an annotation tool for describing microscopic images. A graphical user interface is used to select, from a pair of images, the one with the higher value of the examined parameter. Two experiments were performed. The first compares the annotation speed of our application with that of a commonly used tool for processing microscopic images. In the second, texture descriptions were compared with annotations from the MicrAnt application and from the commonly used application. The results show that the processing time using our application is three times lower, and the Spearman coefficient increases by 0.05, compared with the commonly used application. We have thus shown that annotations produced with our application increase the correlation between the studied parameter and texture descriptors, compared with manual annotations.
Miroslav Jirik, Vladimira Moulisova, Claudia Schindler, Lenka Cervenkova, Richard Palek, Jachym Rosendorf, Janine Arlt, Lukas Bolek, Jiri Dejmek, Uta Dahmen, Kamila Jirikova, Ivan Gruber, Vaclav Liska, Milos Zelezny
Graph Cuts Based Tomography Enhanced by Shape Orientation
Abstract
This paper considers graph cuts based tomography reconstruction methods in the binary and multi-gray-level cases. An energy-minimization based reconstruction method for binary tomography is introduced. This approach combines graph cuts with a gradient based method, and applies shape orientation as a priori information. The new method is capable of reconstruction when only a limited number of projection views is available. Results of an experimental evaluation of the considered graph cuts type reconstruction methods, for both binary and multi-level tomography, are presented.
Marina Marčeta, Tibor Lukić
Dealing with Noise in Cluster Pattern Interface
Abstract
Cluster Pattern Interface (CLUSPI) [6] is an original 2D encoding and a related technology for direct point-and-click interfaces. It allows determining which objects printed on a surface the user is pointing at. The surface is covered by a layer of code composed of tiny dots, which blends with the original surface texture. In practice, however, some of the dots of the code may be damaged or obscured, which creates decoding challenges. In this work we develop theoretical solutions to some of the challenges related to decoding noisy patterns.
Valentin E. Brimkov, Reneta P. Barneva, Kamen Kanev
Local Q-Convexity Histograms for Shape Analysis
Abstract
In this paper we propose a novel local shape descriptor based on Q-convexity histograms. We investigate three variants: (1) focusing only on the background points, (2) examining all the points, and (3) omitting the zero bin. We study the properties of the variants on a shape dataset and on a texture dataset. In an illustrative example, we compare the classification accuracy of the introduced local descriptor to its global counterpart, and also to a variant of Local Binary Patterns, which is similar to our descriptor in that its histogram collects frequencies of local configurations. We show that our descriptor can in many cases reach higher classification accuracy than the others.
Judit Szűcs, Péter Balázs
k-Attempt Thinning
Abstract
Thinning is a frequently used approach to produce all kinds of skeleton-like shape features in a topology-preserving way. It is an iterative object reduction: some border points of binary objects are deleted, and the entire process is repeated until stability is reached. In the conventional implementation of thinning algorithms, we have to investigate the deletability of all border points in each iteration step. In this paper, we introduce the concept of k-attempt thinning (\(k\ge 1\)). In the case of a k-attempt algorithm, if a border point ‘survives’ at least k successive iterations, it is ‘immortal’ (i.e., it belongs to the produced feature). We propose a computationally efficient implementation scheme for k-attempt thinning. It is shown that an existing parallel thinning algorithm is 5-attempt, and the advantage of the new implementation scheme over the conventional one is also illustrated.
Kálmán Palágyi, Gábor Németh
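The k-attempt bookkeeping can be sketched generically. This is a hypothetical skeleton of the implementation scheme, not the paper's algorithm: `is_border` and `is_deletable` are placeholder predicates standing in for an actual topology-preserving deletability test, and the 1D demo is an assumption:

```python
# Hypothetical skeleton of the k-attempt scheme: a border point that fails
# the deletability test keeps a survival counter; after k failed attempts
# it is marked immortal and is never re-examined in later iterations.
def k_attempt_thinning(obj, is_border, is_deletable, k):
    attempts, immortal = {}, set()
    changed = True
    while changed:
        changed = False
        for p in sorted(obj):
            if p in immortal or not is_border(p, obj):
                continue
            if is_deletable(p, obj):
                obj.discard(p)
                changed = True
            else:
                attempts[p] = attempts.get(p, 0) + 1
                if attempts[p] >= k:
                    immortal.add(p)      # skipped in all later iterations
    return obj

# Toy 1D demo: a run of points thinned from the left down to one survivor.
obj = set(range(5))
is_border = lambda p, o: (p - 1 not in o) or (p + 1 not in o)
is_deletable = lambda p, o: len(o) > 1 and (p - 1 not in o)   # delete left ends only
print(k_attempt_thinning(obj, is_border, is_deletable, k=3))  # {4}
```

The payoff described in the paper is that immortal points drop out of all subsequent deletability checks, which is where the speed-up over the conventional implementation comes from.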
Fuzzy Metaheuristic Algorithm for Copy - Move Forgery Detection in Image
Abstract
One of the most common ways of counterfeiting digital images is copy-move forgery, in which part of an image is copied and pasted into another part of the same image; the task of exposing such edits is known as copy-move forgery detection (CMFD). The purpose of such changes is to hide certain image content, or to duplicate image content. The aim of this paper is to propose a new clustering algorithm for edited images. The image is divided into non-overlapping blocks, and a new fuzzy metric is used to calculate the distance between the blocks. The metaheuristic variable neighbourhood search (VNS) method is used to classify the blocks, with the aim of dividing them into changed and unchanged ones. The results are compared with the latest results from the literature dealing with this problem, and the proposed algorithm is shown to give better results. Publicly available image databases were used, and the proposed algorithm was implemented in the Python programming language.
Nataša S. Milosavljević, Nebojša M. Ralević
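The block-based preprocessing step is common to most CMFD pipelines. A hypothetical NumPy sketch of it, not the authors' code: the paper uses a new fuzzy metric, for which a plain Euclidean distance is substituted here, and the toy image is an assumption:

```python
import numpy as np

# Divide an image into non-overlapping b x b blocks, then flag block pairs
# whose distance falls below a tolerance (candidate copy-move pairs).
def blocks(img, b):
    h, w = img.shape
    return [((i, j), img[i:i + b, j:j + b])
            for i in range(0, h - b + 1, b)
            for j in range(0, w - b + 1, b)]

def near_duplicates(img, b, tol=1e-6):
    bl = blocks(img, b)
    pairs = []
    for k in range(len(bl)):
        for l in range(k + 1, len(bl)):
            if np.linalg.norm(bl[k][1] - bl[l][1]) <= tol:   # Euclidean stand-in
                pairs.append((bl[k][0], bl[l][0]))
    return pairs

img = np.zeros((4, 4))
img[0:2, 0:2] = 5          # "copied" region
img[2:4, 2:4] = 5          # "pasted" region
print(near_duplicates(img, 2))
```

The paper replaces the pairwise distance with a fuzzy metric and the exhaustive pairing with VNS-based classification of the blocks.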
Backmatter
Metadata
Title
Combinatorial Image Analysis
Edited by
Dr. Tibor Lukić
Prof. Dr. Reneta P. Barneva
Prof. Dr. Valentin E. Brimkov
Lidija Čomić
Dr. Nataša Sladoje
Copyright year
2020
Electronic ISBN
978-3-030-51002-2
Print ISBN
978-3-030-51001-5
DOI
https://doi.org/10.1007/978-3-030-51002-2