
About this Book

This book constitutes the refereed proceedings of the 10th IAPR-TC-15 International Workshop on Graph-Based Representations in Pattern Recognition, GbRPR 2015, held in Beijing, China, in May 2015. The 36 papers presented in this volume were carefully reviewed and selected from 53 submissions. The accepted papers cover diverse issues of graph-based methods and applications, with 7 in graph representation, 15 in graph matching, 7 in graph clustering and classification, and 7 in graph-based applications.



Graph-Based Representation


Approximation of Graph Edit Distance in Quadratic Time

The basic idea of a recent graph matching framework is to reduce the problem of graph edit distance (GED) to an instance of a linear sum assignment problem (LSAP). The optimal solution for this simplified GED problem can be computed in cubic time and is eventually used to derive a suboptimal solution for the original GED problem. Yet, for large-scale graphs and/or large graph sets, the cubic time complexity remains a severe handicap of this procedure. Therefore, we propose to use suboptimal algorithms – with quadratic rather than cubic time complexity – for solving the underlying LSAP. In particular, we introduce several greedy assignment algorithms for approximating GED. In an experimental evaluation we show that there is great potential for further speeding up the GED computation. Moreover, we empirically confirm that the distances obtained by this procedure remain sufficiently accurate for graph-based pattern classification.

Kaspar Riesen, Miquel Ferrer, Andreas Fischer, Horst Bunke
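The contrast drawn in this abstract is between optimal LSAP solvers (cubic time) and greedy ones (quadratic time). As a rough illustration only, not the authors' implementation, a basic greedy solver over a node-assignment cost matrix might look like this:

```python
import numpy as np

def greedy_assignment(cost):
    """Greedy suboptimal solver for the linear sum assignment problem.

    Scans rows in order and assigns each row to the cheapest still-free
    column, giving O(n^2) time instead of the O(n^3) of an optimal
    (e.g. Hungarian) solver.
    """
    n = cost.shape[0]
    free = list(range(n))                        # columns not yet assigned
    assignment = []
    total = 0.0
    for i in range(n):
        j = min(free, key=lambda c: cost[i, c])  # cheapest free column
        free.remove(j)
        assignment.append((i, j))
        total += cost[i, j]
    return assignment, total
```

On the matrix [[1, 2], [1, 100]] the greedy choice in the first row blocks the cheap column for the second row, returning cost 101 where the optimal assignment costs 3; this accuracy/speed trade-off is exactly what the paper evaluates.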

Data Graph Formulation as the Minimum-Weight Maximum-Entropy Problem

Consider a point set sampled from an object using a digital sensor (depth range, camera, etc.). We are interested in finding a graph that represents that point set according to some properties. Such a representation would allow us to match two objects (graphs) by exploiting topological properties instead of relying solely on geometrical properties. The Delaunay triangulation is a common off-the-shelf strategy to triangulate a point set, and many researchers use it as the standard way to create the so-called data graph; despite its positive properties, it also has some drawbacks. We are interested in generating a graph with the following properties: the graph is (i) as unique as possible, and (ii) as discriminative as possible regarding the degree distribution. We pose a combinatorial optimization problem (the Min-Weight Max-Entropy Problem) to build such a graph by minimizing the total weight of the edges while maximizing the entropy of the degree distribution. Our optimization approach is based on Dynamic Programming (DP) and yields a polynomial-time algorithm.

Samuel de Sousa, Walter G. Kropatsch

An Entropic Edge Assortativity Measure

Assortativity, or assortative mixing, is the tendency of a network's vertices to connect to others with similar characteristics, and has been shown to play a vital role in the structural properties of complex networks. Most existing assortativity measures have been developed on the basis of vertex degree information. However, a significant amount of additional information resides in the edges of a network, such as edge directionality and weights. Moreover, the von Neumann entropy has proved to be an efficient entropic characterization of the structural and functional properties of both undirected and directed networks. Hence, in this paper we combine these two ingredients and propose a novel edge assortativity measure which quantifies the entropic preference of edges to form connections between similar vertices in undirected and directed graphs. We apply our novel assortativity characterization to both artificial random graphs and real-world networks. The experimental results demonstrate that our measure is effective in characterizing the structural complexity of networks and in classifying networks that belong to different complexity classes.

Cheng Ye, Richard C. Wilson, Edwin R. Hancock

A Subpath Kernel for Learning Hierarchical Image Representations

Tree kernels have demonstrated their ability to deal with hierarchical data, as the intrinsic tree structure often plays a discriminative role. While such kernels have been successfully applied to various domains such as natural language processing and bioinformatics, they mostly concentrate on ordered trees whose nodes are described by symbolic data. Meanwhile, hierarchical representations have gained increasing interest for describing image content. This is particularly true in remote sensing, where such representations allow for revealing different objects of interest at various scales through a tree structure. However, the induced trees are unordered and the nodes are equipped with numerical features. In this paper, we propose a new structured kernel for hierarchical image representations which is built on the concept of the subpath kernel. Experimental results on both artificial and remote sensing datasets show that the proposed kernel manages to deal with the hierarchical nature of the data, leading to better classification rates.

Yanwei Cui, Laetitia Chapel, Sébastien Lefèvre

Coupled-Feature Hypergraph Representation for Feature Selection

Real-world objects and their features tend to exhibit multiple relationships rather than simple pairwise ones, so a basic graph representation can lead to substantial loss of information. Hypergraph representations, on the other hand, allow vertices to be multiply connected by hyperedges and can hence capture multiple or higher-order relationships among features. Drawing on their effectiveness in representing multiple relationships, in this paper we build on recent work on hypergraph clustering to select the relevant feature subset (RFS) from a set of features using high-order (rather than pairwise) similarities. Specifically, we first devise a coupled feature representation that describes the data through self-coupled and inter-feature coupling relationships, which is more effective at capturing the intrinsic linear and nonlinear information of the data structure. On this new representation, we use an information-theoretic criterion, multivariate mutual information, to measure high-order feature combinations with respect to the class labels, and thus construct a coupled-feature hypergraph that models the high-order relations among features. Finally, we locate the RFS in the feature hypergraph by maximizing the features' average relevance, which yields both low redundancy and strong discriminating power; the size of the RFS is determined automatically. Experimental results demonstrate the effectiveness of our feature selection method on a number of standard data sets.

Zhihong Zhang, Jianbing Xiahou, Lu Bai, Edwin R. Hancock

Reeb Graphs Through Local Binary Patterns

This paper presents an approach to derive the critical points of a shape, the basis of a Reeb graph, using a combination of a medial axis skeleton and features along this skeleton. A Reeb graph captures the topology of a shape: the nodes in the graph represent critical points (positions of change in the topology), while edges represent topological persistence. We present an approach to compute such critical points using Local Binary Patterns. For one pixel, the Local Binary Pattern feature vector is derived by comparing this pixel to its neighbouring pixels within a certain radius. We start with an initial segmentation and a medial axis representation. Along this axis, critical points are computed using Local Binary Patterns, with the radius defining the neighbouring pixels set slightly larger than the radius given by the medial axis transformation. Critical points obtained in this way form the node set of a Reeb graph; edges are given by the connectivity of the skeleton. This approach aims at improving the representation of flawed segmented data. In the same way, segmentation artefacts, for example single pixels representing noise, may be corrected based on this analysis.

Ines Janusch, Walter G. Kropatsch
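As background, a minimal radius-1, 8-neighbour Local Binary Pattern can be sketched as follows; note that the paper ties the radius to the medial axis transform instead, so this fixed-radius version is only illustrative:

```python
import numpy as np

def lbp_code(img, y, x):
    """Basic 8-neighbour Local Binary Pattern for pixel (y, x).

    Each neighbour that is >= the centre pixel contributes one bit;
    the 8 bits are packed into an integer in [0, 255].
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise ring, radius 1
    centre = img[y, x]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy, x + dx] >= centre:
            code |= 1 << bit
    return code
```

A flat region yields the all-ones code 255, while a local maximum yields 0; it is such transitions along the skeleton that signal candidate critical points.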

Incremental Embedding Within a Dissimilarity-Based Framework

Structural pattern recognition methods based on strings or graphs provide a natural encoding of objects' relationships but can usually be combined with only a small set of machine learning methods. The last decade has seen major advances aiming to link these two fields. The two main research directions concern the design of new graph and string kernels and different explicit embedding schemes for structural data. Explicit embeddings of structural data can be combined with any machine learning method. Dissimilarity representation methods are important because they allow an explicit embedding and a connection with the kernel framework. However, these methods require the whole universe to be known during the learning phase and, to obtain a Euclidean embedding, the matrix encoding dissimilarities between all pairs of objects must be regularized. This last point somehow violates the usual separation between training and test sets, since both sets must be processed jointly, and is an important limitation in many practical applications where the test set is unbounded and unknown during the learning phase. Moreover, requiring the whole universe represents a bottleneck for processing massive datasets. In this paper, we propose to overcome these limitations through an incremental embedding based on dissimilarity representations. We study the pros and cons of two methods that compute, implicitly and separately, the embedding of points in the test set and in the learning set. Conclusions are drawn from experiments performed on different datasets.

Rachid Hafiane, Luc Brun, Salvatore Tabbone

Graph Matching


A First Step Towards Exact Graph Edit Distance Using Bipartite Graph Matching

In recent years, a powerful approximation framework for graph edit distance computation has been introduced. This particular approximation is based on an optimal assignment of local graph structures which can be established in polynomial time. However, as this approach considers only the local structural properties of the graphs, it yields sub-optimal solutions that in general overestimate the true edit distance. Recently, several attempts to reduce this overestimation have been made. The present paper is a starting point towards the study of sophisticated heuristics that can be integrated in these reduction strategies. These heuristics aim at further improving the overall distance quality while keeping the low computation time of the approximation framework. We propose an iterative version of one of the existing improvement strategies. An experimental evaluation clearly shows that there is substantial room for further reductions of the overestimation in the existing approximation framework.

Miquel Ferrer, Francesc Serratosa, Kaspar Riesen

Consensus of Two Graph Correspondences Through a Generalisation of the Bipartite Graph Matching

One of the most important processes in structural pattern recognition is to compare the involved objects by representing them as attributed graphs and using error-tolerant graph matching methods. To do so, a first step is needed to extract the graphs from the original objects and deduce the attribute values of nodes and edges. Depending on the application, there are several methods to obtain these graphs, so the object at hand can be represented by several graphs, not only with different nodes and edges but also with different attribute domains. When we have several graphs representing the same object, we can deduce several correspondences between graphs. In this work, we exploit this diversity to derive a final correspondence in which the incongruences introduced during graph extraction and graph matching are reduced. We present a consensus method which, given two correspondences between two pairs of attributed graphs generated by separate entities and with different attribute domains, derives a final consensus correspondence considering the existence of outliers. Our method is based on a generalisation of the Bipartite graph matching algorithm that minimises the edit cost of the consensus correspondence while forcing it, as far as possible, to be the mean correspondence of the two original correspondences.

Carlos Francisco Moreno-García, Francesc Serratosa, Xavier Cortés

Revisiting Volgenant-Jonker for Approximating Graph Edit Distance

Although it is agreed that the Volgenant-Jonker (VJ) algorithm provides a fast way to approximate graph edit distance (GED), until now nobody has reported how the VJ algorithm can be tuned for this task. To this end, we revisit VJ and propose a series of refinements that improve both the speed and memory footprint without sacrificing accuracy in the GED approximation. We quantify the effectiveness of these optimisations by measuring distortion between control-flow graphs: a problem that arises in malware matching. We also document an unexpected behavioural property of VJ in which the time required to find shortest paths to unassigned nodes decreases as graph size increases, and explain how this phenomenon relates to the birthday paradox.

William Jones, Aziem Chawdhary, Andy King

A Hypergraph Matching Framework for Refining Multi-source Feature Correspondences

In this paper, we develop a hypergraph matching framework which enables feature correspondence refinement for multi-source images. For images obtained from different sources (e.g., RGB images and infrared images), we first extract feature points by using one feature extraction scheme. We then establish feature point correspondences in terms of feature similarities. In this scenario, mismatches tend to occur because the feature extraction scheme may exhibit a certain ambiguity in characterizing feature similarities for multi-source images. To eliminate these mismatches, we establish an association hypergraph based on the feature point correspondences, where one vertex represents a feature point pair resulting from the feature matching and one hyperedge reflects the higher-order structural similarity among feature point tuples. We then reject the mismatches by identifying outlier vertices of the hypergraph through higher-order clustering. Our method is invariant to scale variation of objects because of its capability for characterizing higher-order structure. Furthermore, our method is computationally more efficient than existing hypergraph matching methods because the feature matching heavily reduces the enumeration of possible point tuples for establishing hypergraph models. Experimental results show the effectiveness of our method for refining feature matching.

He Zhang, Bin Du, Yanjiang Wang, Peng Ren

Kite Recognition by Means of Graph Matching

Kites are remnants of long stone walls that outline the shape of a child's kite, but on a huge scale; their size often makes them clearly visible on high-resolution satellite images. First identified in the Near East, their area of distribution is growing larger and larger. This wide distribution gives new dimensions to the interpretation of these structures. Consequently, a large-scale recognition of kites would help archaeologists understand the functionality of these enigmatic constructions. In this paper, we investigate how satellite imagery can be exploited for this purpose using a graph representation of the kites. We propose a similarity measure and a kite identification process that can highlight the preservation state of the kites. We also construct from real images a benchmark of kite graphs that can be used by other researchers.

Kamel Madi, Hamida Seba, Hamamache Kheddouci, Charles-Edmond Bichot, Olivier Barge, Christine Chataigner, Remy Crassard, Emmanuelle Reganon, Emmanuelle Vila

GEM++: A Tool for Solving Substitution-Tolerant Subgraph Isomorphism

The substitution-tolerant subgraph isomorphism is a particular error-tolerant subgraph matching that allows label substitutions for both vertices and edges. Such a matching is often required in pattern recognition applications since graphs extracted from images are generally labeled with features vectors computed from raw data which are naturally subject to noise. This paper describes an extended version of a Binary Linear Program (BLP) for solving this class of graph matching problem. The paper also presents GEM++, a software framework that implements the BLP and that we have made available for the research community. GEM++ allows the processing of different sub-problems (induced isomorphism or not, directed graphs or not) with complex labelling of vertices and edges. We also present some datasets available for evaluating future contributions in this field.

Julien Lerouge, Pierre Le Bodic, Pierre Héroux, Sébastien Adam

A Graph Database Repository and Performance Evaluation Metrics for Graph Edit Distance

Graph edit distance (GED) is an error-tolerant graph matching paradigm whose methods are often evaluated in a classification context and less deeply assessed in terms of the accuracy of the found solution. To evaluate the accuracy of GED methods, low-level information is required not only at the classification level but also at the matching level. Most publicly available repositories with associated ground truths are dedicated to evaluating graph classification or exact graph matching methods, so the matching correspondences as well as the distance between each pair of graphs are not directly evaluated. This paper consists of two parts. First, we provide a graph database repository annotated with low-level information such as graph edit distances and their matching correspondences. Second, we propose a set of performance evaluation metrics to assess the performance of GED methods.

Zeina Abu-Aisheh, Romain Raveaux, Jean-Yves Ramel

Improving Hausdorff Edit Distance Using Structural Node Context

In order to cope with the exponential time complexity of graph edit distance, several polynomial-time approximation algorithms have been proposed in recent years. The Hausdorff edit distance is a quadratic-time matching procedure for labeled graphs which reduces the edit distance to a correspondence problem between local substructures. In its original formulation, nodes and their adjacent edges have been considered as local substructures. In this paper, we integrate a more general structural node context into the matching procedure based on hierarchical subgraphs. In an experimental evaluation on diverse graph data sets, we demonstrate that the proposed generalization of Hausdorff edit distance can significantly improve the accuracy of graph classification while maintaining low computational complexity.

Andreas Fischer, Seiichi Uchida, Volkmar Frinken, Kaspar Riesen, Horst Bunke
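The original (node-only) flavour of the Hausdorff edit distance can be sketched as below. This toy version ignores the adjacent edges of the original formulation and the hierarchical node context proposed in the paper, and assumes hypothetical unit costs; substitutions are halved because each is counted from both sides:

```python
def hed(labels1, labels2, c_del=1.0,
        c_sub=lambda a, b: 0.0 if a == b else 1.0):
    """Simplified Hausdorff edit distance on node labels only.

    Each node independently picks its cheapest counterpart (or deletion),
    so the whole lower bound is computed in quadratic time.
    """
    total = 0.0
    for a in labels1:
        total += min(c_del, min((c_sub(a, b) / 2 for b in labels2),
                                default=c_del))
    for b in labels2:
        total += min(c_del, min((c_sub(a, b) / 2 for a in labels1),
                                default=c_del))
    return total
```

Because every node chooses its match independently, the result is a lower bound on the true edit distance rather than a consistent correspondence, which is why the structural node context of the paper matters for accuracy.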

Learning Graph Model for Different Dimensions Image Matching

Hyperspectral imagery has been widely used in real applications such as remote sensing, agriculture, surveillance, and geological analysis. Matching hyperspectral images is a challenging task due to the high-dimensional nature of the data. The matching task becomes more difficult when images with different dimensions, such as a hyperspectral image and an RGB image, have to be matched. In this paper, we address this problem by using a structured support vector machine to learn a graph model for each type of image. The graph model incorporates both low-level features and stable correspondences within images. The inherent characteristics are captured by running a graph matching algorithm on the weighted graph models. We validate the effectiveness of our method through experiments on matching hyperspectral images to RGB images, and hyperspectral images with different dimensions.

Haoyi Zhou, Xiao Bai, Jun Zhou, Haichuan Yang, Yun Liu

VF2 Plus: An Improved Version of VF2 for Biological Graphs

Subgraph isomorphism is a common problem in several application fields where graphs are the best-suited data representation, but it is known to be NP-complete. However, several algorithms exist that are fast enough on commonly encountered graphs to be practically usable; among them, for more than a decade VF2 has been the state-of-the-art algorithm for this problem, and it is still the reference algorithm for many applications. Nevertheless, VF2 was designed and implemented ten years ago, when the structural features of commonly used graphs were considerably different. Hence, a renovation is required to make the algorithm able to compete with the challenges that have arisen in recent years, such as graph matching on the very large graphs coming from bioinformatics applications. In this paper we propose a significant set of enhancements to the original VF2 algorithm that enable it to compete with more recently proposed graph matching techniques. Finally, we evaluate the effectiveness of these enhancements by comparing the matching performance with both the original VF2 and several recent algorithms, using both the widely known MIVIA graph database and another public dataset containing real-world graphs from bioinformatics applications.

Vincenzo Carletti, Pasquale Foggia, Mario Vento

Report on the First Contest on Graph Matching Algorithms for Pattern Search in Biological Databases

Graphs are a powerful data structure that can be applied to several problems in bioinformatics, and efficient graph matching is often required for applications that try to extract useful information from large databases of graphs. While graph matching is in general an NP-complete problem, several algorithms exist that can be fast enough on practical graphs. However, no single algorithm is able to outperform the others on every kind of graph, so it is of paramount importance to assess the algorithms on graphs coming from the actual problem domain. To this end, we have organized the first edition of the Contest on Graph Matching Algorithms for Pattern Search in Biological Databases, hosted by the ICPR 2014 conference, so as to provide an opportunity for comparing state-of-the-art matching algorithms on a new graph database built using several kinds of real-world graphs found in bioinformatics applications. The participating algorithms were evaluated with respect to both their computation time and their memory usage. This paper describes the contest task and databases, provides a brief outline of the participating algorithms, and presents the results of the contest.

Vincenzo Carletti, Pasquale Foggia, Mario Vento, Xiaoyi Jiang

Approximate Graph Edit Distance Computation Combining Bipartite Matching and Exact Neighborhood Substructure Distance

Graph edit distance is a flexible graph dissimilarity measure. Unfortunately, its computation has exponential complexity in the number of nodes of the graphs being compared. Heuristics based on bipartite assignment algorithms have been proposed to approximate the graph edit distance. However, these heuristics lack accuracy, since they are based either on small patterns providing too local an information or on walks whose tottering induces bias in the edit distance computation. In this work, we propose to extend previous heuristics by considering both less local and more accurate patterns, using subgraphs defined around each node.

Vincenzo Carletti, Benoit Gaüzère, Luc Brun, Mario Vento

Multi-layer Tree Matching Using HSTs

Matching two images by mapping image features plays a fundamental role in many computer vision tasks. Due to the noisy nature of feature extraction, establishing a one-to-one matching of features may not always be possible. Although many-to-many matching techniques establish the desired multi-map between features, they ignore the spatial structure of the nodes. In this paper, we propose a novel technique that utilizes both individual node features and the clustering information of nodes for image matching, where image features are represented as hierarchically well-separated trees (HSTs). Our method uses the fact that non-leaf nodes of an HST represent a constellation of nodes in the original image and obtains a matching by finding a mapping between non-leaf nodes of the two HSTs. Empirical evaluation of the method on an extensive set of recognition tests shows the robustness and efficiency of the overall approach.

Yusuf Osmanlıoğlu, Ali Shokoufandeh

Large-Scale Graph Indexing Using Binary Embeddings of Node Contexts

Graph-based representations are experiencing a growing usage in visual recognition and retrieval due to their representational power compared with classical appearance-based representations in terms of feature vectors. Retrieving a query graph from a large dataset of graphs has the drawback of the high computational complexity required to compare the query and the target graphs. The most important property for large-scale retrieval is a search time complexity that is sub-linear in the number of database examples. In this paper we propose a fast indexation formalism for graph retrieval. A binary embedding is defined as hashing keys for graph nodes. Given a database of labeled graphs, graph nodes are complemented with vectors of attributes representing their local context: each attribute counts the walks of a given order originating in a vertex with a given label. Each attribute vector is converted to a binary code by applying a binary-valued hash function. Therefore, graph retrieval is formulated in terms of finding target graphs in the database whose nodes have a small Hamming distance from the query nodes, easily computed with bitwise logical operators. As an application example, we validate the performance of the proposed method in a handwritten word spotting scenario in images of historical documents.

Pau Riba, Josep Lladós, Alicia Fornés, Anjan Dutta
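A toy version of the described indexing idea, with assumed details (walk counts from adjacency-matrix powers, a median-threshold hash), might be:

```python
import numpy as np

def node_context(A, labels, max_order=2):
    """Count walks of order 1..max_order from each node to each label.

    A is the adjacency matrix; (A^k)[i, j] counts walks of length k from
    i to j, so summing columns grouped by label gives the context vector.
    """
    n = len(labels)
    uniq = sorted(set(labels))
    feats = []
    Ak = np.eye(n, dtype=int)
    for _ in range(max_order):
        Ak = Ak @ A
        feats.append(np.stack(
            [Ak[:, [j for j in range(n) if labels[j] == l]].sum(axis=1)
             for l in uniq], axis=1))
    return np.concatenate(feats, axis=1)

def binarize(F):
    """Hash context vectors to binary codes by thresholding at the median."""
    return (F > np.median(F, axis=0)).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))
```

In a real index the binary codes would be packed into machine words so that the Hamming distance reduces to XOR plus popcount, which is what makes the retrieval step fast.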

Attributed Relational Graph Matching with Sparse Relaxation and Bistochastic Normalization

The attributed relational graph (ARG) matching problem can usually be formulated as an Integer Quadratic Programming (IQP) problem. Since it is NP-hard, relaxation methods are required. In this paper, we propose a new relaxation method, called Bistochastic Preserving Sparse Relaxation Matching (BPSRM), for the ARG matching problem. The main benefit of BPSRM is that mapping constraints involving both the discrete and the bistochastic constraint can be well incorporated in the BPSRM optimization. Thus, it can generate an approximate binary solution with a one-to-one mapping constraint for the ARG matching problem. Experimental results show the effectiveness of the proposed method.

Bo Jiang, Jin Tang, Bin Luo

Graph Clustering and Classification


On the Influence of Node Centralities on Graph Edit Distance for Graph Classification

Classical graph approaches for pattern recognition applications rely on computing distances between graphs in the graph domain. That is, the distance between two graphs is obtained by directly optimizing some objective function that considers node and edge attributes. Bipartite Graph Matching was first published in a journal in 2009, and new versions have since appeared to speed up its runtime, such as Fast Bipartite Graph Matching. This algorithm is based on defining a cost matrix between all nodes of both graphs and solving the node correspondence through a linear assignment method. To construct the matrix, several local structures can be defined, from the simplest (only the node) to the most complex (a whole clique or an eigenvector structure). In this paper, we propose five different options and show that the type of local structure and the distance defined between these structures is relevant for graph classification.

Xavier Cortés, Francesc Serratosa, Carlos F. Moreno-García

A Mixed Weisfeiler-Lehman Graph Kernel

Using concepts from the Weisfeiler-Lehman (WL) test of isomorphism, we propose a mixed WL graph kernel (MWLGK) framework based on a family of efficient WL graph kernels for constructing a mixed graph kernel. This family of kernels can be defined based on the WL sequence of graphs. We apply the MWLGK framework to the WL graph sequence, taking into account structural information that was previously overlooked. Our MWLGK is competitive with or outperforms the corresponding single WL graph kernel on several classification benchmark data sets.

Lixiang Xu, Jin Xie, Xiaofeng Wang, Bin Luo
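For context, the WL relabeling that underlies such kernel families can be sketched as follows; this is a generic illustration, not the MWLGK construction itself:

```python
from collections import Counter

def wl_features(adj, labels, iterations=2):
    """Weisfeiler-Lehman subtree features of a graph.

    adj maps each node to its neighbours; at every iteration each node's
    label is replaced by a compressed label built from its own label and
    the sorted multiset of its neighbours' labels.  The Counter over all
    labels seen at all iterations is the feature map.
    """
    feats = Counter(labels.values())
    for _ in range(iterations):
        labels = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in adj}
        feats.update(labels.values())
    return feats

def wl_kernel(f1, f2):
    """WL subtree kernel: dot product of the two feature maps."""
    return sum(f1[k] * f2[k] for k in f1)
```

The WL sequence mentioned in the abstract is exactly the sequence of relabeled graphs produced by these iterations; a mixed kernel combines kernels computed at the different iterations.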

A Quantum Jensen-Shannon Graph Kernel Using Discrete-Time Quantum Walks

In this paper, we develop a new graph kernel by using the quantum Jensen-Shannon divergence and the discrete-time quantum walk. To this end, we commence by performing a discrete-time quantum walk to compute a density matrix over each graph being compared. For a pair of graphs, we compare the mixed quantum states represented by their density matrices using the quantum Jensen-Shannon divergence. With the density matrices for a pair of graphs to hand, the quantum graph kernel between the pair of graphs is defined by exponentiating the negative quantum Jensen-Shannon divergence between the graph density matrices. We evaluate the performance of our kernel on several standard graph datasets, and demonstrate the effectiveness of the new kernel.

Lu Bai, Luca Rossi, Peng Ren, Zhihong Zhang, Edwin R. Hancock
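Given density matrices (however obtained; the quantum-walk step is omitted here), the final kernel step described above reduces to a few lines:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                 # drop zero eigenvalues (0 log 0 = 0)
    return float(-(w * np.log2(w)).sum())

def qjsd_kernel(rho1, rho2):
    """Kernel value exp(-QJSD) between two density matrices."""
    qjsd = von_neumann_entropy((rho1 + rho2) / 2) \
        - (von_neumann_entropy(rho1) + von_neumann_entropy(rho2)) / 2
    return float(np.exp(-qjsd))
```

Identical states give kernel value 1, and orthogonal pure states give exp(-1), the minimum for the base-2 divergence.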

Density Based Cluster Extension and Dominant Sets Clustering

With the pairwise data similarity matrix as input, dominant sets clustering has been shown to be a promising clustering approach with some nice properties. However, its clustering results are found to be influenced by the similarity parameter used in building the similarity matrix. While histogram equalization transformation of the similarity matrices removes this influence effectively, this transformation causes over-segmentation in the clustering results. In this paper we present a density based cluster extension algorithm to solve the over-segmentation problem. Specifically, we determine the density threshold based on the minimum possible density inside the dominant sets and then add new members into clusters if the density requirement is satisfied. Our algorithm is shown to perform better than the original dominant sets algorithm and also some other state-of-the-art clustering algorithms in data clustering and image segmentation experiments.

Jian Hou, Chunshi Sha, Xu E, Qi Xia, Naiming Qi

Salient Object Segmentation from Stereoscopic Images

In this paper, we propose an effective object segmentation approach for stereoscopic images that incorporates saliency and depth information into graph cut. A saliency model based on color and depth is first used to generate the saliency map. Then a graph cut based on saliency and depth information, together with a saliency-weighted histogram, is proposed to segment salient objects in one cut. Experimental results on a public stereoscopic image dataset with ground truths of salient objects demonstrate that the proposed approach outperforms state-of-the-art salient object segmentation approaches.

Xingxing Fan, Zhi Liu, Linwei Ye

Causal Video Segmentation Using Superseeds and Graph Matching

The goal of video segmentation is to group pixels into meaningful spatiotemporal regions that exhibit coherence in appearance and motion. Causal video segmentation methods use only past video frames to achieve the final segmentation. The problem becomes extremely challenging due to the size of the input, camera motion, occlusions, non-rigid object motion, and uneven illumination. In this paper, we propose a novel framework for semantic segmentation of causal video using superseeds and graph matching. We first employ SLIC to extract superpixels in a causal video frame. A set of superseeds is chosen from the superpixels in each frame using a color- and texture-based spatial affinity measure. Temporal coherence is ensured through propagation of the superseed labels across each pair of adjacent frames. A graph matching procedure based on comparing the eigenvalues of graph Laplacians is employed for label propagation. The watershed algorithm is finally applied to label the remaining pixels and achieve the final segmentation. Experimental results clearly indicate the advantage of the proposed approach over some recently reported works.

Vijay N. Gangapure, Susmit Nanda, Ananda S. Chowdhury, Xiaoyi Jiang
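The Laplacian-spectrum comparison behind such a matching step can be sketched as follows. This is a minimal illustration, not the authors' pipeline; the toy adjacency matrices stand in for small superpixel neighborhood graphs:

```python
import numpy as np

def laplacian_spectrum(adj):
    """Sorted eigenvalues of the unnormalized graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))

def spectral_distance(adj_a, adj_b):
    """Distance between two graphs: Euclidean distance between their
    (zero-padded) Laplacian eigenvalue sequences."""
    ea, eb = laplacian_spectrum(adj_a), laplacian_spectrum(adj_b)
    n = max(len(ea), len(eb))
    ea = np.pad(ea, (0, n - len(ea)))
    eb = np.pad(eb, (0, n - len(eb)))
    return float(np.linalg.norm(ea - eb))

# Two toy neighbourhood graphs: a triangle and a 3-node chain.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
chain = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(spectral_distance(tri, tri))    # identical graphs -> 0.0
print(spectral_distance(tri, chain))  # spectra {0,3,3} vs {0,1,3} -> 2.0
```

Graphs with small spectral distance are treated as likely matches; the actual label propagation then transfers superseed labels between the matched regions.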

Fast Minimum Spanning Tree Based Clustering Algorithms on Local Neighborhood Graph

Minimum spanning tree (MST) based clustering algorithms have been employed successfully to detect clusters of heterogeneous nature. Given a dataset of $n$ random points, most of the MST-based clustering algorithms first generate a complete graph $G$ of the dataset and then construct the MST from $G$. The first step of the algorithm is the major bottleneck, which takes $O(n^2)$ time. This paper proposes two algorithms, namely MST-based clustering on K-means Graph and MST-based clustering on Bi-means Graph, for reducing the computational overhead. The proposed algorithms make use of a centroid-based nearest neighbor rule to generate a partition-based Local Neighborhood Graph ($LNG$). We prove that both the size of $LNG$ and the computational time to construct it are $O(n^{3/2})$, which is a $O(\sqrt n)$ factor improvement over the traditional algorithms. The approximate MST is constructed from $LNG$ in $O(n^{3/2} \lg n)$ time, which is asymptotically faster than $O(n^2)$. The advantage of the proposed algorithms is that they do not require any parameter setting, which is a major issue in many of the nearest neighbor finding algorithms. Experimental results demonstrate that the computational time is reduced significantly while maintaining the quality of the clusters obtained from the MST.

R. Jothi, Sraban Kumar Mohanty, Aparajita Ojha
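The overall idea, building the approximate MST on a sparse local graph instead of the complete graph and then cutting heavy edges to obtain clusters, can be sketched as follows. Note this uses an off-the-shelf k-nearest-neighbour graph as a stand-in for the paper's centroid-based LNG construction:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial import cKDTree

def mst_clusters(points, k=5, n_clusters=2):
    """MST-based clustering on a sparse k-NN graph: build the local
    graph, take its MST, then cut the heaviest edges until the target
    number of clusters is reached."""
    n = len(points)
    dist, idx = cKDTree(points).query(points, k=k + 1)   # self + k neighbours
    rows = np.repeat(np.arange(n), k)
    graph = csr_matrix((dist[:, 1:].ravel(), (rows, idx[:, 1:].ravel())),
                       shape=(n, n))
    graph = graph.maximum(graph.T)                        # symmetrize
    mst = minimum_spanning_tree(graph).toarray()
    mst = np.maximum(mst, mst.T)
    n_comp, _ = connected_components(csr_matrix(mst), directed=False)
    for _ in range(max(0, n_clusters - n_comp)):          # cut heaviest edges
        i, j = np.unravel_index(np.argmax(mst), mst.shape)
        mst[i, j] = mst[j, i] = 0.0
    _, labels = connected_components(csr_matrix(mst), directed=False)
    return labels

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 1.0, (20, 2)),
                 rng.normal(10.0, 1.0, (20, 2))])         # two distant blobs
labels = mst_clusters(pts, k=5, n_clusters=2)
```

Replacing the complete graph (quadratic in the number of points) with a sparse local graph is exactly where the complexity saving described in the abstract comes from.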

Graph-Based Applications


From Bags to Graphs of Stereo Subgraphs in Order to Predict Molecule's Properties

Quantitative Structure Activity and Property Relationships (QSAR and QSPR) aim to predict properties of molecules by means of computational techniques. In these fields, graphs provide a natural encoding of molecules. However, some molecules may have the same graph but differ in the three-dimensional orientation of their atoms in space. These molecules, called stereoisomers, may have different properties which cannot be correctly predicted using usual graph encodings. In a previous paper we proposed to encode the stereoisomerism property of each atom by a local subgraph. A kernel between bags of such subgraphs then provides a similarity measure incorporating stereoisomerism properties. However, such an approach does not take into account potential interactions between these subgraphs. In this paper we thus propose a method to take these interactions into account, hence providing a global point of view on molecules' stereoisomerism properties.

Pierre-Anthony Grenier, Luc Brun, Didier Villemin

Thermodynamics of Time Evolving Networks

In this paper, we present a novel and effective method for better understanding the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure. We commence from the spectrum of the normalized Laplacian of a network. We show that by defining the normalized Laplacian eigenvalues as the microstate occupation probabilities of a complex system, the recently developed von Neumann entropy can be interpreted as the thermodynamic entropy of the network. Then, we give an expression for the internal energy of a network and derive a formula for the network temperature as the ratio of the change in entropy to the change in energy. We show how these thermodynamic variables can be computed in terms of node degree statistics for nodes connected by edges. We apply the thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages of network evolution.

Cheng Ye, Andrea Torsello, Richard C. Wilson, Edwin R. Hancock
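The entropy computation described above can be sketched directly from the definition, with the normalized Laplacian eigenvalues divided by the number of nodes serving as occupation probabilities (the internal energy and temperature estimates are omitted here):

```python
import numpy as np

def von_neumann_entropy(adj):
    """Von Neumann entropy of a graph: eigenvalues of the normalized
    Laplacian, divided by |V|, used as microstate occupation
    probabilities."""
    deg = adj.sum(axis=1)
    d = np.zeros_like(deg)
    d[deg > 0] = deg[deg > 0] ** -0.5
    norm_lap = np.eye(len(adj)) - d[:, None] * adj * d[None, :]
    # Probabilities sum to 1 when there are no isolated nodes (trace = |V|).
    p = np.linalg.eigvalsh(norm_lap) / len(adj)
    p = p[p > 1e-12]                   # convention: 0 log 0 = 0
    return float(-np.sum(p * np.log(p)))

# Triangle graph: spectrum {0, 3/2, 3/2} -> probabilities {0, 1/2, 1/2}.
k3 = np.ones((3, 3)) - np.eye(3)
print(von_neumann_entropy(k3))         # ln 2, about 0.6931
```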

Isometric Mapping Hashing

Hashing is a popular solution to Approximate Nearest Neighbor (ANN) problems. Many hashing schemes aim at preserving the Euclidean distance of the original data. However, it is the geodesic distance rather than the Euclidean distance that more accurately characterizes the semantic similarity of data, especially in a high dimensional space. Consequently, manifold based hashing methods have achieved higher accuracy than conventional hashing schemes. To compute the geodesic distance, one should construct a nearest neighbor graph and invoke the shortest path algorithm, which is too expensive for a retrieval task. In this paper, we present a hashing scheme that preserves the geodesic distance and use a feasible out-of-sample method to generate the binary codes efficiently. The experiments show that our method outperforms several alternative hashing methods.

Yanzhen Liu, Xiao Bai, Haichuan Yang, Zhou Jun, Zhihong Zhang
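The geodesic-distance preprocessing the abstract refers to, a nearest neighbor graph followed by all-pairs shortest paths, can be sketched as follows; this is the expensive step being characterized, not the proposed hashing scheme itself:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import cKDTree

def geodesic_distances(points, k=4):
    """Approximate geodesic distances: Euclidean edge weights on a
    k-NN graph, then all-pairs shortest paths along the graph."""
    n = len(points)
    dist, idx = cKDTree(points).query(points, k=k + 1)   # self + k neighbours
    rows = np.repeat(np.arange(n), k)
    graph = csr_matrix((dist[:, 1:].ravel(), (rows, idx[:, 1:].ravel())),
                       shape=(n, n))
    return shortest_path(graph, directed=False)

# Points on a unit circle: the geodesic follows the arc, not the chord.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)]
D = geodesic_distances(pts, k=2)
print(np.linalg.norm(pts[0] - pts[30]))  # Euclidean diameter: 2.0
print(D[0, 30])                          # graph geodesic: close to pi
```

The circle example shows why geodesic distance captures manifold structure that Euclidean distance misses: antipodal points are Euclidean distance 2 apart but roughly pi apart along the data manifold.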

Skeletal Graphs from Schrödinger Magnitude and Phase

Given an adjacency matrix, the problem of finding a simplified version of its associated graph is a challenging one. In this regard, it is desirable to retain the essential connectivity of the original graph in the new representation. To this end, we exploit the magnitude and phase information contained in the Schrödinger Operator of the Laplacian. Recent findings based on continuous-time quantum walks suggest that the long-time averages of both magnitude and phase are sensitive to long-range interactions. In this paper, we start from this hypothesis to propose a novel representation: skeletal graphs. Using the degree of interaction (or “long-rangedness”) as a criterion, we propose a structural level set (i.e. a sequence of graphs) from which the simplified graph emerges. In addition, since the same theory can be applied to weighted graphs, we can analyze the implications of the new representation for the problems of spectral clustering, hashing and ranking in computer vision. In our experiments we show how the coherent transport phenomenon implemented by quantum walks discovers long-range interactions without breaking the structure of the manifolds on which the graph or its connected components are embedded.

Francisco Escolano, Edwin R. Hancock, Miguel A. Lozano
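A rough sketch of the long-time magnitude average for a continuous-time quantum walk U(t) = exp(-iLt); the coarse time grid and the use of the unnormalized Laplacian are illustrative assumptions, not choices taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

def ctqw_mean_magnitude(adj, times):
    """Time-averaged magnitudes |U(t)| of the continuous-time quantum
    walk U(t) = exp(-i L t), with L the unnormalized graph Laplacian.
    Since U(t) is unitary, every entry magnitude lies in [0, 1]."""
    lap = np.diag(adj.sum(axis=1)) - adj
    acc = np.zeros(adj.shape)
    for t in times:
        acc += np.abs(expm(-1j * lap * t))
    return acc / len(times)

# 4-node path graph, averaged over a coarse time grid.
p4 = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
M = ctqw_mean_magnitude(p4, np.linspace(0.1, 20.0, 100))
```

Large averaged magnitudes between distant nodes are the kind of long-range interaction signal the skeletal-graph construction thresholds on.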

Graph Based Lymphatic Vessel Wall Localisation and Tracking

We present a novel hidden Markov model (HMM) based approach to segment and track the lymph vessel in confocal microscopy images. The vessel borders are parameterised by radial basis functions (RBFs) so that the number of tracking points is reduced to very few. The proposed method tracks the hidden states that determine the border location along a set of normal lines obtained from the previous frame. The border observation is derived from edge-based features using steerable filters. Two Gaussian probability distributions, for the vessel border and the background, are used to infer the emission probability. The transition probability is learnt using the Baum-Welch algorithm. A new optimisation method for determining the best sequence of hidden states is introduced. We transform the segmentation problem into a minimisation of s-excess graph cost. Each node in the graph corresponds to one state, and the weight of each node is defined using its emission probability. The inter-relation between neighbouring nodes is defined using the transition probability. The optimal solution can be found in polynomial time using the s-t cut algorithm. Qualitative and quantitative analysis of the method on lymphatic vessel segmentation shows superior performance of the proposed method compared to the traditional Viterbi algorithm.

Ehab Essa, Xianghua Xie, Jonathan-Lee Jones
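For reference, the traditional Viterbi baseline that the s-excess graph formulation is compared against can be sketched in a few lines (log-domain, with illustrative toy probabilities):

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely hidden-state sequence for an HMM.
    log_emit: (T, S) per-frame log emission scores,
    log_trans: (S, S) log transition matrix, log_init: (S,) log prior."""
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans        # previous state x next state
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(S)] + log_emit[t]
    states = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):                # backtrack
        states.append(int(back[t][states[-1]]))
    return states[::-1]

# Toy 2-state model: uniform transitions, emissions strongly pick a state.
log_emit = np.log(np.array([[0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]))
log_trans = np.log(np.full((2, 2), 0.5))
log_init = np.log(np.full(2, 0.5))
print(viterbi(log_emit, log_trans, log_init))   # [0, 1, 1]
```

In the paper's setting the states are candidate border positions along each normal line, with the Gaussian models supplying the emission scores.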

A Comic Retrieval System Based on Multilayer Graph Representation and Graph Mining

Comics analysis offers many interesting Content-based Image Retrieval (CBIR) applications focused on this special type of document image. In this paper, we propose a scheme to represent and retrieve comic-page images using attributed Region Adjacency Graphs (RAGs) and their frequent subgraphs. We first extract the graphical structures and local features of each panel in the whole comic volume, then separate different categories of local features into different layers of attributed RAGs. After that, a list of frequent subgraphs is obtained using Frequent Subgraph Mining (FSM) techniques. For CBIR purposes, recognition and ranking are done by checking for isomorphism between the graphs representing the query image and the list of discovered frequent subgraphs. Our experimental results show that the proposed approach achieves reliable retrieval results for comic image browsing using the query-by-example (QBE) model.

Thanh-Nam Le, Muhammad Muzzamil Luqman, Jean-Christophe Burie, Jean-Marc Ogier
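The isomorphism check at the heart of the ranking step can be illustrated with a brute-force induced-subgraph test. This is only viable for the small frequent subgraphs FSM typically returns and is not the matching procedure used in the paper:

```python
import numpy as np
from itertools import permutations

def contains_induced(adj_g, adj_h):
    """Brute-force test of whether H occurs as a node-induced subgraph
    of G, by trying every ordered node selection (exponential cost, so
    toy sizes only)."""
    n, m = len(adj_g), len(adj_h)
    for nodes in permutations(range(n), m):
        if np.array_equal(adj_g[np.ix_(nodes, nodes)], adj_h):
            return True
    return False

k4 = (np.ones((4, 4)) - np.eye(4)).astype(int)                        # K4
p4 = (np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)).astype(int)   # path
tri = (np.ones((3, 3)) - np.eye(3)).astype(int)                       # triangle
print(contains_induced(k4, tri))   # True: K4 contains triangles
print(contains_induced(p4, tri))   # False: a path has no cycle
```

Practical subgraph matchers (e.g. VF2-style backtracking) prune this search aggressively and additionally compare node and edge attributes, which the attributed RAGs of the paper rely on.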

Learning High-Order Structures for Texture Retrieval

Learning interaction structures of graphical models, such as a high-order Markov-Gibbs random field (MGRF), is an open challenge. For a translation and contrast/offset invariant MGRF model of image texture, we sequentially construct higher-order cliques from the lower-order ones, starting from characteristic 2nd-order interactions (i.e. edges of the interaction graph). Every next-order clique is built by adding the available required edges to one of the current cliques. Experiments on texture databases resulted in interpretable and intuitive learned cliques of up to 20th order. The learned high-order cliques consistently outperform in texture retrieval the multiple 2nd-order ones and the state-of-the-art local binary (LBP) and ternary patterns (LTP).

Ni Liu, Georgy Gimel’farb, Patrice Delmas
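The local binary pattern used as a baseline above can be sketched as follows; this is the basic unrotated 8-neighbour LBP in its standard formulation, assumed for illustration rather than taken from the authors' code:

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern codes for the interior pixels
    of a 2-D grayscale array: one bit per neighbour, set when the
    neighbour is at least as bright as the centre."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    code = np.zeros(center.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= center).astype(np.uint8) << bit
    return code

flat = np.ones((4, 4))
codes = lbp_codes(flat)   # flat patch: every neighbour >= centre -> 255
```

Texture retrieval with LBP then compares histograms of these codes; the high-order cliques of the paper generalize this fixed 3x3 neighbourhood to learned interaction structures.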

