
About this Book

This book constitutes the proceedings of the Third International Workshop on Similarity-Based Pattern Analysis and Recognition, SIMBAD 2015, which was held in Copenhagen, Denmark, in October 2015. The 15 full and 8 short papers presented were carefully reviewed and selected from 30 submissions. The workshop focused on problems, techniques, applications, and perspectives: from supervised to unsupervised learning, from generative to discriminative models, and from theoretical issues to empirical validations.

Table of Contents

Frontmatter

A Novel Data Representation Based on Dissimilarity Increments

Abstract
Many pattern recognition techniques have been proposed, typically relying on feature spaces. However, recent studies have shown that different data representations, such as the dissimilarity space, can help in the knowledge discovery process by generating more informative spaces. Still, different measures can be applied, leading to different data representations. This paper proposes the application of a second-order dissimilarity measure, which uses triplets of nearest neighbors, to generate a new dissimilarity space. Compared with the traditional Euclidean distance, this new representation is better suited for the identification of natural data sparsity. It leads to a space that better describes the data, by reducing the overlap of the classes and by increasing the discriminative power of features. As a result, the application of clustering algorithms over the proposed dissimilarity space yields reduced error rates when compared with either the original feature space or the Euclidean dissimilarity space. These conclusions are supported by experimental validation on benchmark datasets.
Helena Aidos, Ana Fred
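The two representations contrasted in the abstract above can be sketched in a few lines. This is an illustrative reading, not the paper's exact formulation: `dissimilarity_space` is the standard prototype-distance representation, and `dissimilarity_increment` shows the simplified second-order triplet form |d(xi,xj) − d(xj,xk)| over chained nearest neighbors; both function names are ours.

```python
import numpy as np

def dissimilarity_space(X, prototypes, metric):
    """Represent each object by its vector of distances to a set of
    prototypes (the classical dissimilarity-space construction)."""
    return np.array([[metric(x, p) for p in prototypes] for x in X])

def dissimilarity_increment(xi, xj, xk):
    """Second-order measure over a triplet of chained nearest
    neighbours: |d(xi, xj) - d(xj, xk)|.  Simplified illustrative
    form; the paper's exact definition may differ."""
    d = lambda a, b: float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))
    return abs(d(xi, xj) - d(xj, xk))
```

For collinear points at 0, 1, and 3, the increment is |1 − 2| = 1: the measure reacts to the change in local spacing rather than to the raw distances themselves, which is what makes it sensitive to natural data sparsity.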

Characterizing Multiple Instance Datasets

Abstract
In many pattern recognition problems, a single feature vector is not sufficient to describe an object. In multiple instance learning (MIL), objects are represented by sets (bags) of feature vectors (instances). This requires an adaptation of standard supervised classifiers in order to train and evaluate on these bags of instances. As in supervised classification, several benchmark datasets and numerous classifiers are available for MIL. When performing a comparison of different MIL classifiers, it is important to understand the differences between the datasets used in the comparison. Seemingly different (based on factors such as dimensionality) datasets may elicit very similar behaviour in classifiers, and vice versa. This has implications for what kind of conclusions may be drawn from the comparison results. We aim to give an overview of the variability of available benchmark datasets and some popular MIL classifiers. We use a dataset dissimilarity measure, based on the differences between the ROC curves obtained by different classifiers, and embed this dataset dissimilarity matrix into a low-dimensional space. Our results show that conceptually similar datasets can behave very differently. We therefore recommend examining such dataset characteristics when making comparisons between existing and new MIL classifiers. Data and other resources are available at http://www.miproblems.org.
Veronika Cheplygina, David M. J. Tax

Supervised Learning of Diffusion Distance to Improve Histogram Matching

Abstract
In this paper we propose a learning method properly designed for histogram comparison. We base our approach on the so-called diffusion distance, which was introduced to improve robustness against quantization effects and the limitations of standard bin-to-bin distance computation. We revise the diffusion distance definition in order to cast histogram matching as a distance metric learning problem. In particular, we exploit the Large Margin Nearest Neighbor (LMNN) classification procedure to introduce a supervised version of the standard nearest neighbor (NN) classification paradigm.
We evaluate our method on several application domains, namely brain classification, texture classification, and image classification. In all the experiments our approach showed promising results in comparison with other similar methods.
Tewodros M. Dagnew, Umberto Castellani
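The diffusion distance the abstract builds on (Ling and Okada's formulation) can be sketched for 1-D histograms: the difference histogram is repeatedly smoothed and downsampled, and the L1 norms across the pyramid are summed. The kernel weights and level count below are illustrative choices, not the paper's settings.

```python
import numpy as np

def diffusion_distance(h1, h2, levels=3, kernel=(0.25, 0.5, 0.25)):
    """Diffusion distance between two 1-D histograms: sum of L1 norms
    of the difference histogram over a Gaussian-like pyramid.  Smoothing
    kernel and number of levels are illustrative parameters."""
    d = np.asarray(h1, float) - np.asarray(h2, float)
    total = np.abs(d).sum()                   # level 0: plain L1 distance
    for _ in range(levels):
        if d.size < 2:
            break
        d = np.convolve(d, kernel, mode='same')[::2]   # smooth, downsample
        total += np.abs(d).sum()
    return total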

Similarity Analysis from Limiting Quantum Walks

Abstract
Similarity compression is a critical step to improve the efficiency of edge detection. In this paper, we compare two approaches for compressing/decompressing similarity matrices, with edge detection as our application domain. In this regard, state-of-the-art contour detectors rely on spectral clustering, where pixel or patch similarity is encoded in a symmetric weight matrix and the eigenvectors of the normalized Laplacian derived from this matrix are clustered in order to find contours (normalized cuts and its variants). Despite significant interest in learning the similarity measure for providing well-localized boundaries, the underlying spectral analysis has played a subsidiary role, and has mostly been based on classical random walks and the heat kernel. However, recent findings based on continuous-time quantum walks suggest that under the complex wave equation there are long-range interactions not present in the classical case. This opens up a means of controlling texture in the edge map by simple thresholding. In this paper, we use the long-time averages of quantum walks for edge detection, and show that texture is a consequence of the short-rangedness of these interactions. This is due to the local-to-global property of limiting quantum walks. In addition, when analyzing the role of limiting quantum walks as intermediate/indirect similarity decompression, we find that quantum walks are able to recover the original edge structure when a factorization compressor is used, whereas this is not the case when compression relies on the Szemerédi Regularity Lemma, even though the latter method is far more efficient.
Manuel Curado, Francisco Escolano, Edwin R. Hancock, Farshad Nourbakhsh, Marcello Pelillo

Introducing Negative Evidence in Ensemble Clustering: Application in Automatic ECG Analysis

Abstract
Ensemble clustering generates data partitions by using different data representations and/or clustering algorithms. Each partition provides independent evidence to generate the final partition: two instances falling in the same cluster provide evidence towards them belonging to the same final partition.
In this paper we argue that, for some data representations, the fact that two instances fall in the same cluster of a given partition could provide little to no evidence towards them belonging to the same final partition. However, the fact that they fall in different clusters could provide strong negative evidence of them belonging to the same partition.
Based on this concept, we have developed a new ensemble clustering algorithm which has been applied to the heartbeat clustering problem. By taking advantage of the negative evidence we have decreased the misclassification rate over the MIT-BIH database, the gold standard test for this problem, from 2.25 % to 1.45 %.
David G. Márquez, Ana L. N. Fred, Abraham Otero, Constantino A. García, Paulo Félix
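The evidence-accumulation idea the abstract extends can be sketched as a co-association matrix. This is a simplified illustration, not the paper's exact formulation: the `neg_weight` knob is our hypothetical way of letting split pairs accumulate negative evidence, as argued in the abstract.

```python
import numpy as np

def coassociation(partitions, neg_weight=0.0):
    """Evidence-accumulation matrix over an ensemble of partitions.
    Pairs in the same cluster accumulate +1 per partition; with
    neg_weight > 0, pairs split across clusters accumulate negative
    evidence (neg_weight is a hypothetical knob, not the paper's
    exact formulation)."""
    partitions = [np.asarray(p) for p in partitions]
    n = partitions[0].size
    C = np.zeros((n, n))
    for labels in partitions:
        same = labels[:, None] == labels[None, :]
        C += np.where(same, 1.0, -neg_weight)
    return C / len(partitions)
```

With `neg_weight=0` this reduces to the classical co-association matrix; a positive weight pushes the final consensus partition away from merging pairs that the ensemble consistently separates.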

Dissimilarity Representations for Low-Resolution Face Recognition

Abstract
Low-resolution face recognition is a very difficult problem. In this setup, the training database or gallery contains high-resolution images, but the image to be recognized is of low resolution. Thus we are dealing with a resolution mismatch problem for training and test images. Standard face recognition methods fail in this setting, which suggests that current feature representation approaches are not adequate to cope with this problem. Therefore, we propose the use of dissimilarity representations based on different strategies, which differ in how images with different resolutions are compared, to solve the resolution mismatch problem. Experiments on four standard face datasets demonstrate that a strategy based on first down-scaling and afterwards up-scaling training images while up-scaling test images outperforms all the other approaches.
Mairelys Hernández-Durán, Veronika Cheplygina, Yenisel Plasencia-Calaña

Deep Metric Learning Using Triplet Network

Abstract
Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor-made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.
Elad Hoffer, Nir Ailon
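The distance-comparison objective behind triplet networks can be sketched with the common hinge (margin) form. Note this is a widely used variant, not necessarily the paper's exact loss — Hoffer and Ailon compare the two distances through a softmax-based comparator — and the embedding network itself is omitted here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge form of the triplet objective: the anchor should be
    closer to the positive than to the negative by at least `margin`.
    Inputs are embedding vectors produced by a shared network."""
    a, p, n = (np.asarray(v, float) for v in (anchor, positive, negative))
    d_pos = np.linalg.norm(a - p)
    d_neg = np.linalg.norm(a - n)
    return max(0.0, d_pos - d_neg + margin)
```

Training minimizes this loss over many sampled triplets, so the network is supervised only by relative comparisons ("x is more like y than like z") rather than by explicit class labels.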

Cluster Merging Based on Dominant Sets

Abstract
As an important unsupervised learning approach, clustering is widely used in pattern recognition, information retrieval, image analysis, and related fields. Among the various clustering approaches, graph-based clustering has received much interest and has recently achieved impressive success in applications. However, existing graph-based clustering algorithms usually require some parameters as input in one form or another. In this paper we study the dominant sets clustering algorithm and present a new clustering algorithm that requires no input parameters. We first use histogram equalization to transform the similarity matrices of the data. This transformation is shown to make the clustering results effectively invariant to similarity parameters. Then we merge clusters based on the ratio between intra-cluster and inter-cluster similarity. Our algorithm is shown to be effective in experiments on seven datasets.
Jian Hou, Chunshi Sha, Hongxia Cui, Lei Chi
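Dominant sets, which the paper above builds on, are commonly extracted with discrete replicator dynamics over the affinity matrix. The sketch below shows that standard extraction step only (the histogram-equalization preprocessing and the merging criterion from the abstract are not reproduced here).

```python
import numpy as np

def replicator_dynamics(A, iters=2000, tol=1e-10):
    """Discrete replicator dynamics on a nonnegative, zero-diagonal
    affinity matrix A.  The support of the converged weight vector x
    (its clearly nonzero entries) identifies a dominant set."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                   # start at the barycenter
    for _ in range(iters):
        Ax = A @ x
        denom = x @ Ax                        # average payoff
        if denom <= 0:
            break
        x_new = x * Ax / denom                # payoff-proportional update
        if np.abs(x_new - x).sum() < tol:
            x = x_new
            break
        x = x_new
    return x
```

On an affinity matrix with a tightly coupled pair and one weakly attached element, the dynamics concentrate mass on the pair, which is exactly the cluster-as-dominant-set reading of the equilibrium.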

An Adaptive Radial Basis Function Kernel for Support Vector Data Description

Abstract
For one-class classification or novelty detection, the metric of the feature space is essential for a good performance. Typically, it is assumed that the metric of the feature space is relatively isotropic, or flat, indicating that a distance of 1 can be interpreted in a similar way for every location and direction in the feature space. When this is not the case, thresholds on distances that are fitted in one part of the feature space will be suboptimal for other parts. To avoid this, the idea of this paper is to modify the width parameter in the Radial Basis Function (RBF) kernel for the Support Vector Data Description (SVDD) classifier. Although there have been numerous approaches to learn the metric in a feature space for (supervised) classification problems, for one-class classification this is harder, because the metric cannot be optimized to improve classification performance. Instead, here we propose to consider the local pairwise distances in the training set. The results obtained on both artificial and real datasets demonstrate the ability of the modified RBF kernel to identify local scales in the input data, extracting its general structure and improving the final classification performance for novelty detection problems.
André E. Lazzaretti, David M. J. Tax
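The idea of deriving per-sample kernel widths from local pairwise distances can be sketched with a locally scaled RBF kernel. This is in the spirit of the abstract, not the paper's exact parameterisation: the k-nearest-neighbour width estimate and the s_i·s_j scaling below are illustrative choices.

```python
import numpy as np

def local_widths(X, k=3):
    """Per-sample scale: mean distance to the k nearest neighbours
    (an illustrative local-scale estimate from the training set)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return np.sort(D, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self-distance

def adaptive_rbf(X, k=3):
    """Locally scaled RBF kernel K_ij = exp(-||xi - xj||^2 / (s_i s_j)).
    Dense regions get small widths, sparse regions large ones, so a
    single distance threshold means the same thing everywhere."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    s = local_widths(X, k)
    return np.exp(-(D ** 2) / np.outer(s, s))
```

Such a kernel matrix can be handed to an SVDD (or any kernel one-class model) in place of the fixed-width RBF kernel; the point is that the effective width now adapts to the local density of the training data.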

Robust Initialization for Learning Latent Dirichlet Allocation

Abstract
Latent Dirichlet Allocation (LDA) represents perhaps the most famous topic model, employed in many different contexts in Computer Science. The wide success of LDA is due to the effectiveness of this model in dealing with large datasets, the competitive performances obtained on several tasks (e.g. classification, clustering), and the interpretability of the solution provided. Learning the LDA model from training data usually requires employing iterative optimization techniques such as Expectation-Maximization, for which the choice of a good initialization is of crucial importance to reach an optimal solution. However, even if some clever solutions have been proposed, in practical applications this issue is typically disregarded, and the usual solution is to resort to random initialization.
In this paper we address the problem of initializing the LDA model with two novel strategies: the key idea is to perform repeated learning by employing a topic splitting/pruning strategy, such that each learning phase is initialized with an informative state derived from the previous phase.
The performances of the proposed splitting and pruning strategies have been assessed from a twofold perspective: i) the log-likelihood of the learned model (both on the training set and on a held-out set); ii) the coherence of the learned topics. The evaluation has been carried out on five different datasets, taken from heterogeneous contexts in the literature, showing promising results.
Pietro Lovato, Manuele Bicego, Vittorio Murino, Alessandro Perina

Unsupervised Motion Segmentation Using Metric Embedding of Features

Abstract
Motion segmentation is a well-studied problem in computer vision. Most approaches assume a priori knowledge of the number of moving objects in the scene. In the absence of such information, motion segmentation is generally achieved through brute-force search, e.g., searching over all possible priors or iterating over a search for the most prominent motion. In this paper, we propose an efficient method that achieves motion segmentation over a sequence of frames while estimating the number of moving segments; no prior assumption is made about the structure of the scene. We utilize metric embedding to map a complex graph of image features and their relations into a hierarchically well-separated tree, yielding a simplified topology over which the motions are segmented. Moreover, the method provides a hierarchical decomposition of motion for objects with moving parts.
Yusuf Osmanlıoğlu, Sven Dickinson, Ali Shokoufandeh

Transitive Assignment Kernels for Structural Classification

Abstract
Kernel methods provide a convenient way to apply a wide range of learning techniques to complex and structured data by shifting the representational problem from one of finding an embedding of the data to that of defining a positive semi-definite kernel. One problem with the most widely used kernels is that they neglect the locational information within the structures, resulting in less discrimination. Correspondence-based kernels, on the other hand, are in general more discriminating, at the cost of sacrificing positive-definiteness due to their inability to guarantee transitivity of the correspondences between multiple graphs. In this paper we adopt a general framework for the projection of (relaxed) correspondences onto the space of transitive correspondences, thus transforming any given matching algorithm into a transitive multi-graph matching approach. The resulting transitive correspondences can then be used to provide a kernel that both maintains locational information and is guaranteed to be positive-definite. Experimental evaluation validates the effectiveness of the kernel for several structural classification tasks.
Michele Schiavinato, Andrea Gasparetto, Andrea Torsello

Large Scale Indefinite Kernel Fisher Discriminant

Abstract
Indefinite similarity measures can be frequently found in bioinformatics by means of alignment scores. Lacking an underlying vector space, the data are given as pairwise similarities only. The Indefinite Kernel Fisher Discriminant (iKFD) is a very effective classifier for this type of data but has cubic complexity and does not scale to larger problems. Here we propose an extension of iKFD such that linear runtime and memory complexity is achieved for low-rank indefinite kernels. Evaluation on several larger similarity datasets from various domains shows that the proposed method provides similar generalization capabilities while being substantially faster for large-scale data.
Frank-Michael Schleif, Andrej Gisbrecht, Peter Tino

Similarity-Based User Identification Across Social Networks

Abstract
In this paper we study the identifiability of users across social networks, with a trainable combination of different similarity metrics. This application is becoming particularly interesting as the number and variety of social networks increase and the presence of individuals in multiple networks is becoming commonplace. Motivated by the need to verify information that appears in social networks, as addressed by the research project REVEAL, the presence of individuals in different networks provides an interesting opportunity: we can use information from one network to verify information that appears in another. In order to achieve this, we need to identify users across networks. We approach this problem by a combination of similarity measures that take into account the users’ affiliation, location, professional interests and past experience, as stated in the different networks. We experimented with a variety of combination approaches, ranging from simple averaging to trained hybrid models. Our experiments show that, under certain conditions, identification is possible with sufficiently high accuracy to support the goal of verification.
Katerina Zamani, Georgios Paliouras, Dimitrios Vogiatzis
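The combination scheme described above — per-field similarity measures merged into one matching score — can be sketched minimally. The field names, the Jaccard choice for set-valued fields, and the simple weighted average are all our illustrative assumptions; the paper also considers trained hybrid models in place of fixed weights.

```python
import numpy as np

def jaccard(a, b):
    """Set overlap, e.g. for skills or interests listed in a profile."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def combined_similarity(profile_a, profile_b, metrics, weights):
    """Weighted combination of per-field similarity metrics.  Weights
    could be hand-set (as in the test below) or learned from labelled
    matching/non-matching account pairs."""
    scores = np.array([m(profile_a, profile_b) for m in metrics])
    w = np.asarray(weights, float)
    return float(scores @ w / w.sum())
```

A pair of accounts from two networks is then declared the same user when the combined score clears a threshold tuned on known cross-network matches.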

Dominant-Set Clustering Using Multiple Affinity Matrices

Abstract
Pairwise (or graph-based) clustering algorithms typically assume the existence of a single affinity matrix, which describes the similarity between the objects to be clustered. In many practical applications, however, several similarity relations might be envisaged and the problem arises as to how properly select or combine them. In this paper we offer a solution to this problem for the case of dominant sets, a well-known formalization of the notion of a cluster, which generalizes the notion of maximal clique to edge-weighted graphs and has intriguing connections to evolutionary game theory. Specifically, it has been shown that dominant sets can be bijectively related to Evolutionary Stable Strategies (ESS) - a classic notion of equilibrium in the evolutionary game theory field - of a so-called "clustering game". The clustering game is a non-cooperative game between two players, where the objects to cluster form the set of strategies, while the affinity matrix provides the players' payoffs. The proposed approach generalizes dominant sets to multiple affinities by extending the clustering game, which has a single payoff, to a multi-payoff game. Accordingly, dominant sets in the multi-affinity setting become equivalent to ESSs of a corresponding multi-payoff clustering game, which can be found by means of so-called Biased Replicator Dynamics. Experiments conducted over standard benchmark datasets consistently show that the proposed combination scheme allows one to substantially improve the performance of dominant-set clustering over its single-affinity counterpart.
Eyasu Zemene, Samuel Rota Bulò, Marcello Pelillo

Discovery of Salient Low-Dimensional Dynamical Structure in Neuronal Population Activity Using Hopfield Networks

Abstract
We present here a novel method for the classical task of finding and extracting recurring spatiotemporal patterns in recorded spiking activity of neuronal populations. In contrast to previously proposed methods it does not seek to classify exactly recurring patterns, but rather approximate versions possibly differing by a certain number of missed, shifted or excess spikes. We achieve this by fitting large Hopfield networks to windowed, binned spiking activity in an unsupervised way using minimum probability flow parameter estimation and then collect Hopfield memories over the raw data. This procedure results in a drastic reduction of pattern counts and can be exploited to identify prominently recurring spatiotemporal patterns. Modeling furthermore the sequence of occurring Hopfield memories over the original data as a Markov process, we are able to extract low-dimensional representations of neural population activity on longer time scales. We demonstrate the approach on a data set obtained in rat barrel cortex and show that it is able to extract a remarkably low-dimensional, yet accurate representation of population activity observed during the experiment.
Felix Effenberger, Christopher Hillar

Backmatter
