
About this Book

This accessible text/reference presents a coherent overview of the emerging field of non-Euclidean similarity learning. The book presents a broad range of perspectives on similarity-based pattern analysis and recognition methods, from purely theoretical challenges to practical, real-world applications. The coverage includes both supervised and unsupervised learning paradigms, as well as generative and discriminative models. Topics and features: explores the origination and causes of non-Euclidean (dis)similarity measures, and how they influence the performance of traditional classification algorithms; reviews similarity measures for non-vectorial data, considering both a “kernel tailoring” approach and a strategy for learning similarities directly from training data; describes various methods for “structure-preserving” embeddings of structured data; formulates classical pattern recognition problems from a purely game-theoretic perspective; examines two large-scale biomedical imaging applications.

Table of Contents

Frontmatter

Chapter 1. Introduction: The SIMBAD Project

This introductory chapter describes the SIMBAD project, which represents the first systematic attempt at bringing to full maturation a paradigm shift that is just emerging within the pattern recognition and machine learning domains, where researchers are becoming increasingly aware of the importance of similarity information per se, as opposed to the classical (feature-based) approach.
Marcello Pelillo

Foundational Issues

Frontmatter

Chapter 2. Non-Euclidean Dissimilarities: Causes, Embedding and Informativeness

In many pattern recognition applications, object structure is essential for discrimination. In such cases, researchers often use recognition schemes based on template matching, which lead to the design of non-Euclidean dissimilarity measures. A vector space derived from embedding the dissimilarities is desirable in order to use general classifiers. An isometric embedding of symmetric non-Euclidean dissimilarities results in a pseudo-Euclidean space. More and better tools are available for Euclidean spaces, but these are not fully consistent with the given dissimilarities.
In this chapter, first a review is given of the various embedding procedures for the pairwise dissimilarity data. Next the causes are analyzed for the existence of non-Euclidean dissimilarity measures. Various ways are discussed in which the measures are converted into Euclidean ones. The purpose is to investigate whether the original non-Euclidean measures are informative or not. A positive conclusion is derived as examples can be constructed and found in real data for which the non-Euclidean characteristics of the data are essential for building good classifiers. (This chapter is based on previous publications by the authors, (Duin and Pękalska in Proc. SSPR & SPR 2010 (LNCS), pp. 324–333, 2010 and in CIARP (LNCS), pp. 1–24, 2011; Duin in ICEIS, pp. 15–28, 2010 and in ICPR, pp. 1–4, 2008; Duin et al. in SSPR/SPR, pp. 551–561, 2008; Pękalska and Duin in IEEE Trans. Syst. Man Cybern., Part C, Appl. Rev. 38(6):729–744, 2008) and contains text, figures, equations, and experimental results taken from these papers.)
Robert P. W. Duin, Elżbieta Pękalska, Marco Loog
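The pseudo-Euclidean embedding discussed in this chapter can be sketched in a few lines: double-center the squared dissimilarities, eigendecompose, and split the spectrum into positive and negative parts, whose dimensions give the signature (p, q). The following is a minimal illustration assuming NumPy, not the authors' implementation:

```python
import numpy as np

def pseudo_euclidean_embedding(D):
    """Embed a symmetric dissimilarity matrix D into a pseudo-Euclidean space.

    Double-centers the squared dissimilarities, eigendecomposes the result,
    and keeps separate positive and negative coordinate blocks. The signature
    (p, q) counts the positive and negative eigenvalue directions.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram-like matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(-np.abs(w))           # strongest axes first
    w, V = w[order], V[:, order]
    keep = np.abs(w) > 1e-9                  # drop numerically zero axes
    w, V = w[keep], V[:, keep]
    X = V * np.sqrt(np.abs(w))               # embedded coordinates (rows)
    signature = (int(np.sum(w > 0)), int(np.sum(w < 0)))
    return X, np.sign(w), signature
```

As a sanity check, a point placed at distance 1 from all three vertices of an equilateral triangle of side 2 cannot exist in any Euclidean space (the circumradius is 2/√3 ≈ 1.15), so embedding such a dissimilarity matrix yields a nonzero negative part of the signature, while ordinary Euclidean distance matrices yield q = 0.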

Chapter 3. SIMBAD: Emergence of Pattern Similarity

A theory of pattern analysis has to suggest criteria for how patterns in data can be defined in a meaningful way and how they should be compared. Similarity-based pattern analysis and recognition is expected to adhere to fundamental principles of the scientific process, namely the expressiveness of models and the reproducibility of their inference. Patterns are assumed to be elements of a pattern space or hypothesis class, and the data provide “information” about which of these patterns should be used to interpret the data. The mapping between data and patterns is constructed by an inference algorithm, in particular by a cost minimization process. Fluctuations in the data usually limit the precision with which we can uniquely identify a single pattern as the interpretation of the data. We advocate an information-theoretic perspective on pattern analysis to resolve this dilemma: the tradeoff between the informativeness of statistical inference and its stability mirrors the information-theoretic optimum of high information rate at zero communication error. The inference algorithm is considered as a noisy channel, which naturally limits the resolution of the pattern space given the uncertainty of the data.
Joachim M. Buhmann

Deriving Similarities for Non-vectorial Data

Frontmatter

Chapter 4. On the Combination of Information-Theoretic Kernels with Generative Embeddings

Classical methods to obtain classifiers for structured objects (e.g., sequences, images) are based on generative models and adopt a classical generative Bayesian framework. To embrace discriminative approaches (namely, support vector machines), the objects have to be mapped/embedded into a Hilbert space; one proposed way to carry out such an embedding is via generative models (possibly learned from the data). This type of hybrid discriminative/generative approach has recently been shown to outperform classifiers obtained directly from the generative model upon which the embedding is built.
Discriminative approaches based on generative embeddings involve two key components: a generative model used to define the embedding, and a discriminative learning algorithm used to obtain a (possibly kernel-based) classifier. The literature on generative embeddings is essentially focused on defining the embedding, while a standard off-the-shelf kernel and learning algorithm are usually adopted. Recently, we have proposed a different approach that exploits the probabilistic nature of generative embeddings by using information-theoretic kernels defined on probability distributions. In this chapter, we review this approach and its building blocks, and illustrate its performance on two medical applications.
Pedro M. Q. Aguiar, Manuele Bicego, Umberto Castellani, Mário A. T. Figueiredo, André T. Martins, Vittorio Murino, Alessandro Perina, Aydın Ulaş
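One well-known information-theoretic kernel on probability distributions is based on the Jensen–Shannon divergence. The following is a minimal, hypothetical sketch on discrete distributions (assuming NumPy); the chapter covers a broader family of kernels:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats; zero-probability entries contribute nothing."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    return entropy(m) - 0.5 * entropy(p) - 0.5 * entropy(q)

def js_kernel(p, q):
    """Jensen-Shannon kernel: ln 2 minus the JS divergence.

    Symmetric, bounded in [0, ln 2], and maximal when p == q.
    """
    return np.log(2.0) - js_divergence(p, q)
```

Because the JS divergence is bounded by ln 2, the kernel value is ln 2 for identical distributions and drops to 0 for distributions with disjoint support.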

Chapter 5. Learning Similarities from Examples Under the Evidence Accumulation Clustering Paradigm

The SIMBAD project puts forward a unified theory of data analysis under a (dis)similarity-based object representation framework. Our work builds on the duality of probabilistic and similarity notions in pairwise object comparison. We address the Evidence Accumulation Clustering paradigm as a means of learning pairwise similarities between objects, summarized in a co-association matrix. We show the dual similarity/probabilistic interpretation of the co-association matrix and exploit this duality in coherent consensus clustering methods, either by exploring embeddings over the learned pairwise similarities, in an attempt to better highlight the clustering structure of the data, or by means of a unified probabilistic approach leading to soft assignments of objects to clusters.
Ana L. N. Fred, André Lourenço, Helena Aidos, Samuel Rota Bulò, Nicola Rebagliati, Mário A. T. Figueiredo, Marcello Pelillo
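The co-association matrix at the heart of Evidence Accumulation Clustering is simple to compute from a clustering ensemble; a minimal sketch assuming NumPy:

```python
import numpy as np

def coassociation_matrix(partitions):
    """Learn a pairwise similarity from a clustering ensemble.

    partitions: list of label arrays, one per base clustering.
    Returns C with C[i, j] = fraction of partitions in which
    objects i and j fall in the same cluster.
    """
    partitions = [np.asarray(p) for p in partitions]
    n = len(partitions[0])
    C = np.zeros((n, n))
    for labels in partitions:
        # co-membership indicator for this partition
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(partitions)
```

A consensus partition can then be obtained, for instance, by running a hierarchical clustering algorithm on the dissimilarity 1 − C.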

Embedding and Beyond

Frontmatter

Chapter 6. Geometricity and Embedding

In this chapter, we compare and contrast two approaches to the problem of embedding non-Euclidean data, namely geometric and structure-preserving embedding. Under the first heading, we explore how spherical embedding can be used to embed data onto the surface of a sphere of optimal radius. Here we explore both elliptic and hyperbolic geometries, i.e., positive and negative curvatures. Our results on synthetic and real data show that elliptic embedding performs well under noisy conditions and can deliver low-distortion embeddings for a wide variety of datasets. Hyperbolic data seem to be much less common (at least in our datasets) and are more difficult to embed accurately. Under the second heading, we show how the Ihara zeta function can be used to embed hypergraphs in a manner that reflects their underlying relational structure. Specifically, we show how a polynomial characterization derived from the Ihara zeta function leads to an embedding which captures the prime cycle structure of the hypergraphs.
Peng Ren, Furqan Aziz, Lin Han, Eliza Xu, Richard C. Wilson, Edwin R. Hancock
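For the elliptic case, the core idea can be illustrated compactly: on a sphere of radius r, geodesic distances satisfy d_ij = r·θ_ij, so Z = cos(D/r) should be the positive semidefinite Gram matrix of unit position vectors, which can be factored by eigendecomposition. The following is a minimal sketch assuming NumPy with a fixed radius; the chapter additionally treats the choice of an optimal r:

```python
import numpy as np

def spherical_embedding(D, r):
    """Embed geodesic dissimilarities D onto a sphere of radius r.

    On a sphere, d_ij = r * angle_ij, so cos(D / r) should equal the Gram
    matrix of the unit position vectors. We factor it via eigendecomposition,
    clipping small negative eigenvalues that arise from noise.
    """
    Z = np.cos(D / r)
    w, V = np.linalg.eigh(Z)
    w = np.clip(w, 0.0, None)        # keep only the embeddable (PSD) part
    X = V * np.sqrt(w)               # rows are (approximately) unit vectors
    return r * X                     # scale onto the sphere of radius r
```

If the input distances are exactly spherical for the chosen r, the rows of the result lie on the sphere and their pairwise geodesic distances reproduce D; otherwise the clipped eigenvalues measure the embedding distortion.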

Chapter 7. Structure Preserving Embedding of Dissimilarity Data

Partitioning methods for observations represented by pairwise dissimilarities are studied. Particular emphasis is put on their properties when applied to dissimilarity matrices that do not admit a loss-free embedding into a vector space. Specifically, the Pairwise Clustering cost function is shown to exhibit a shift invariance property which basically means that any symmetric dissimilarity matrix can be modified to allow a vector-space representation without distorting the optimal group structure. In an approximate sense, the same holds true for a probabilistic generalization of Pairwise Clustering, the so-called Wishart–Dirichlet Cluster Process. This shift-invariance property essentially means that these clustering methods are “blind” against Euclidean or metric violations. From the application side, such blindness against metric violations might be seen as a highly desired feature, since it broadens the applicability of certain algorithms. From the viewpoint of theory building, however, the same property might be viewed as a “negative” result, since studying these algorithms will not lead to any new insights on the role of metricity in clustering problems.
Volker Roth, Thomas J. Fuchs, Julia E. Vogt, Sandhya Prabhakaran, Joachim M. Buhmann
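The shift-invariance property is easy to verify numerically: adding a constant c to every off-diagonal dissimilarity changes the Pairwise Clustering cost of any partition of n objects into K clusters by exactly c(n − K), so the ranking of partitions with the same K is untouched. A small sketch assuming NumPy, with the cost written as the sum of average within-cluster dissimilarities:

```python
import numpy as np

def pairwise_clustering_cost(D, labels):
    """Sum over clusters of the within-cluster dissimilarities,
    each normalized by the cluster size."""
    cost = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        cost += D[np.ix_(idx, idx)].sum() / len(idx)
    return cost

def off_diagonal_shift(D, c):
    """Add a constant c to every off-diagonal entry of D."""
    n = D.shape[0]
    return D + c * (np.ones((n, n)) - np.eye(n))
```

For a cluster of size m, the shift adds c·m(m−1)/m = c(m−1) to its term, which sums to c(n − K) over all clusters; the cost difference between any two partitions with equal K is therefore unchanged, which is exactly the "blindness" against metric violations discussed above.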

Chapter 8. A Game-Theoretic Approach to Pairwise Clustering and Matching

Clustering refers to the process of extracting maximally coherent groups from a set of objects using pairwise, or higher-order, similarities. Traditional approaches to this problem are based on the idea of partitioning the input data into a predetermined number of classes, thereby obtaining the clusters as a by-product of the partitioning process. In this chapter, we provide a brief review of our recent work, which offers a radically different view of the problem and allows one to work directly on non-(geo)metric data. In contrast to the classical approach, we attempt to provide a meaningful formalization of the very notion of a cluster in the presence of non-metric (even asymmetric and/or negative) (dis)similarities, and show that game theory offers an attractive and unexplored perspective that serves our purpose well. To this end, we formulate the clustering problem in terms of a non-cooperative “clustering game” and show that a natural notion of a cluster turns out to be equivalent to a classical (evolutionary) game-theoretic equilibrium concept. Besides the game-theoretic perspective, we also exhibit characterizations of our cluster notion in terms of optimization theory and graph theory. As for algorithmic issues, we describe two approaches to finding equilibria of a clustering game: the first is based on the classical replicator dynamics from evolutionary game theory; the second is a novel class of dynamics, inspired by infection and immunization processes, which overcomes the limitations of replicator dynamics. Finally, we show applications of the proposed framework to matching problems, where we aim at finding correspondences within a set of elements. In particular, we address the problems of point-pattern matching and surface registration.
Marcello Pelillo, Samuel Rota Bulò, Andrea Torsello, Andrea Albarelli, Emanuele Rodolà
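The first of the two algorithmic approaches, the discrete-time replicator dynamics, is compact enough to sketch. The following is a minimal illustration assuming NumPy and a nonnegative symmetric payoff (similarity) matrix with zero diagonal, not the chapter's full framework:

```python
import numpy as np

def replicator_dynamics(A, x=None, iters=1000, tol=1e-10):
    """Discrete-time replicator dynamics for a clustering game with
    payoff (similarity) matrix A.

    Starting from the barycenter of the simplex, the dynamics converge
    to an equilibrium whose support identifies a coherent cluster.
    """
    n = A.shape[0]
    if x is None:
        x = np.full(n, 1.0 / n)      # barycenter of the simplex
    for _ in range(iters):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)    # multiplicative update, stays on simplex
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x
```

On a similarity matrix containing one tightly connected group and a few weakly connected outliers, the stationary point concentrates its mass on the tight group, so the cluster is read off from the support of x rather than from a partition of all objects.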

Applications

Frontmatter

Chapter 9. Automated Analysis of Tissue Micro-Array Images on the Example of Renal Cell Carcinoma

Automated tissue micro-array analysis is a challenging problem in computational pathology. The detection of cell nuclei, their classification into malignant and benign, and the evaluation of their protein expression pattern by immunohistochemical staining are crucial routine steps in human cancer research and oncology. Computational assistance in this field can greatly accelerate the processing of high-throughput patient data and improve the reproducibility and objectivity of qualitative and quantitative measures. In this chapter, we describe an automated pipeline for staining estimation in tissue micro-array images, comprising nucleus detection, nucleus segmentation, nucleus classification, and staining estimation among cancerous nuclei. This pipeline is a practical example of the importance of non-metric effects in this kind of image analysis; for instance, the use of shape information and non-Euclidean kernels significantly improves nucleus classification performance. The pipeline is explained and validated on a renal clear cell carcinoma dataset with MIB-1 stained tissue micro-array images and survival data of 133 patients. Furthermore, the pipeline is implemented for medical use and research purposes in the free program TMARKER.
Peter J. Schüffler, Thomas J. Fuchs, Cheng Soon Ong, Volker Roth, Joachim M. Buhmann

Chapter 10. Analysis of Brain Magnetic Resonance (MR) Scans for the Diagnosis of Mental Illness

We address the problem of schizophrenia detection by analyzing magnetic resonance imaging (MRI) scans. Mental illnesses such as schizophrenia or bipolar disorder are traditionally diagnosed by self-reports and behavioral observations. A new trend in neuroanatomical research consists of using MRI images to find possible connections between cognitive impairments and neuro-physiological abnormalities; indeed, brain imaging techniques are appealing because they may provide a non-invasive diagnostic tool suitable for mass analyses and early diagnoses. The problem is challenging due to the heterogeneous behavior of the disease, and although the literature in this field is large, there is as yet no consolidated framework for dealing with it. In this context, advanced pattern recognition and machine learning techniques can help to improve the automation of the involved procedures and the characterization of mental illnesses with specific and detectable brain abnormalities. In this chapter, we exploit similarity-based pattern recognition techniques to improve the brain classification problem by employing the algorithms developed in the other chapters of this book. (This chapter is based on previous works (Castellani et al. in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI’11, vol. 6892, pp. 426–433, 2011; Gönen et al. in Proceedings of the International Workshop on Similarity-Based Pattern Analysis, SIMBAD’11, vol. 7005, pp. 250–260, 2011; Ulaş et al. in Proceedings of the Iberoamerican Congress on Pattern Recognition, CIARP’11, vol. 7042, pp. 491–498, 2011; in IAPR International Conference on Pattern Recognition in Bioinformatics, PRIB’11, vol. 7036, pp. 306–317, 2011; and in Int. J. Imaging Syst. Technol. 21(2):179–192, 2011) by the authors and contains text, equations and experimental results taken from these papers.)
Aydın Ulaş, Umberto Castellani, Manuele Bicego, Vittorio Murino, Marcella Bellani, Michele Tansella, Paolo Brambilla

Backmatter
