
2013 | Book

Graph Embedding for Pattern Analysis


About this book

Graph Embedding for Pattern Analysis covers theory, methods, computation, and applications widely used in statistics, machine learning, image processing, and computer vision. The book presents the latest advances in graph embedding theories, such as nonlinear manifold graphs, linearization methods, graph-based subspace analysis, L1 graphs, hypergraphs, undirected graphs, and graphs in vector spaces. Real-world applications of these theories span dimensionality reduction, subspace learning, manifold learning, clustering, classification, and feature selection. A select group of experts contributed the chapters of this book, which provides a comprehensive perspective on the field.

Table of Contents

Frontmatter
Multilevel Analysis of Attributed Graphs for Explicit Graph Embedding in Vector Spaces
Abstract
The ability to recognize patterns is among the most crucial capabilities of human beings for their survival, enabling them to employ their sophisticated neural and cognitive systems [1] to process complex audio, visual, smell, touch, and taste signals. Humans are the most complex and best existing pattern recognition systems. Without any explicit thinking, we continuously compare, classify, and identify huge amounts of signal data every day [2], from the time we get up in the morning until the moment we fall asleep. This includes recognizing the face of a friend in a crowd, a spoken word embedded in noise, the proper key to lock the door, the smell of coffee, the voice of a favorite singer, alphabetic characters, and countless other tasks that we perform on a regular basis.
Muhammad Muzzamil Luqman, Jean-Yves Ramel, Josep Lladós
Feature Grouping and Selection Over an Undirected Graph
Abstract
High-dimensional regression/classification is challenging due to the curse of dimensionality. Lasso [18] and its various extensions [10], which can simultaneously perform feature selection and regression/classification, have received increasing attention in this situation. However, in the presence of highly correlated features, lasso tends to select only one of those features, resulting in suboptimal performance [25]. Several methods have been proposed in the literature to address this issue. Shen and Ye [15] introduce an adaptive model selection procedure that corrects the estimation bias through a data-driven penalty based on generalized degrees of freedom.
Sen Yang, Lei Yuan, Ying-Cheng Lai, Xiaotong Shen, Peter Wonka, Jieping Ye
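
To make the correlated-feature behaviour described in this abstract concrete, here is a minimal sketch in Python using scikit-learn's Lasso (the library choice and parameter values are illustrative assumptions, not taken from the chapter):

```python
# Minimal sketch (not from the chapter): lasso tends to keep only one
# of a pair of highly correlated features.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # x2 is almost identical to x1
x3 = rng.normal(size=n)               # an unrelated feature
X = np.column_stack([x1, x2, x3])
y = x1 + x2 + 0.1 * rng.normal(size=n)

model = Lasso(alpha=0.1).fit(X, y)
print(model.coef_)  # typically one of the first two coefficients is (near) zero
```

With two nearly identical predictors, the fitted model typically assigns (near-)zero weight to one of them, which is exactly the suboptimality [25] that graph-based grouping methods aim to address.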
Median Graph Computation by Means of Graph Embedding into Vector Spaces
Abstract
In pattern recognition [8, 14], a key issue to be addressed when designing a system is how to represent input patterns. Feature vectors are a common option: a set of numerical features describing relevant properties of the pattern is computed and arranged in vector form. The main advantages of this kind of representation are computational simplicity and a sound mathematical foundation; a large number of operations are available for working with vectors, and a large repository of algorithms for pattern analysis and classification exists. However, the simple structure of feature vectors might not be the best option for complex patterns, where non-numerical features or relations between different parts of the pattern become relevant.
Miquel Ferrer, Itziar Bardají, Ernest Valveny, Dimosthenis Karatzas, Horst Bunke
Patch Alignment for Graph Embedding
Abstract
Dozens of manifold learning-based dimensionality reduction algorithms have been proposed in the literature. The most representative ones are locally linear embedding (LLE) [65], ISOMAP [76], Laplacian eigenmaps (LE) [4], Hessian eigenmaps (HLLE) [20], and local tangent space alignment (LTSA) [102]. LLE represents the local geometry with the linear coefficients that reconstruct each example from its neighbors, and then seeks a low-dimensional embedding in which these coefficients remain suitable for reconstruction. ISOMAP preserves the global geodesic distances between all pairs of examples.
Yong Luo, Dacheng Tao, Chao Xu
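
As a concrete point of reference for the algorithms surveyed above, here is a minimal sketch using scikit-learn's reference implementations of LLE and ISOMAP (an assumption for illustration; the chapter's patch alignment framework generalizes these methods):

```python
# Minimal sketch: LLE vs. ISOMAP on a standard synthetic manifold.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# LLE: reconstruct each point from its neighbors, then preserve the
# reconstruction coefficients in a low-dimensional embedding.
X_lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)

# ISOMAP: preserve geodesic distances between all pairs of points.
X_iso = Isomap(n_neighbors=12, n_components=2).fit_transform(X)

print(X_lle.shape, X_iso.shape)  # (1000, 2) (1000, 2)
```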
Improving Classifications Through Graph Embeddings
Abstract
Unsupervised classification identifies similar entities in a dataset and is used extensively in many application domains such as spam filtering [5], medical diagnosis [15], and demographic research [13]. Unsupervised classification using K-Means generally clusters data based on (1) distance-based attributes of the dataset [4, 16, 17, 23] or (2) combinatorial properties of a weighted graph representation of the dataset [8].
Anirban Chatterjee, Sanjukta Bhowmick, Padma Raghavan
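
A minimal sketch of the two clustering regimes mentioned in the abstract, using scikit-learn (library and parameter choices are assumptions for illustration): K-Means on the raw attributes versus partitioning a weighted k-NN graph built from the same data.

```python
# Minimal sketch: attribute-based K-Means vs. clustering a weighted-graph
# representation of the same dataset.
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# (1) K-Means directly on the distance-based attributes.
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# (2) Spectral clustering partitions a weighted k-NN graph built over the
# data instead of clustering the raw coordinates.
sc_labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                               n_neighbors=10, random_state=0).fit_predict(X)

print(km_labels[:10], sc_labels[:10])
```

On non-convex shapes like the two moons, the graph-based partition usually separates the clusters that distance-based K-Means mixes, which illustrates why the choice of representation matters.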
Learning with ℓ 1-Graph for High Dimensional Data Analysis
Abstract
An informative graph, directed or undirected, is critical for graph-oriented algorithms designed for data analysis, such as clustering, subspace learning, and semi-supervised learning. Data clustering often starts with a pairwise similarity graph and then translates into a graph partition problem [19]; thus the quality of the graph essentially determines the clustering quality.
Jianchao Yang, Bin Cheng, Shuicheng Yan, Yun Fu, Thomas Huang
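
The ℓ1-graph construction can be sketched roughly as follows: each sample is coded as a sparse linear combination of the remaining samples, and the coefficient magnitudes serve as edge weights for a subsequent graph partition. This is an illustrative approximation (normalization and parameters are assumptions, not the chapter's exact formulation):

```python
# Rough sketch of an l1-graph: sparse-code each sample over the others,
# then spectrally partition the resulting similarity graph.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs
from sklearn.linear_model import Lasso

X, _ = make_blobs(n_samples=60, n_features=30, centers=3, random_state=0)
X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm samples
n = X.shape[0]
W = np.zeros((n, n))

for i in range(n):
    idx = [j for j in range(n) if j != i]
    # Code sample i as a sparse linear combination of all other samples.
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
    coef = lasso.fit(X[idx].T, X[i]).coef_
    W[i, idx] = np.abs(coef)  # coefficient magnitudes become edge weights

W = 0.5 * (W + W.T)  # symmetrize into an undirected similarity graph
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)
```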
Graph-Embedding Discriminant Analysis on Riemannian Manifolds for Visual Recognition
Abstract
Recently, several studies have utilised non-Euclidean geometry to address a range of computer vision problems, including object tracking [17], characterising the diffusion of water molecules as in diffusion tensor imaging [24], face recognition [23, 31], human re-identification [4], texture classification [16], pedestrian detection [39], and action recognition [22, 43].
Sareh Shirazi, Azadeh Alavi, Mehrtash T. Harandi, Brian C. Lovell
A Flexible and Effective Linearization Method for Subspace Learning
Abstract
Over the past few decades, a large number of subspace learning or dimension reduction methods [2, 16, 20, 32, 34, 37, 44] have been proposed. Principal component analysis (PCA) [32] pursues the directions of maximum variance for optimal reconstruction. Linear discriminant analysis (LDA) [2], as a supervised algorithm, aims to maximize the inter-class scatter and at the same time minimize the intra-class scatter. Owing to its use of label information, LDA has been experimentally reported to outperform PCA for face recognition when sufficient labeled face images are provided [2].
Feiping Nie, Dong Xu, Ivor W. Tsang, Changshui Zhang
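
A minimal sketch contrasting the unsupervised PCA projection with the supervised LDA projection, using scikit-learn implementations (assumed here for illustration; the chapter's linearization method is more general):

```python
# Minimal sketch: PCA ignores labels, LDA uses them to shape the projection.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# PCA: directions of maximum variance, labels ignored.
X_pca = PCA(n_components=2).fit_transform(X)

# LDA: maximize inter-class scatter while minimizing intra-class scatter;
# at most (n_classes - 1) = 2 discriminant directions here.
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # (150, 2) (150, 2)
```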
A Multi-graph Spectral Framework for Mining Multi-source Anomalies
Abstract
Anomaly detection refers to the task of detecting objects whose characteristics deviate significantly from the majority of the data [5]. It is widely used in a variety of domains, such as intrusion detection, fraud detection, and health monitoring. Today's information explosion creates significant challenges for anomaly detection, as data increasingly reside in many large, distributed repositories spanning a variety of sources and formats.
Jing Gao, Nan Du, Wei Fan, Deepak Turaga, Srinivasan Parthasarathy, Jiawei Han
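
As a baseline for the definition above, a minimal single-source sketch (the chapter's multi-graph spectral framework is considerably more involved): flag objects whose characteristics deviate strongly from the majority.

```python
# Minimal single-source anomaly detection sketch via z-scores.
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 500),   # majority behaviour
                       rng.normal(8.0, 0.5, 5)])    # a few injected anomalies

z = np.abs(data - data.mean()) / data.std()
print(np.where(z > 3.0)[0])  # indices whose z-score marks them as outliers
```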
Graph Embedding for Speaker Recognition
Abstract
This chapter presents applications of graph embedding to the problem of text-independent speaker recognition. Speaker recognition is a general term encompassing multiple applications. At the core is the problem of speaker comparison—given two speech recordings (utterances), produce a score which measures speaker similarity. Using speaker comparison, other applications can be implemented—speaker clustering (grouping similar speakers in a corpus), speaker verification (verifying a claim of identity), speaker identification (identifying a speaker out of a list of potential candidates), and speaker retrieval (finding matches to a query set).
Z. N. Karam, W. M. Campbell
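
A minimal sketch of the core speaker-comparison step: given two fixed-length utterance representations (assumed precomputed here; the chapter discusses how such representations and scores arise), produce a similarity score. Cosine scoring is one common choice:

```python
# Minimal sketch: score speaker similarity between two utterance embeddings.
import numpy as np

def speaker_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two fixed-length utterance embeddings."""
    return float(emb_a @ emb_b /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

rng = np.random.default_rng(0)
utt_a = rng.normal(size=400)
utt_b = utt_a + 0.1 * rng.normal(size=400)  # same "speaker", perturbed
utt_c = rng.normal(size=400)                # a different "speaker"
print(speaker_similarity(utt_a, utt_b))  # close to 1
print(speaker_similarity(utt_a, utt_c))  # close to 0
```

Thresholding such a score gives verification; ranking it over a candidate list gives identification and retrieval; and grouping by it gives clustering, matching the applications listed in the abstract.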
Metadata
Title
Graph Embedding for Pattern Analysis
Edited by
Yun Fu
Yunqian Ma
Copyright Year
2013
Publisher
Springer New York
Electronic ISBN
978-1-4614-4457-2
Print ISBN
978-1-4614-4456-5
DOI
https://doi.org/10.1007/978-1-4614-4457-2
