Neighborhood discriminant projection for face recognition
Introduction
Face recognition has become one of the most challenging problems in computer vision and pattern recognition (Chellappa et al., 1995, Zhao et al., 2003, Li and Jain, 2005). Numerous methods have been proposed for face recognition over the past few decades (Zhao et al., 2003, Turk and Pentland, 1991, Belhumeur et al., 1997, Cevikalp et al., 2005, Huang et al., 2002). Among these methods, Principal Component Analysis (PCA) (Turk and Pentland, 1991) and Linear Discriminant Analysis (LDA) (Belhumeur et al., 1997) are the most popular techniques, which assume that the samples lie on a linearly embedded manifold and aim at preserving the global Euclidean structure of the image space.
However, considerable research has shown that facial images likely lie on a nonlinear submanifold (Roweis and Saul, 2000, Tenenbaum et al., 2000, He and Niyogi, 2003, He et al., 2005b, Chen et al., 2005, Hu et al., 2004, Yang, 2002b). When PCA and LDA are used for dimensionality reduction, they fail to discover the intrinsic structure of the image space. Recently, a number of nonlinear techniques originating from manifold learning have been proposed to discover the nonlinear structure of the manifold by investigating the local geometry of samples, such as LLE (Roweis and Saul, 2000), Isomap (Tenenbaum et al., 2000) and Laplacian Eigenmap (Belkin and Niyogi, 2001). These methods are well suited to representing nonlinear data, but they are defined only on the training data. Because it is difficult to map a new test sample to the low-dimensional space, these nonlinear manifold learning algorithms cannot be applied directly to classification problems. Although kernel-based methods, such as Kernel PCA (Schölkopf et al., 1998) and Kernel LDA (Yang, 2002a), can deal with nonlinear data and evaluate the map on new test samples, they do not explicitly consider the structure of the nonlinear manifold of the data and are computationally expensive, which is undesirable in practical face recognition systems.
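Since LLE's reconstruction weights reappear below as the basis for NDP's within-class weights, a minimal sketch of that step may help. This is an illustrative NumPy implementation, not code from the paper; the function name, the neighborhood size `k`, and the regularization constant are assumptions made for the sketch:

```python
import numpy as np

def lle_weights(X, k=5, reg=1e-3):
    # For each point, solve for the weights that best reconstruct it
    # from its k nearest neighbors (Roweis & Saul, 2000).
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        d2 = np.sum((X - X[i]) ** 2, axis=1)
        nbrs = np.argsort(d2)[1:k + 1]          # skip the point itself
        Z = X[nbrs] - X[i]                      # centered neighbors
        G = Z @ Z.T                             # local Gram matrix
        G += reg * (np.trace(G) + 1.0) * np.eye(k)  # regularize if singular
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                # weights sum to one per row
    return W
```

Each row of `W` is a local, translation-invariant encoding of one sample's neighborhood geometry, which is what manifold methods in this family try to preserve after projection.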
Recently, some manifold-based algorithms (He and Niyogi, 2003, He et al., 2003, He et al., 2005a, He et al., 2005b, Athitsos et al., 2004) have resolved the difficulty of mapping a new test sample to the low-dimensional space. However, these methods are designed to preserve the locality of samples in the lower-dimensional space rather than to achieve good discrimination ability. As a result, the projected vectors of different classes may overlap. Only a few manifold learning algorithms thoroughly consider the within-class and between-class information to address classification problems, including (Yan et al., 2004, Yan et al., 2005, Chen et al., 2005, Yang, 2002b). Among these, Local Discriminant Embedding (LDE) (Chen et al., 2005) and Marginal Fisher Analysis (MFA) (Yan et al., 2005) are representative. However, LDE and MFA both use PCA for dimensionality reduction to deal with the small sample size problem, which may result in the loss of important discriminative information.
To overcome this shortcoming, we propose a novel manifold learning algorithm, called Neighborhood Discriminant Projection (NDP), which explicitly considers the within-class submanifold and the between-class submanifold by integrating the within-class and between-class neighboring information. The aim of NDP is to preserve the within-class neighboring geometry of the image space while keeping the projected vectors of samples from different classes far apart. Specifically, the within-class submanifold is modeled by the within-class affinity weights of the samples, which give an optimal representation of the intrinsic neighboring geometry and are computed following LLE (Roweis and Saul, 2000). The between-class submanifold is modeled by the between-class affinity weights of the samples, which reflect the similarity between samples of different classes and can be obtained following Laplacian Eigenmap (Belkin and Niyogi, 2001). Because it thoroughly considers both the nonlinear within-class submanifold and the between-class submanifold, the obtained linear subspace not only preserves the within-class neighboring geometry but also separates the projected vectors of samples from different classes.
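The two-graph construction described above can be sketched schematically. The paper's exact objective function appears in its Section 2, which is not reproduced here, so the sketch below assumes a standard graph-embedding formulation: LLE-style reconstruction weights over same-class neighbors, heat-kernel affinities over different-class neighbors, and a generalized eigenproblem trading the two off. All names, parameter values, and the regularization terms are illustrative assumptions:

```python
import numpy as np

def ndp_sketch(X, y, k=3, t=1.0, dim=2, reg=1e-6):
    # Schematic NDP-style projection (not the paper's exact objective):
    # within-class LLE weights, between-class heat-kernel weights,
    # then a generalized eigenproblem for the projection directions.
    n, d = X.shape
    Ww = np.zeros((n, n))
    Wb = np.zeros((n, n))
    for i in range(n):
        d2 = np.sum((X - X[i]) ** 2, axis=1)
        order = np.argsort(d2)
        same = [j for j in order if j != i and y[j] == y[i]][:k]
        diff = [j for j in order if y[j] != y[i]][:k]
        if same:
            Z = X[same] - X[i]                      # centered same-class neighbors
            G = Z @ Z.T
            G += 1e-3 * (np.trace(G) + 1.0) * np.eye(len(same))
            w = np.linalg.solve(G, np.ones(len(same)))
            Ww[i, same] = w / w.sum()               # LLE reconstruction weights
        Wb[i, diff] = np.exp(-d2[diff] / t)         # heat-kernel affinities
    M = (np.eye(n) - Ww).T @ (np.eye(n) - Ww)       # within-class locality (as in LLE/NPE)
    S = (Wb + Wb.T) / 2.0
    Lb = np.diag(S.sum(axis=1)) - S                 # between-class graph Laplacian
    # Maximize between-class spread subject to within-class geometry:
    # solve X^T Lb X a = lambda (X^T M X + reg I) a for top eigenvectors.
    A = X.T @ Lb @ X
    B = X.T @ M @ X + reg * np.eye(d)
    L = np.linalg.cholesky(B)                       # whiten with B = L L^T
    Li = np.linalg.inv(L)
    vals, U = np.linalg.eigh(Li @ A @ Li.T)
    return (Li.T @ U)[:, ::-1][:, :dim]             # columns = projection directions
```

The key design point mirrored here is that the two graphs play opposing roles: the within-class matrix `M` appears in the constraint (geometry to preserve), while the between-class Laplacian `Lb` appears in the objective (separation to maximize).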
The rest of the paper is organized as follows: the Neighborhood Discriminant Projection (NDP) algorithm is described in Section 2. In Section 3, experimental results are presented to demonstrate the effectiveness and robustness of NDP. Finally, conclusions are summarized in Section 4.
Neighborhood discriminant projection (NDP)
For convenience of understanding, in the following, small italic letters denote scalars, such as a, b, c; small bold non-italic letters denote vectors, such as a, b, c; and capital bold non-italic letters denote matrices, such as A, B, C. Suppose we have n samples belonging to c classes of faces, each with a known class label. Let the number of samples in the ith class be ni, and let x_j^(i) denote the jth sample in the ith class.
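The notation above can be made concrete with a hypothetical toy setup; all sizes and values here are made up for illustration and do not come from the paper:

```python
import numpy as np

# Toy data laid out according to the notation: n samples in c classes,
# n_i samples in class i, and x_j^(i) the jth sample of class i.
rng = np.random.default_rng(0)
c, d = 3, 4                                  # number of classes, feature dim
counts = [5, 6, 4]                           # n_i for i = 1..c
X = rng.standard_normal((sum(counts), d))    # one sample per row
y = np.repeat(np.arange(c), counts)          # class label of each row
n = len(y)                                   # total number of samples: 15
x_2_1 = X[y == 0][1]                         # x_2^(1): 2nd sample of class 1
```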
Experimental results
To verify the proposed NDP approach, three well-known face databases (the ORL database (Samaria and Harter, 1994), the UMIST database (Graham and Allinson, 1998) and the FERET database (Phillips et al., 2000)) were used, and the performance of NDP was compared with that of PCA (Turk and Pentland, 1991), LDA (Belhumeur et al., 1997), Direct-LDA (Yu and Yang, 2001), DCV (Cevikalp et al., 2005), MFA (Yan et al., 2005), LDE (Chen et al., 2005), Supervised NPE (SNPE) (He et al., 2005b) and Supervised
Conclusions
In this paper, we propose a novel manifold learning method named Neighborhood Discriminant Projection for face recognition. In order to preserve the within-class neighboring geometry of the image space and make the projected vectors of the samples of different classes far from each other, NDP explicitly considers the within-class submanifold and the between-class submanifold by the within-class graph and the between-class graph. Experimental results on ORL database, UMIST database and FERET
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant Nos. 60635050 and 60405004, the National High-Tech Research and Development Plan of China under Grant Nos. 2006AA01Z318, 2006AA01Z192 and 20060101Z1059, and the National Basic Research Program of China under Grant No. 2006CB708303.
References (29)
- Yu, H., Yang, J., 2001. A direct LDA algorithm for high-dimensional data with application to face recognition. Pattern Recognition.
- Athitsos, V., Alon, J., Sclaroff, S., Kollios, G., 2004. BoostMap: A method for efficient approximate similarity...
- Belhumeur, P.N., et al., 1997. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Machine Intell.
- Belkin, M., Niyogi, P., 2001. Laplacian eigenmaps and spectral techniques for embedding and clustering. In: Proc. Conf....
- Cevikalp, H., et al., 2005. Discriminative common vectors for face recognition. IEEE Trans. Pattern Anal. Machine Intell.
- Chellappa, R., et al., 1995. Human and machine recognition of faces: A survey. Proc. IEEE.
- Chen, H.-T., Chang, H.-W., Liu, T.-L. Local discriminant embedding and its variants. In: Proc. IEEE Computer Society...
- Graham, D.B., Allinson, N.M., 1998. Characterizing virtual eigensignatures for general purpose face recognition.
- He, X., Niyogi, P., 2003. Locality preserving projections. In: Proc. Conf. Advances in Neural Information Processing...
- He, X., et al., 2005. Face recognition using Laplacianfaces. IEEE Trans. Pattern Anal. Machine Intell.