Multi-class semi-supervised kernel minimum squared error for face recognition
Introduction
Over the past years, face recognition has become one of the most successful applications of pattern recognition and machine learning [1], [2], [3], [4], [5]. Among the most widely used approaches are appearance-based methods [6], [7], [8], [9]. To cover appearance variations, multiple images under different conditions, such as poses and illuminations, should be gathered before training a classifier. In real-world applications, however, only a small number of labeled face images per person is available. In this situation, recognition performance may not be robust to face variations, since a small amount of labeled data cannot cover all of them. In contrast, unlabeled face images are easy to collect and often abundant in the real world. For instance, in a factory surveillance system or a company's entrance access system, most face images collected during operation are unlabeled but still belong to one of the known classes. Consequently, semi-supervised learning, which attempts to train a better classifier using both labeled and unlabeled data, can be a useful tool.
Meanwhile, kernel methods have been receiving more and more attention in nonlinear learning [10]. KMSE, one such kernel method, has become a hot topic due to its high computational efficiency in the training phase. Gan et al. [11] proposed a semi-supervised KMSE algorithm, called Laplacian regularized KMSE (LapKMSE), which explicitly exploits the manifold structure of both labeled and unlabeled data. Experimental results on benchmark datasets and face recognition have illustrated the effectiveness of LapKMSE. However, KMSE and LapKMSE were originally designed for binary classification, and how to effectively deal with multi-class problems remains open [12]. A widely used technique is to design and combine several binary classifiers, usually in one of two ways: one-against-all (OAA) [13] and one-against-one (OAO) [14]. OAA and OAO construct c and c(c − 1)/2 binary classifiers, respectively, where c is the number of classes. When the number of classes is large, this becomes very time-consuming.
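As a sketch of how such a decomposition works in practice, the OAA strategy can be written as a generic wrapper around any binary trainer; the function names and the plug-in `train_binary` callable below are our own illustrative choices, not part of the paper:

```python
import numpy as np

def one_against_all(X, y, train_binary):
    """OAA: for c classes, train c binary classifiers, each separating
    one class (relabeled +1) from all the others (relabeled -1)."""
    models = {}
    for cls in np.unique(y):
        y_bin = np.where(y == cls, 1.0, -1.0)
        models[cls] = train_binary(X, y_bin)
    return models  # c models; OAO would instead need c*(c-1)/2

def num_binary_problems(c):
    """Number of binary subproblems each strategy creates."""
    return {"OAA": c, "OAO": c * (c - 1) // 2}
```

For example, with 40 classes (the number of subjects in the ORL database), OAA needs 40 binary classifiers while OAO needs 780, which illustrates why both strategies become expensive as c grows.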
In this paper, we propose a multi-class LapKMSE (McLapKMSE) and apply it to face recognition. We consider all training samples in one optimization equation and solve it directly to obtain the decision function for multi-class classification. Because our algorithm solves a single equation rather than multiple ones, McLapKMSE has lower complexity than LapKMSE. Experiments on face recognition demonstrate that McLapKMSE achieves comparable results while being more efficient.
Section snippets
Naïve KMSE
Let X = {(x1, y1), ⋯ , (xl, yl)} be a training set of size l, where xi ∈ ℝ^d and yi is the class label. For the binary classification problem, yi = −1 if xi ∈ ω1 and yi = 1 if xi ∈ ω2. A nonlinear mapping function Φ transforms each training sample from the original feature space into Φ(xi) in a new feature space. The task of KMSE is to build a linear model in the new feature space whose outputs on the training samples are desired to equal the labels. With the kernel trick, this leads to the least-squares problem

min_α ‖y − Kα‖²,

where K is the kernel matrix with Kij = k(xi, xj) = Φ(xi)ᵀΦ(xj), y = [y1, ⋯ , yl]ᵀ, and α is the coefficient vector.
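As an illustration of this formulation, the training and prediction steps of a naïve KMSE classifier can be sketched as a regularized kernel least-squares solve. The Gaussian kernel, the ridge weight `mu`, and all names here are illustrative assumptions rather than the paper's exact setup:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gram matrix with entries k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kmse_train(X, y, sigma=1.0, mu=1e-3):
    """Solve min_alpha ||y - K alpha||^2 + mu ||alpha||^2 in closed form."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K.T @ K + mu * np.eye(len(X)), K.T @ y)

def kmse_predict(X_train, alpha, X_test, sigma=1.0):
    """Binary decision: the sign of the kernel expansion output."""
    return np.sign(gaussian_kernel(X_test, X_train, sigma) @ alpha)
```

The small ridge term `mu` is a common way to keep the solve well-posed when the kernel matrix is (near-)singular; training amounts to one linear-system solve, which is the source of KMSE's efficiency noted in the introduction.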
Method
In this section, we present our method in detail.
Assume the dataset is given as in Section 2.2, now with yi ∈ {1, 2, ⋯ , c}. We define ri = [0, 0, ⋯ , 1, 0, ⋯ , 0] ∈ ℝ^{1×c} to encode the sample label, where rij = 1 if yi = j and rij = 0 otherwise. K is divided into two blocks, K = [Kl; Ku], where Kl consists of the first l rows and Ku of the last u rows of K.
First, we can rewrite Eq. (12) as:
In the above equation, α is a vector and the equation can only construct a binary classifier. When it
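Although Eq. (12) itself is not reproduced in this excerpt, the central idea can be sketched: encode each label as a one-hot row ri, stack them into a target matrix with c columns, and solve a single system whose right-hand side has c columns, instead of the c (OAA) or c(c − 1)/2 (OAO) separate binary systems. The ridge term `mu` and all names below are illustrative assumptions:

```python
import numpy as np

def one_hot(y, c):
    """Row r_i has a 1 in column y_i (labels assumed in {0, ..., c-1})."""
    R = np.zeros((len(y), c))
    R[np.arange(len(y)), y] = 1.0
    return R

def multiclass_kernel_lsq(K_l, y, c, mu=1e-3):
    """One regularized solve with a c-column right-hand side: the returned
    coefficient matrix Alpha has one column per class, so no per-class
    binary classifiers are needed."""
    R = one_hot(y, c)                       # l x c target matrix
    n = K_l.shape[1]
    return np.linalg.solve(K_l.T @ K_l + mu * np.eye(n), K_l.T @ R)

def predict(K_test, Alpha):
    """Assign each sample to the class whose output column is largest."""
    return np.argmax(K_test @ Alpha, axis=1)
```

This is only a schematic of the single-system idea; McLapKMSE additionally incorporates the Laplacian manifold regularizer built from both the labeled rows Kl and the unlabeled rows Ku.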
Experimental results
In this section, a series of face recognition experiments is conducted to evaluate the performance of our algorithm. We compare our algorithm with KMSE, LapRLS, LapSVM and LapKMSE. For a fair comparison, these four methods employ the OAA strategy for multi-class problems. Two face databases are used in the experiments: the Yale face database and the ORL face database. The
Conclusion
In this paper, we introduce a multi-class semi-supervised learning algorithm that extends LapKMSE to the multi-class setting. Compared with the traditionally used strategies, our algorithm only needs to solve one equation, while OAO or OAA needs to construct multiple binary classifiers and solve multiple equations. A series of experiments carried out on the two face databases shows the effectiveness and efficiency of our algorithm.
Acknowledgements
We would like to express our sincere thanks to Yale University and AT&T Laboratories Cambridge for offering Yale and ORL face databases, respectively. The work is supported by the National Natural Science Foundation of China under grant No. 61271328.
References (20)
- et al., Face recognition using discriminant locality preserving projections, Image Vis. Comput. (2006)
- Laplacian regularized kernel minimum squared error and its application to face recognition, Optik (2014)
- et al., Dictionaries for image and video-based face recognition, J. Opt. Soc. Am. A (2014)
- et al., An adaptive approximation image reconstruction method for single sample problem in face recognition using FLDA, Multimed. Tools Appl. (2014)
- et al., Decision optimization for face recognition based on an alternate correlation plane quantification metric, Opt. Lett. (2012)
- et al., Face recognition using Laplacianfaces, IEEE Trans. Pattern Anal. Mach. Intell. (2005)
- et al., Discriminative multimanifold analysis for face recognition from a single training sample per person, IEEE Trans. Pattern Anal. Mach. Intell. (2013)
- et al., Appearance-based face recognition and light-fields, IEEE Trans. Pattern Anal. Mach. Intell. (2004)
- et al., Visual learning and recognition of 3-D objects from appearance, Int. J. Comput. Vis. (1995)
- et al., Face recognition by exploring information jointly in space, scale and orientation, IEEE Trans. Image Process. (2011)