2012 | Book

Cross Disciplinary Biometric Systems

Authors: Chengjun Liu, Vijay Kumar Mago

Publisher: Springer Berlin Heidelberg

Book Series: Intelligent Systems Reference Library

About this Book

Cross disciplinary biometric systems help boost the performance of conventional systems: not only is recognition accuracy significantly improved, but system robustness is also greatly enhanced in challenging environments, such as varying illumination conditions. By leveraging cross disciplinary technologies, face recognition systems, fingerprint recognition systems, iris recognition systems, and image search systems all benefit in terms of recognition performance. Take face recognition as an example: it is not only the most natural way human beings recognize each other's identity, but also the least privacy-intrusive means of identification, because people show their faces publicly every day. Face recognition systems display superb performance when they capitalize on innovative ideas across color science, mathematics, and computer science (e.g., pattern recognition, machine learning, and image processing). These novel ideas lead to the development of new color models and effective color features in color science; innovative features from wavelets and statistics, and new kernel methods and kernel models in mathematics; new discriminant analysis frameworks, novel similarity measures, and new image analysis methods, such as fusing multiple image features from the frequency, spatial, and color domains, in computer science; as well as new strategies for system design and integration, including feature level fusion, decision level fusion, and new fusion strategies with novel similarity measures.

Table of Contents

Frontmatter
Feature Local Binary Patterns
Abstract
This chapter presents a Feature Local Binary Patterns (FLBP) method that encodes both local and feature information, where feature pixels may be broadly defined as, for example, edge pixels, intensity peaks or valleys in an image, or new feature information derived from Local Binary Patterns (LBP). FLBP is thus expected to perform better than LBP for texture description and pattern recognition. For a given pixel and its nearest feature pixel, a distance vector is first formed by pointing from the given pixel to the feature pixel. A True Center (TC), which is the center pixel of a neighborhood, is then located on the distance vector by a TC parameter. A Virtual Center (VC), which replaces the center pixel of the neighborhood, is specified on the distance vector by a VC parameter. FLBP is then defined by comparing the neighbors of the true center with the virtual center. Note that when both the TC and VC parameters are zero, FLBP degenerates to LBP, which indicates that LBP is a special case of FLBP. Other special cases of FLBP include FLBP1, when the VC parameter is zero, and FLBP2, when the TC parameter is zero.
Jiayu Gu, Chengjun Liu
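
The abstract fully specifies the FLBP construction, so a small sketch may help make it concrete. Below is a minimal, unoptimized Python sketch assuming a 3x3 neighborhood and parameters alpha_tc and alpha_vc in [0, 1] that place the True Center and Virtual Center along the distance vector; the function name, parameterization, and boundary handling are illustrative, not the chapter's notation.

```python
import numpy as np

def flbp(image, feature_mask, alpha_tc=0.0, alpha_vc=0.0):
    """Minimal FLBP sketch over 3x3 neighborhoods.

    image        : 2D grayscale array
    feature_mask : boolean array marking feature pixels (e.g., edge pixels)
    alpha_tc     : True Center (TC) parameter along the distance vector
    alpha_vc     : Virtual Center (VC) parameter along the distance vector

    With alpha_tc == alpha_vc == 0 this reduces to plain LBP.
    """
    h, w = image.shape
    feats = np.argwhere(feature_mask)          # (row, col) of feature pixels
    if feats.size == 0:
        raise ValueError("feature_mask must mark at least one feature pixel")
    # the 8 neighbors of a 3x3 neighborhood, enumerated clockwise
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            p = np.array([y, x])
            # distance vector from the given pixel to its nearest feature pixel
            nearest = feats[np.argmin(((feats - p) ** 2).sum(axis=1))]
            vec = nearest - p
            # TC and VC are points located along the distance vector
            tc = np.clip(np.round(p + alpha_tc * vec), 1, [h - 2, w - 2]).astype(int)
            vc_pos = np.clip(np.round(p + alpha_vc * vec), 0, [h - 1, w - 1]).astype(int)
            vc = image[vc_pos[0], vc_pos[1]]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                # compare each neighbor of the True Center with the Virtual Center
                if image[tc[0] + dy, tc[1] + dx] >= vc:
                    code |= 1 << bit
            codes[y, x] = code
    return codes
```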
New Color Features for Pattern Recognition
Abstract
This chapter presents a pattern recognition framework that applies new color features, which are derived from both a primary color (the red component) and subtractions of the primary colors (the red minus green component and the blue minus green component). In particular, feature extraction from the three color components consists of the following processes: Discrete Cosine Transform (DCT) for dimensionality reduction of each of the three color components, concatenation of the DCT features to form an augmented feature vector, and discriminant analysis of the augmented feature vector with enhanced generalization performance. A new similarity measure is presented to further improve the framework's pattern recognition performance. Experiments on a large-scale grand challenge pattern recognition problem, the Face Recognition Grand Challenge (FRGC), show the feasibility of the proposed framework. Specifically, the experimental results on the most challenging FRGC version 2 Experiment 4, with 36,818 color images, reveal that the proposed framework helps improve face recognition performance, and the proposed new similarity measure consistently performs better than other popular similarity measures.
Chengjun Liu
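
As a rough illustration of the feature extraction pipeline described above, the sketch below computes 2D DCT features for the three color components and concatenates them into an augmented feature vector. The number of retained low-frequency coefficients is an illustrative assumption, and the subsequent discriminant analysis step is omitted.

```python
import numpy as np
from scipy.fft import dctn

def color_dct_features(rgb, k=32):
    """Sketch of the color-DCT feature extraction pipeline.

    rgb : H x W x 3 float array
    k   : side of the low-frequency DCT block kept per component
          (an illustrative choice, not the chapter's setting)
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # primary color and primary-color subtractions, as in the abstract
    components = [r, r - g, b - g]
    feats = []
    for c in components:
        coeffs = dctn(c, norm='ortho')        # 2D Discrete Cosine Transform
        feats.append(coeffs[:k, :k].ravel())  # keep a low-frequency block
    return np.concatenate(feats)              # augmented feature vector
```

Discriminant analysis of the augmented vector would follow as a separate training stage.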
Gabor-DCT Features with Application to Face Recognition
Abstract
This chapter presents a Gabor-DCT Features (GDF) method on color facial parts for face recognition. The novelty of the GDF method is fourfold. First, four discriminative facial parts are used for dealing with image variations. Second, the Gabor filtered images of each facial part are grouped together based on adjacent scales and orientations to form a Multiple Scale and Multiple Orientation Gabor Image Representation (MSMO-GIR). Third, each MSMO-GIR first undergoes Discrete Cosine Transform (DCT) with frequency domain masking for dimensionality and redundancy reduction, and then is subject to discriminant analysis for extracting the Gabor-DCT features. Finally, at the decision level, the similarity scores derived from all the facial parts as well as from the Gabor filtered whole face image are fused together by means of the sum rule. Experiments on the Face Recognition Grand Challenge (FRGC) version 2 Experiment 4 and the CMU Multi-PIE database show the feasibility of the proposed GDF method.
Zhiming Liu, Chengjun Liu
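
A simplified sketch of the Gabor-DCT feature idea for a single facial part is given below. It concatenates masked DCT coefficients of individual Gabor magnitude responses rather than building the full MSMO-GIR grouping of adjacent scales and orientations, and the filter frequencies, orientation count, and mask size are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel

def gabor_dct_features(part, frequencies=(0.1, 0.2), n_thetas=4, k=16):
    """Illustrative Gabor-DCT feature sketch for one cropped facial part.

    part        : 2D grayscale array of a facial part
    frequencies : Gabor scales (illustrative values)
    n_thetas    : number of orientations
    k           : side of the retained low-frequency DCT block
    """
    feats = []
    for f in frequencies:
        for t in range(n_thetas):
            kern = gabor_kernel(f, theta=np.pi * t / n_thetas)
            resp = np.abs(fftconvolve(part, kern, mode='same'))  # Gabor magnitude
            coeffs = dctn(resp, norm='ortho')
            feats.append(coeffs[:k, :k].ravel())  # frequency-domain masking
    return np.concatenate(feats)
```

In the chapter, discriminant analysis of these features and sum-rule fusion of the per-part similarity scores follow as further stages.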
Frequency and Color Fusion for Face Verification
Abstract
This chapter presents a face verification method that fuses frequency and color features to improve face recognition grand challenge performance. In particular, the hybrid color space RIQ is constructed according to the discriminating properties of the individual component images. For each component image, frequency features are extracted from the magnitude and the real and imaginary parts of the image in the frequency domain. An improved Fisher model then extracts discriminating features from the frequency data for similarity computation using a cosine similarity measure. Finally, the similarity scores from the three component images in the RIQ color space are fused by means of a weighted summation at the decision level for the overall similarity computation. To alleviate the effect of illumination variations, an illumination normalization procedure is applied to the R component image. Experiments on the Face Recognition Grand Challenge (FRGC) version 2 Experiment 4 show the feasibility of the proposed frequency and color fusion method.
Zhiming Liu, Chengjun Liu
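
The sketch below illustrates the frequency feature extraction and the weighted decision-level fusion over the R, I, and Q component images. The improved Fisher model stage is omitted, and the fusion weights are illustrative placeholders rather than the chapter's values.

```python
import numpy as np

def frequency_features(img):
    """Magnitude, real, and imaginary parts of the 2D spectrum, flattened."""
    spec = np.fft.fft2(img)
    return np.concatenate([np.abs(spec).ravel(),
                           spec.real.ravel(),
                           spec.imag.ravel()])

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def fused_score(probe_riq, gallery_riq, weights=(0.5, 0.3, 0.2)):
    """Weighted decision-level fusion over the R, I, Q component images.

    probe_riq / gallery_riq : sequences of three 2D arrays (R, I, Q)
    weights                 : illustrative fusion weights
    """
    scores = [cosine_similarity(frequency_features(p), frequency_features(g))
              for p, g in zip(probe_riq, gallery_riq)]
    return sum(w * s for w, s in zip(weights, scores))
```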
Mixture of Classifiers for Face Recognition across Pose
Abstract
A two-dimensional Mixture of Classifiers (MoC) method is presented in this chapter for face recognition across pose. The 2D MoC method first performs pose classification with predefined pose categories and then face recognition within each individual pose class. The main contributions of this chapter are (i) an effective pose classification method that addresses the problem of differing image scales across pose classes, and (ii) the application of pose-specific classifiers for face recognition. Compared with 3D methods for face recognition across pose, the 2D MoC method does not require a large number of manual annotations or a complex and expensive procedure of 3D modeling and fitting. Experimental results using a data set from the CMU PIE database show the feasibility of the 2D MoC method.
Chengjun Liu
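
A minimal two-stage sketch of the MoC idea follows, using plain linear SVMs (an assumption; the chapter's pose and face classifiers differ) to first predict a pose class and then dispatch to a pose-specific recognizer.

```python
import numpy as np
from sklearn.svm import SVC

class MixtureOfClassifiers:
    """Two-stage sketch: pose classification, then pose-specific recognition."""

    def __init__(self):
        self.pose_clf = SVC(kernel='linear')   # illustrative classifier choice
        self.face_clfs = {}

    def fit(self, X, poses, identities):
        X, poses, identities = map(np.asarray, (X, poses, identities))
        self.pose_clf.fit(X, poses)            # stage 1: predefined pose categories
        for p in np.unique(poses):
            clf = SVC(kernel='linear')
            clf.fit(X[poses == p], identities[poses == p])
            self.face_clfs[p] = clf            # stage 2: one recognizer per pose
        return self

    def predict(self, X):
        X = np.asarray(X)
        pred_poses = self.pose_clf.predict(X)
        return np.array([self.face_clfs[p].predict(x[None])[0]
                         for p, x in zip(pred_poses, X)])
```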
Wavelet Features for 3D Face Recognition
Abstract
A fusion framework is introduced in this chapter to demonstrate the feasibility of integrating 2D and 3D face recognition systems. Specifically, four convolution filters based on wavelet functions (Gaussian derivative, Morlet, complex Morlet, and complex frequency B-spline) are applied to extract the convolution features from the 2D and 3D image modalities to capture the intensity texture and curvature shape, respectively. The convolution features are then used to compute two separate similarity measures for the 2D and 3D modalities, which are later linearly fused to calculate the final similarity measure. The feasibility of the proposed method is demonstrated using the Face Recognition Grand Challenge (FRGC) version 2 Experiment 3, which contains 4,950 2D color images (943 controlled and 4,007 uncontrolled) and 4,950 3D recordings. The experimental results show that the Gaussian derivative convolution filter extracts the most discriminating features from the 3D modality among the four filters, and the complex frequency B-spline convolution filter outperforms the other filters when the 2D modality is applied.
Peichung Shih, Chengjun Liu
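
The sketch below illustrates the overall flow with one of the four named filters, a Gaussian derivative: convolution features are extracted from the 2D and 3D modalities, cosine similarities are computed separately, and the scores are linearly fused. The kernel size, sigma, and fusion weight are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_derivative_kernel(size=9, sigma=2.0):
    """1D Gaussian first-derivative kernel (one of the four wavelet-based
    filters named in the abstract)."""
    x = np.arange(size) - size // 2
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return -x * g / sigma ** 2

def convolution_features(img, kern1d):
    """Separable convolution of a 2D texture image or a 3D range map."""
    resp = fftconvolve(img, np.outer(kern1d, kern1d), mode='same')
    return resp.ravel()

def fused_similarity(probe2d, gal2d, probe3d, gal3d, w=0.5):
    """Linear fusion of the 2D and 3D cosine similarities; w is illustrative."""
    cos = lambda u, v: np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    k = gaussian_derivative_kernel()
    s2d = cos(convolution_features(probe2d, k), convolution_features(gal2d, k))
    s3d = cos(convolution_features(probe3d, k), convolution_features(gal3d, k))
    return w * s2d + (1 - w) * s3d
```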
Minutiae-Based Fingerprint Matching
Abstract
Today, thanks to the high discriminability of minutiae and the availability of standard formats, minutiae-based fingerprint matching algorithms are the most widely adopted methods in fingerprint recognition systems. Many minutiae matching algorithms employ a local minutiae matching stage followed by a consolidation stage. In the local matching stage, local minutiae descriptors are used, since they are discriminant and robust against typical perturbations (e.g., skin and non-linear distortion, partial overlap, rotation, displacement, and noise). The Minutiae Cylinder-Code (MCC) representation, recently proposed by the authors, achieves remarkable performance with respect to state-of-the-art local minutiae descriptors. In this chapter, the basic principles of minutiae-based techniques and local minutiae descriptors are discussed, and then the MCC approach is described in detail. Experimental results on standard benchmarks such as FVC2006 and FVC-onGoing are reported to demonstrate the accuracy and efficiency of MCC.
Raffaele Cappelli, Matteo Ferrara, Davide Maltoni
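
To make the two-stage structure concrete, here is a deliberately crude sketch of local minutiae matching followed by consolidation. The local descriptor similarity (for MCC, the cylinder similarity) is passed in as a function and not implemented here, and the thresholded averaging used for consolidation is a placeholder for the chapter's far more sophisticated consolidation stage.

```python
import numpy as np

def match_minutiae(probe, gallery, desc_sim, sim_thr=0.6):
    """Simplified local-matching-plus-consolidation sketch.

    probe, gallery : lists of (x, y, angle, descriptor) minutiae tuples
    desc_sim       : similarity function between two local descriptors
                     (e.g., an MCC cylinder similarity; not implemented here)
    sim_thr        : illustrative threshold for keeping confident local matches
    """
    best = []
    for _, _, _, dp in probe:
        # best local match of this probe minutia against all gallery minutiae
        sims = [desc_sim(dp, dg) for _, _, _, dg in gallery]
        if sims:
            best.append(max(sims))
    # crude consolidation: average the confident local similarities
    strong = [s for s in best if s >= sim_thr]
    return float(np.mean(strong)) if strong else 0.0
```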
Iris Segmentation: State of the Art and Innovative Methods
Abstract
Iris recognition is nowadays considered one of the most accurate biometric recognition techniques. However, the overall performance of such systems can degrade in non-ideal conditions, such as unconstrained, on-the-move, or non-collaborative setups.
In particular, a critical step of the recognition process is the segmentation of the iris pattern in the input face/eye image. This process has to deal with the fact that the iris region of the eye is a relatively small, wet area that is constantly in motion due to involuntary eye movements. Moreover, eyelids, eyelashes, and reflections occlude the iris pattern and can cause errors in the segmentation process. As a result, an incorrect segmentation can produce erroneous biometric recognitions and seriously reduce the final accuracy of the system.
This chapter reviews current state-of-the-art iris segmentation methods in different application scenarios. Boundary estimation methods are discussed, along with methods designed to remove reflections and occlusions such as eyelids and eyelashes. In the last section, the results of the main described methods applied to public image datasets are reviewed and discussed.
Ruggero Donida Labati, Angelo Genovese, Vincenzo Piuri, Fabio Scotti
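
As one concrete example of the boundary estimation family of methods the chapter reviews, the sketch below runs a circular Hough transform on a smoothed eye image via OpenCV. This is a generic approach from the literature, not a method from this chapter, and all parameters are illustrative and dataset-dependent.

```python
import cv2
import numpy as np

def estimate_iris_boundaries(eye_gray):
    """Circle candidates for the pupil/iris boundaries in an 8-bit
    grayscale eye image, via the circular Hough transform."""
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=eye_gray.shape[0] // 4,
                               param1=100, param2=30,
                               minRadius=20, maxRadius=120)
    # circles is None or an array of (x, y, r) candidates; handling of
    # occlusions and reflections (eyelids, eyelashes, specular highlights)
    # would follow as separate steps, as the chapter discusses
    return None if circles is None else np.round(circles[0]).astype(int)
```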
Various Discriminatory Features for Eye Detection
Abstract
Five types of discriminatory features are derived using a Discriminatory Feature Extraction (DFE) method from five different sources: the grayscale image, the YCbCr color image, the 2D Haar wavelet transformed image, the Histograms of Oriented Gradients (HOG), and the Local Binary Patterns (LBP). The DFE method, which applies a new criterion vector defined on two measure vectors, is able to derive multiple discriminatory features in a whitened Principal Component Analysis (PCA) space for two-class problems. As conventional discriminant analysis derives only one feature for a two-class problem, and one feature is usually not enough to achieve good classification performance, the DFE method improves upon discriminant analysis for two-class problems. The effectiveness of the DFE method as well as the five types of discriminatory features is evaluated on the eye detection problem. Experiments using the Face Recognition Grand Challenge (FRGC) version 2 database show that the DFE method is able to improve the discriminatory power of the five types of features for eye detection. In particular, the experimental results reveal that the discriminatory HOG features achieve the best eye detection performance, followed in order by the discriminatory YCbCr color features, the discriminatory 2D Haar features, the discriminatory grayscale features, and the discriminatory LBP features.
Shuo Chen, Chengjun Liu
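
Since the DFE criterion is defined in a whitened PCA space, a short sketch of whitened PCA may be useful. This is a generic construction, not the DFE method itself, and the component count is left to the caller.

```python
import numpy as np

def whitened_pca(X, n_components):
    """Project data into a whitened PCA space.

    X : N x d data matrix (one sample per row)
    Returns the projected data, the mean, and the projection matrix.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)
    w, V = np.linalg.eigh(cov)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components] # keep the leading components
    W = V[:, idx] / np.sqrt(w[idx])          # whitening: unit variance per axis
    return Xc @ W, mu, W
```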
LBP and Color Descriptors for Image Classification
Abstract
Four novel color Local Binary Pattern (LBP) descriptors are presented in this chapter for scene image and image texture classification, with applications to image search and retrieval. Specifically, the first color LBP descriptor, the oRGB-LBP descriptor, is derived by concatenating the LBP features of the component images in an opponent color space, the oRGB color space. The other three color LBP descriptors are obtained by integrating the oRGB-LBP descriptor with additional image features: the Color LBP Fusion (CLF) descriptor is constructed by integrating the RGB-LBP, the YCbCr-LBP, the HSV-LBP, the rgb-LBP, and the oRGB-LBP descriptors; the Color Grayscale LBP Fusion (CGLF) descriptor is derived by integrating the grayscale-LBP descriptor and the CLF descriptor; and the CGLF+PHOG descriptor is obtained by integrating the Pyramid of Histograms of Orientation Gradients (PHOG) and the CGLF descriptor. Feature extraction applies the Enhanced Fisher Model (EFM), and image classification is based on the nearest neighbor classification rule (EFM-NN). The proposed image descriptors and the feature extraction and classification methods are evaluated using three databases: the MIT scene database, the KTH-TIPS2-b database, and the KTH-TIPS materials database. The experimental results show that (i) the proposed oRGB-LBP descriptor improves image classification performance over other color LBP descriptors, and (ii) the CLF, the CGLF, and the CGLF+PHOG descriptors further improve upon the oRGB-LBP descriptor for scene image and image texture classification.
Sugata Banerji, Abhishek Verma, Chengjun Liu
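
The basic construction shared by these descriptors, per-channel LBP histograms concatenated across the components of a color space, can be sketched as follows. The color space conversion (e.g., to oRGB) is assumed to have been done already, and the neighborhood parameters are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def color_lbp_descriptor(img3, P=8, R=1.0):
    """Concatenate per-channel LBP histograms of a 3-channel image.

    img3 : H x W x 3 array already converted to the desired color space
    P, R : LBP neighborhood size and radius (illustrative settings)
    """
    bins = 2 ** P
    hists = []
    for c in range(img3.shape[2]):
        codes = local_binary_pattern(img3[..., c], P, R)   # per-channel LBP map
        h, _ = np.histogram(codes, bins=bins, range=(0, bins))
        hists.append(h / max(h.sum(), 1))                  # normalized histogram
    return np.concatenate(hists)
```

A fusion descriptor such as CLF would then concatenate the outputs of this function across several color-space versions of the same image.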
Backmatter
Metadata
Title
Cross Disciplinary Biometric Systems
Authors
Chengjun Liu
Vijay Kumar Mago
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-28457-1
Print ISBN
978-3-642-28456-4
DOI
https://doi.org/10.1007/978-3-642-28457-1