
About this book

It is a pleasure and an honour both to organize ICB 2009, the 3rd IAPR/IEEE International Conference on Biometrics. This will be held 2–5 June in Alghero, Italy, hosted by the Computer Vision Laboratory, University of Sassari. The conference series is the premier forum for presenting research in biometrics and its allied technologies: the generation of new ideas, new approaches, new techniques and new evaluations. The ICB series originated in 2006 from joining two highly reputed conferences: Audio and Video Based Personal Authentication (AVBPA) and the International Conference on Biometric Authentication (ICBA). Previous conferences were held in Hong Kong and in Korea. This is the first time the ICB conference has been held in Europe, and by its Programme Committee, arrangements and the quality of the papers, ICB 2009 will continue to maintain the high standards set by its predecessors. In total we received around 250 papers for review. Of these, 36 were selected for oral presentation and 93 for poster presentation. These papers are accompanied by the invited speakers: Heinrich H. Bülthoff (Max Planck Institute for Biological Cybernetics, Tübingen, Germany) on “What Can Machine Vision Learn from Human Perception?”, Sadaoki Furui (Department of Computer Science, Tokyo Institute of Technology) on “40 Years of Progress in Automatic Speaker Recognition Technology” and Jean-Christophe Fondeur (SAGEM Security and Morpho, USA) on “Large Scale Deployment of Biometrics and Border Control”.

Table of contents

Frontmatter

Face

Facial Geometry Estimation Using Photometric Stereo and Profile Views

This paper presents a novel method for estimating the three-dimensional shape of faces, facilitating enhanced face recognition. The method combines photometric stereo with profile view information. It can be divided into three principal stages: (1) an initial estimate of the face is obtained using four-source high-speed photometric stereo; (2) the profile is determined from a side-view camera; (3) the facial shape estimation is iteratively refined using the profile until an energy functional is minimised. This final stage, which is the most important contribution of the paper, works by continually deforming the shape estimate so that its profile is exact. An energy is then calculated based on the difference between the raw images and synthetic images generated using the new shape estimate. The surface normals are then adjusted according to this energy until convergence. Several real face reconstructions are presented and compared to ground truth. The results clearly demonstrate a significant improvement in accuracy compared to standard photometric stereo.

Gary A. Atkinson, Melvyn L. Smith, Lyndon N. Smith, Abdul R. Farooq
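The four-source photometric stereo in stage (1) reduces, under a Lambertian assumption, to a per-pixel least-squares solve for the albedo-scaled normal. A minimal sketch (the light directions and flat toy scene are our illustrative assumptions, not the authors' rig):

```python
import numpy as np

def photometric_stereo_normals(intensities, light_dirs):
    """Estimate per-pixel surface normals from k >= 3 light sources.

    intensities: (k, h, w) stack of images, one per light source.
    light_dirs:  (k, 3) light-direction vectors.
    Returns unit normals of shape (h, w, 3).
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                      # (k, h*w)
    # Lambertian model: I = L @ g, where g = albedo * normal.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    return (g / np.maximum(albedo, 1e-12)).T.reshape(h, w, 3)

# Toy check with hypothetical lights: a flat Lambertian plane facing +z.
lights = np.array([[0.5, 0.0, 0.866], [-0.5, 0.0, 0.866],
                   [0.0, 0.5, 0.866], [0.0, -0.5, 0.866]])
true_n = np.array([0.0, 0.0, 1.0])
imgs = np.stack([np.full((4, 4), lights[i] @ true_n) for i in range(4)])
normals = photometric_stereo_normals(imgs, lights)
```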

3D Signatures for Fast 3D Face Recognition

We propose a vector representation (called a 3D signature) for 3D face shape in biometrics applications. Elements of the vector correspond to fixed surface points in a face-centered coordinate system. Since the elements are registered to the face, comparisons of vectors to produce match scores can be performed without a probe to gallery alignment step such as an invocation of the iterated closest point (ICP) algorithm in the calculation of each match score. The proposed 3D face recognition method employing the 3D signature ran more than three orders of magnitude faster than a traditional ICP based distance implementation, without sacrificing accuracy. As a result, it is feasible to apply distance based 3D face biometrics to recognition scenarios that, because of computational constraints, may have previously been limited to verification. Our use of more complex shape regions, which is a trivial task with the use of 3D signatures, improves biometric performance over simple spherical cut regions used previously [1]. Experimental results with a large database of 3D images demonstrate the technique and its advantages.

Chris Boehnen, Tanya Peters, Patrick J. Flynn
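The speed argument rests on the signatures being pre-registered, so a match score needs no per-comparison ICP alignment. A minimal sketch with fabricated signature vectors:

```python
import numpy as np

def signature_distance(sig_a, sig_b):
    """Match score between two 3D signatures (lower = better match).

    Because both vectors are already registered to a face-centered
    coordinate frame, no per-comparison ICP alignment is required:
    the score is a direct vector distance.
    """
    return float(np.linalg.norm(sig_a - sig_b))

# Hypothetical signatures: depth samples at 500 fixed surface points.
rng = np.random.default_rng(0)
gallery = rng.normal(size=500)
probe_same = gallery + rng.normal(scale=0.01, size=500)  # same subject
probe_diff = rng.normal(size=500)                        # other subject
d_same = signature_distance(gallery, probe_same)
d_diff = signature_distance(gallery, probe_diff)
```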

On Decomposing an Unseen 3D Face into Neutral Face and Expression Deformations

This paper presents a technique for decomposing an unseen 3D face under any facial expression into an estimated 3D neutral face and expression deformations (the shape residue between the non-neutral and the estimated neutral 3D face). We show that this decomposition gives robust facial expression classification and improves the accuracy of an off-the-shelf 3D face recognition system. The proposed decomposition system is a multistage data-driven process in which training expression residues and neutral faces reciprocally guide the decomposition of the 3D face. A plausible decomposition was achieved. The shapes and normals of the expression residue are used for expression classification, while the neutral face estimates are used for expression-robust face recognition. Experiments were performed on a large number of non-neutral scans and significant expression classification rates were achieved. Moreover, a 6% increase in face recognition rate was achieved for probes with severe facial expressions.

Faisal R. Al-Osaimi, Mohammed Bennamoun, Ajmal Mian

Pose Normalization for Local Appearance-Based Face Recognition

We focused this work on handling variation in facial appearance caused by 3D head pose. A pose normalization approach based on fitting active appearance models (AAM) on a given face image was investigated. Profile faces with different rotation angles in depth were warped into shape-free frontal view faces. Face recognition experiments were carried out on the pose normalized facial images with a local appearance-based approach. The experimental results showed a significant improvement in accuracy. The local appearance-based face recognition approach is found to be robust against errors introduced by face model fitting.

Hua Gao, Hazım Kemal Ekenel, Rainer Stiefelhagen

Bayesian Face Recognition Based on Markov Random Field Modeling

In this paper, a Bayesian method for face recognition is proposed based on Markov Random Field (MRF) modeling. Constraints on image features, as well as contextual relationships between them, are explored and encoded into a cost function derived from a statistical MRF model. Gabor wavelet coefficients are used as the base features, and relationships between Gabor features at different pixel locations are used to provide higher-order contextual constraints. The posterior probability of the matching configuration is derived based on MRF modeling. Local search and discriminant analysis are used to evaluate local matches, and a contextual constraint is applied to evaluate mutual matches between local matches. The proposed MRF method provides a new perspective for modeling the face recognition problem. Experiments demonstrate promising results.

Rui Wang, Zhen Lei, Meng Ao, Stan Z. Li

Pixelwise Local Binary Pattern Models of Faces Using Kernel Density Estimation

Local Binary Pattern (LBP) histograms have attracted much attention in face image analysis. They have been successfully used in face detection, recognition, verification, facial expression recognition, etc. The models for face description have been based on LBP histograms computed within small image blocks. In this work we propose a novel, spatially more precise model, based on kernel density estimation of local LBP distributions. In the experiments we show that this model produces significantly better performance in the face verification task than the earlier models. Furthermore, we show that the use of weighted information fusion from individual pixels based on a linear support vector machine provides further improvements in performance.

Timo Ahonen, Matti Pietikäinen
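For reference, a minimal sketch of the basic 3x3 LBP operator on which both the earlier block-histogram models and the proposed pixelwise model are built (the neighbour ordering is one common convention, not necessarily the paper's):

```python
import numpy as np

def lbp8(image):
    """Basic 3x3 Local Binary Pattern codes for interior pixels.

    Each pixel's 8 neighbours are thresholded against the centre and
    packed into an 8-bit code (neighbours read clockwise from top-left).
    """
    c = image[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = image[1 + dy:image.shape[0] - 1 + dy,
                   1 + dx:image.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

# One 3x3 patch: every neighbour except the 3 exceeds the centre 4,
# so only the bit for offset (0, 1) is zero.
img = np.array([[5, 5, 5],
                [5, 4, 3],
                [5, 5, 5]], dtype=np.uint8)
codes = lbp8(img)
```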

Improvements and Performance Evaluation Concerning Synthetic Age Progression and Face Recognition Affected by Adult Aging

Aging of the face degrades the performance of face recognition algorithms. This paper presents recent work in synthetic age progression as well as performance comparisons for modern face recognition systems. Two top-performing, commercial systems along with a traditional PCA-based face recognizer are compared. It is shown that the commercial systems perform better than the baseline PCA algorithm, but their performance still deteriorates on an aged data-set. It is also shown that the use of our aging model improves the rank-one accuracy in these systems.

Amrutha Sethuram, Eric Patterson, Karl Ricanek, Allen Rawls

Binary Biometric Representation through Pairwise Polar Quantization

Binary biometric representations have great significance for data compression and template protection. In this paper, we introduce pairwise polar quantization. Furthermore, aiming to optimize the discrimination between the genuine Hamming distance (GHD) and the impostor Hamming distance (IHD), we propose two feature pairing strategies: the long-short (LS) strategy for phase quantization, as well as the long-long (LL) strategy for magnitude quantization. Experimental results for the FRGC face database and the FVC2000 fingerprint database show that phase bits provide reasonably good performance, whereas magnitude bits obtain poor performance.

Chun Chen, Raymond Veldhuis
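The phase half of pairwise polar quantization can be sketched as follows; the two-sector quantizer and the arbitrary pairing here are our simplifications, not the paper's LS/LL strategies:

```python
import numpy as np

def phase_bits(features, pairs):
    """One bit per feature pair from the sign of its polar phase.

    Each pair (i, j) is read as a 2-D point (features[i], features[j]);
    quantizing its angle into two half-planes yields one bit. A real
    quantizer would use more phase sectors; two keeps the sketch short.
    """
    x = features[[i for i, _ in pairs]]
    y = features[[j for _, j in pairs]]
    return (np.arctan2(y, x) >= 0).astype(np.uint8)

f = np.array([1.0, 2.0, -1.0, -2.0])
bits = phase_bits(f, [(0, 1), (2, 3)])  # phases in upper/lower half-plane
```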

Manifold Learning for Gender Classification from Face Sequences

We propose a novel approach to gender recognition for cases when face sequences are available. Such scenarios are commonly encountered in many applications, such as human-computer interaction and visual surveillance, in which the input data generally consists of video sequences. Instead of treating each facial image as an isolated pattern and then combining the results (at feature, decision or score level) as generally done in previous works, we propose to exploit the correlation between the face images and look at the problem of gender classification from a manifold learning point of view. Our approach consists of first learning and discovering the hidden low-dimensional structure of the male and female manifolds using an extension to the Locally Linear Embedding algorithm. Then, a target face sequence is projected into both manifolds for determining the gender of the person in the sequence. The matching is achieved using a new manifold distance measure. Extensive experiments on a large set of face sequences and different image resolutions showed very promising results, outperforming many traditional approaches.

Abdenour Hadid, Matti Pietikäinen

A Random Network Ensemble for Face Recognition

In this paper, we propose a random network ensemble for the face recognition problem, particularly for images with large appearance variation and a limited number of training samples. In order to reduce the correlation within the ensemble arising from a single type of feature extractor and classifier, localized random facial features are constructed together with internally randomized networks. The ensemble classifier is finally constructed by combining these multiple networks via a sum rule. The proposed method is shown to have better accuracy (31.5% and 15.3% improvements on the AR and EYALEB databases, respectively) and better efficiency than the widely used PCA-SVM.

Kwontaeg Choi, Kar-Ann Toh, Hyeran Byun
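The sum-rule combination of the randomized networks amounts to averaging the members' class posteriors and taking the arg-max; a sketch with made-up probability vectors:

```python
import numpy as np

def sum_rule(member_posteriors):
    """Fuse an ensemble by the sum rule: average the members' class
    posteriors and return the arg-max class plus the fused vector."""
    fused = np.mean(member_posteriors, axis=0)
    return int(np.argmax(fused)), fused

# Three hypothetical members disagree individually but agree in aggregate.
posteriors = [np.array([0.4, 0.6]),
              np.array([0.7, 0.3]),
              np.array([0.2, 0.8])]
label, fused = sum_rule(posteriors)
```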

Extraction of Illumination-Invariant Features in Face Recognition by Empirical Mode Decomposition

Two Empirical Mode Decomposition (EMD) based face recognition schemes are proposed in this paper to address the varying-illumination problem. EMD is a data-driven analysis method for nonlinear and non-stationary signals. It decomposes signals into a set of Intrinsic Mode Functions (IMFs) containing multiscale features. The features are representative and especially efficient in capturing high-frequency information. The advantages of EMD accord well with the requirements of face recognition under varying illumination. Earlier studies show that only the low-frequency component is sensitive to illumination changes, which indicates that the corresponding high-frequency components are more robust to such changes. Therefore, two face recognition schemes based on the IMFs are proposed. One uses the high-frequency IMFs directly for classification. The other is based on synthesized face images fused from the high-frequency IMFs. The experimental results on the PIE database verify the efficiency of the proposed methods.

Dan Zhang, Yuan Yan Tang

A Discriminant Analysis Method for Face Recognition in Heteroscedastic Distributions

Linear discriminant analysis (LDA) is a popular method in pattern recognition and is equivalent to the Bayesian method when the sample distributions of the different classes are Gaussian with the same covariance matrix. In the real world, however, the distribution of data is usually far more complex and the assumption of Gaussian densities with equal covariance is seldom met, which greatly affects the performance of LDA. In this paper, we propose an effective and efficient two-step LDA, called LSR-LDA, to alleviate the effect of irregular distributions and improve the result of LDA. First, the samples are normalized so that the variances of the variables in each class are consistent, and a pre-transformation matrix from the original data to the normalized data is learned using least squares regression (LSR); second, conventional LDA is conducted on the normalized data to find the most discriminant projective directions. The final projection matrix is obtained by multiplying the pre-transformation matrix and the projective directions of LDA. Experimental results on the FERET and FRGC ver 2.0 face databases show that the proposed LSR-LDA method improves the recognition accuracy over conventional LDA by using the LSR step.

Zhen Lei, Shengcai Liao, Dong Yi, Rui Qin, Stan Z. Li
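A rough sketch of the LSR step as we read it: per-class variance equalization supplies regression targets, and least squares fits the pre-transformation matrix (the exact normalization used here is our assumption, not the paper's):

```python
import numpy as np

def lsr_pretransform(X, y):
    """Sketch of the LSR step: equalize per-class variances to build
    regression targets, then fit the pre-transformation matrix W by
    least squares, so that X @ W approximates the normalized data.

    X: (n, d) samples; y: (n,) integer class labels.
    """
    X_norm = X.astype(float).copy()
    for c in np.unique(y):
        idx = y == c
        mu = X[idx].mean(axis=0)
        sd = X[idx].std(axis=0) + 1e-8
        X_norm[idx] = (X[idx] - mu) / sd + mu  # unit spread per class
    W, *_ = np.linalg.lstsq(X, X_norm, rcond=None)
    return W

# Two synthetic classes with very different spreads.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(scale=0.5, size=(20, 3)),
               rng.normal(loc=2.0, scale=2.0, size=(20, 3))])
y = np.repeat([0, 1], 20)
W = lsr_pretransform(X, y)  # conventional LDA would then run on X @ W
```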

Robust Face Recognition Using Color Information

This paper presents a robust face recognition method using color information with the following three-fold contributions. First, a novel hybrid color space, the RCrQ color space, is constructed out of three different color spaces: the RGB, YCbCr, and YIQ color spaces. The RCrQ hybrid color space, whose component images possess complementary characteristics, enhances the discriminating power for face recognition. Second, three effective image encoding methods are proposed for the component images in the RCrQ hybrid color space: (i) a patch-based Gabor image representation for the R component image, (ii) a multi-resolution LBP feature fusion scheme for the Cr component image, and (iii) a component-based DCT multiple face encoding for the Q component image. Finally, at the decision level, the similarity matrices generated using the three component images in the RCrQ hybrid color space are fused using a weighted sum rule. The most challenging Face Recognition Grand Challenge (FRGC) version 2 Experiment 4 shows that the proposed method, which achieves a face verification rate of 92.43% at a false accept rate of 0.1%, performs better than the state-of-the-art face recognition methods.

Zhiming Liu, Chengjun Liu
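The decision-level fusion step can be sketched as a weighted sum of score-normalized similarity matrices; the z-normalization, weights and toy scores below are our assumptions:

```python
import numpy as np

def fuse_similarities(matrices, weights):
    """Decision-level fusion: z-normalize each component matcher's
    similarity matrix (e.g. one each for R, Cr and Q), then take a
    weighted sum."""
    fused = np.zeros_like(matrices[0], dtype=float)
    for S, w in zip(matrices, weights):
        fused += w * (S - S.mean()) / (S.std() + 1e-12)
    return fused

# Two hypothetical matchers scoring 2 probes against 2 gallery faces;
# probe 0 should match gallery 0, probe 1 should match gallery 1.
S1 = np.array([[0.9, 0.1], [0.2, 0.8]])
S2 = np.array([[0.8, 0.3], [0.1, 0.9]])
fused = fuse_similarities([S1, S2], [0.6, 0.4])
```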

Face Age Classification on Consumer Images with Gabor Feature and Fuzzy LDA Method

Face age estimation is challenging not only for computers but, in some cases, even for humans; coarse age classification, such as classifying a face as baby, child, adult or elderly, is much easier. In this paper, we try to dig out the potential age classification power of computers on faces from consumer images, which are taken under various conditions. Gabor features are extracted and used in LDA classifiers. In order to solve the intrinsic age ambiguity problem, a fuzzy version of LDA is introduced by defining age membership functions. Systematic comparative experiments show that the proposed method with Gabor features and fuzzy LDA achieves better age classification precision on consumer images.

Feng Gao, Haizhou Ai

The Big Brother Database: Evaluating Face Recognition in Smart Home Environments

In this paper a preliminary study on template updating techniques for face recognition in home environments is presented. In particular, a new database has been created specifically for this application, in which the acquired face images are characterized by great variability in pose and illumination, but the number of subjects is quite limited and a large number of images can be exploited for intensive incremental learning. The steps of database creation and the characteristics of the collected data are described in detail. We believe such a database could be very useful for developing and optimizing face recognition approaches for smart home environments. Moreover, some preliminary results on incremental learning are provided and analyzed to evaluate the effects of incremental template updating on recognition performance.

Annalisa Franco, Dario Maio, Davide Maltoni

A Confidence-Based Update Rule for Self-updating Human Face Recognition Systems

The aim of this paper is to present an automatic update rule that lets a face recognition system adapt itself to the continuously changing appearance of its users. The main idea is that every time the system interacts with a user, it adapts itself to include his or her current appearance, and thus always stays up to date. We propose a novel quality measure, which is used to decide whether the information just learnt from a user can be aggregated with what the system already knows. In the absence of databases that suit our needs, we present a publicly available database with 14,279 images of 35 users and 74 impostors acquired over a span of 5 months. Experiments on this database show that the proposed measure is adequate for a system to learn the current appearance of users in a non-supervised manner.

Sri-Kaushik Pavani, Federico M. Sukno, Constantine Butakoff, Xavier Planes, Alejandro F. Frangi

Facial Comparisons by Subject Matter Experts: Their Role in Biometrics and Their Training

The fingerprint biometrics community has accepted the involvement of human examiners as the ultimate verifiers of the output of automated systems, especially in latent fingerprint cases. Likewise, the facial biometrics community should recognize the importance of human experts working in association with the automated process, particularly when analyzing uncontrolled images. As facial biometric systems become more common, more facial image examiners will need to be trained. Currently there are no systematic training programs in place for these examiners. This paper outlines the knowledge needed to conduct facial examinations, and thus should be taught in a successful training program. A facial image comparison expert must be versed in many subjects including: comparative science, image science and processing, bones of the head, muscles of the face, properties of the skin, aging and alteration, legal issues and case law, and the history of facial identifications and photographic comparisons.

Nicole A. Spaun

Face Gender Classification on Consumer Images in a Multiethnic Environment

In this paper, we target face gender classification on consumer images in a multiethnic environment. Consumer images are much more challenging, since faces captured in real situations vary in pose, illumination and expression to a much larger extent than those captured in constrained environments, such as snapshot images. To overcome this non-uniformity, a robust Active Shape Model (ASM) is used for face texture normalization. A probabilistic boosting tree approach is presented which achieves a more accurate classification boundary on consumer images. Besides that, we also take the ethnic factor into consideration in gender classification and show that ethnicity-specific gender classifiers can remarkably improve gender classification accuracy in a multiethnic environment. Experiments show that our methods achieve better accuracy and robustness on consumer images in a multiethnic environment.

Wei Gao, Haizhou Ai

Multi-View Face Alignment Using 3D Shape Model for View Estimation

For multi-view face alignment (MVFA), the non-linear variation of shape and texture and the self-occlusion of facial feature points caused by view change are the two major difficulties. The state-of-the-art MVFA methods are essentially view-based approaches in which views are divided into several categories, such as frontal, half profile and full profile, each with its own model in MVFA. Therefore view estimation becomes a critical step in MVFA. In this paper, an MVFA method using a 3D face shape model for view estimation is presented, in which the 3D shape model is used to estimate the pose of the face, thereby selecting its model and indicating its self-occluded points. Experiments on different datasets are reported to show the improvement over previous works.

Yanchao Su, Haizhou Ai, Shihong Lao

Analysis of Eigenvalue Correction Applied to Biometrics

Eigenvalue estimation plays an important role in biometrics. However, if the number of samples is limited, estimates are significantly biased. In this article we analyse the influence of this bias on the error rates of PCA/LDA based verification systems, using both synthetic data with realistic parameters and real biometric data. The results of bias correction in the verification systems differ considerably between synthetic and real data: while the bias is responsible for a large part of the classification errors on the synthetic facial data, compensating for the bias on real facial data leads only to marginal improvements.

Anne Hendrikse, Raymond Veldhuis, Luuk Spreeuwers, Asker Bazen

Multi-Region Probabilistic Histograms for Robust and Scalable Identity Inference

We propose a scalable face matching algorithm capable of dealing with faces subject to several concurrent and uncontrolled factors, such as variations in pose, expression, illumination, resolution, as well as scale and misalignment problems. Each face is described in terms of multi-region probabilistic histograms of visual words, followed by a normalised distance calculation between the histograms of two faces. We also propose a fast histogram approximation method which dramatically reduces the computational burden with minimal impact on discrimination performance. Experiments on the “Labeled Faces in the Wild” dataset (unconstrained environments) as well as FERET (controlled variations) show that the proposed algorithm obtains performance on par with a more complex method and displays a clear advantage over predecessor systems. Furthermore, the use of multiple regions (as opposed to a single overall region) improves accuracy in most cases, especially when dealing with illumination changes and very low resolution images. The experiments also show that normalised distances can noticeably improve robustness by partially counteracting the effects of image variations.

Conrad Sanderson, Brian C. Lovell
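The normalised distance between visual-word histograms can be sketched as an L1 distance after L1 normalisation; this is one plausible choice on our part, and the paper's exact normalisation may differ:

```python
import numpy as np

def histogram_distance(h1, h2):
    """Normalised L1 distance between visual-word histograms.

    Each histogram is first L1-normalised, so faces with different
    numbers of sampled patches remain comparable; the distance then
    ranges from 0 (identical) to 2 (disjoint support).
    """
    h1 = h1 / (h1.sum() + 1e-12)
    h2 = h2 / (h2.sum() + 1e-12)
    return float(np.abs(h1 - h2).sum())

# Toy 3-word histograms: one pair identical, one pair disjoint.
a = np.array([4.0, 0.0, 6.0])
b = np.array([0.0, 5.0, 0.0])
d_same = histogram_distance(a, a)
d_disjoint = histogram_distance(a, b)
```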

Heterogeneous Face Recognition from Local Structures of Normalized Appearance

Heterogeneous face images come from different lighting conditions or different imaging devices, such as visible light (VIS) and near infrared (NIR) sensors. Because heterogeneous face images can have different skin spectra-optical properties, direct appearance-based matching is no longer appropriate for solving the problem. Hence we need to find facial features common to heterogeneous images. For this, we first use Difference-of-Gaussian filtering to obtain a normalized appearance for all heterogeneous faces. We then apply MB-LBP, an extension of the LBP operator, to encode the local image structures in the transformed domain, and further learn the most discriminant local features for recognition. Experiments show that the proposed method significantly outperforms existing ones in matching between VIS and NIR face images.

Shengcai Liao, Dong Yi, Zhen Lei, Rui Qin, Stan Z. Li

Sparse Representation for Video-Based Face Recognition

In this paper we address, for the first time, the problem of video-based face recognition in the context of sparse representation classification (SRC). SRC using still face images has recently emerged as a new paradigm in research on view-based face recognition. In this work we extend the SRC algorithm to the problem of temporal face recognition. Extensive identification and verification experiments were conducted using the VidTIMIT database [1,2]. A comparative analysis with state-of-the-art Scale Invariant Feature Transform (SIFT) based recognition was also performed. The SRC algorithm achieved 94.45% recognition accuracy, which is comparable to the 93.83% achieved by the SIFT based approach. Verification experiments yielded a 1.30% Equal Error Rate (EER) for SRC, which outperformed the SIFT approach by a margin of 0.5%. Finally, the two classifiers were fused using the weighted sum rule. The fusion results consistently outperformed the individual experts for the identification, verification and rank-profile evaluation protocols.

Imran Naseem, Roberto Togneri, Mohammed Bennamoun
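The SRC decision rule (code the probe over the gallery, classify by the smallest class-wise reconstruction residual) can be sketched as follows; we substitute ridge regression for the l1 minimisation of true SRC to keep the sketch self-contained, and the tiny gallery is fabricated:

```python
import numpy as np

def src_classify(A, labels, probe, lam=0.01):
    """Simplified SRC: code the probe over the whole gallery, then pick
    the class whose own coefficients reconstruct the probe best.

    A: (d, n) gallery matrix, one sample per column; labels: (n,).
    Ridge regression stands in for the l1 solver of true SRC.
    """
    x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ probe)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coeffs
        res = np.linalg.norm(probe - A @ xc)
        if res < best_res:
            best, best_res = c, res
    return best

# Fabricated 4-D gallery: two samples per class; the probe lies in the
# span of class 0's samples, so class 0 should win.
A = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.0, 0.1, 1.0, 0.9],
              [0.0, 0.0, 0.0, 0.1],
              [0.0, 0.0, 0.0, 0.0]])
labels = np.array([0, 0, 1, 1])
pred = src_classify(A, labels, np.array([1.0, 0.05, 0.0, 0.0]))
```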

Face Image Quality Evaluation for ISO/IEC Standards 19794-5 and 29794-5

Face recognition performance can be significantly influenced by face image quality. The approved ISO/IEC standard 19794-5 specifies recommendations for face photo taking for e-passport and related applications. Standardization of face image quality, ISO/IEC 29794-5, is in progress. Bad illumination, facial pose and defocus are among the main reasons that disqualify a face image sample. This paper presents several algorithms for face image quality assessment. Illumination conditions and facial pose are evaluated in terms of facial symmetry, implemented based on Gabor wavelet features. Assessment of camera focus is based on the discrete cosine transform (DCT). These methods are validated by experiments.

Jitao Sang, Zhen Lei, Stan Z. Li
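The DCT-based focus assessment can be sketched as the share of spectral energy outside a low-frequency corner, since defocus suppresses high frequencies; the cutoff and the test images below are our assumptions, not the paper's measure:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def focus_measure(img, cutoff=4):
    """Focus score: fraction of 2-D DCT energy outside the low-frequency
    cutoff x cutoff corner. Blurring removes high frequencies, so
    sharper images score higher."""
    D = dct_matrix(img.shape[0])
    E = (D @ img @ D.T) ** 2           # 2-D DCT energy (square image)
    total = E.sum()
    return float((total - E[:cutoff, :cutoff].sum()) / (total + 1e-12))

# A noise image versus a box-blurred copy of itself.
rng = np.random.default_rng(1)
sharp = rng.normal(size=(16, 16))
blur = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
           for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
score_sharp = focus_measure(sharp)
score_blur = focus_measure(blur)
```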

Upper Facial Action Unit Recognition

This paper concentrates on comparisons of systems used for the recognition of expressions generated by six upper face action units (AUs), using the Facial Action Coding System (FACS). Haar wavelet, Haar-like and Gabor wavelet coefficients are compared, using Adaboost for feature selection. The binary classification results using Support Vector Machines (SVM) for the upper face AUs have been observed to be better than the current results in the literature, for example 96.5% for AU2 and 97.6% for AU5. In the multi-class classification case, Error Correcting Output Coding (ECOC) has been applied. Although for a large number of classes the results are not as accurate as in the binary case, ECOC has the advantage of solving all problems simultaneously; and for large numbers of training samples and small numbers of classes, error rates are improved.

Cemre Zor, Terry Windeatt

Automatic Partial Face Alignment in NIR Video Sequences

Face recognition with partial face images is an important problem in face biometrics. The necessity can arise in less constrained environments, such as surveillance video or portal video as provided in the Multiple Biometrics Grand Challenge (MBGC). Face alignment with partial face images is a key step toward this challenging problem.

In this paper, we present a method for partial face alignment based on scale invariant feature transform (SIFT). We first train a reference model using holistic faces, in which the anchor points and their corresponding descriptor subspaces are learned from initial SIFT keypoints and the relationships between the anchor points are also derived. In the alignment stage, correspondences between the learned holistic face model and an input partial face image are established by matching keypoints of the partial face to the anchor points of the learned face model. Furthermore, shape constraint is used to eliminate outlier correspondences and temporal constraint is explored to find more inliers. Alignment is finally accomplished by solving a similarity transform. Experiments on the MBGC near infrared video sequences show the effectiveness of the proposed method, especially when PCA subspace, shape and temporal constraint are utilized.

Jimei Yang, Shengcai Liao, Stan Z. Li
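Solving the final similarity transform from the inlier correspondences has a closed-form least-squares solution (Umeyama's method); a 2-D sketch with synthetic correspondences, noting that the paper's own estimator may differ in detail:

```python
import numpy as np

def fit_similarity(src, dst):
    """Closed-form least-squares 2-D similarity transform (Umeyama):
    finds scale s, rotation R and translation t with dst ~ s*R@src + t.

    src, dst: (n, 2) arrays of corresponding points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / sc.var(axis=0).sum()
    t = mu_d - s * (R @ mu_s)
    return s, R, t

# Synthetic keypoints under a known transform: scale 2, 30 deg, shift (1, 2).
rng = np.random.default_rng(2)
src = rng.normal(size=(6, 2))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 2.0 * src @ R_true.T + np.array([1.0, 2.0])
s, R, t = fit_similarity(src, dst)
```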

Parts-Based Face Verification Using Local Frequency Bands

In this paper we extend the Parts-Based approach to face verification by performing a frequency-based decomposition. The Parts-Based approach divides the face into a set of blocks which are then considered to be separate observations; this is a spatial decomposition of the face. This paper extends the Parts-Based approach by also dividing the face in the frequency domain and treating each frequency response from an observation separately. This can be expressed as forming a set of sub-images, where each sub-image represents the response to a different frequency of, for instance, the Discrete Cosine Transform. Each of these sub-images is treated separately by a Gaussian Mixture Model (GMM) based classifier. The classifiers from each sub-image are then combined using weighted summation, with the weights derived using linear logistic regression. It is shown on the BANCA database that this method improves the performance of the system from an Average Half Total Error Rate of 24.38% to 15.17% when compared to a GMM Parts-Based approach on Protocol P.

Christopher McCool, Sébastien Marcel

Local Gabor Binary Pattern Whitened PCA: A Novel Approach for Face Recognition from Single Image Per Person

One major challenge for face recognition techniques is the difficulty of collecting image samples. More samples usually mean better results, but also more effort, time, and thus money. Unfortunately, many current face recognition techniques rely heavily on the large size and representativeness of the training sets, and most methods suffer degraded performance or fail to work if there is only one training sample per person available. This so-called “Single Sample per Person” (SSP) situation is common in face recognition. To resolve this problem, we propose a novel approach based on a combination of Gabor filters, Local Binary Patterns and Whitened PCA (LGBPWP). The new LGBPWP method has been successfully implemented and evaluated through experiments on 3000+ FERET frontal face images of 1196 subjects. The results show the advantages of our method: it has achieved the best results on the FERET database. The established recognition rates are 98.1%, 98.9%, 83.8% and 81.6% on the fb, fc, dup I, and dup II probes, respectively, using only one training sample per person.

Hieu V. Nguyen, Li Bai, Linlin Shen

3D Face Recognition Using Joint Differential Invariants

Stemming from a sound mathematical framework dating back to the beginning of the 20th century, this paper introduces a novel approach to 3D face recognition. The proposed technique is based on joint differential invariants, projecting a 3D shape into a 9-dimensional space where the effect of rotation and translation is removed. As a consequence, the matching between two different 3D samples can be performed directly in the invariant space. Thus the matching score can be effectively used to detect surfaces, or parts of surfaces, characterised by similar, if not identical, 3D structure. The paper details an efficient procedure for the generation of the invariant signature in the 9-dimensional space, carefully discussing a number of significant implications related to the application of the mathematical framework to the discrete, non-rigid case of interest. Experimental evaluation of the proposed approach is performed on the widely known 3D_RMA database, comparing results to the well-established Iterative Closest Point (ICP) based matching approach.

Marinella Cadoni, Manuele Bicego, Enrico Grosso

A Model Based Approach for Expressions Invariant Face Recognition

This paper describes a model-based approach to recognizing human faces in the presence of strong facial expressions. The features extracted from face image sequences can be used efficiently for face recognition. The approach consists of 1) fitting active appearance model (AAM) parameters to the face image, 2) using optical-flow-based temporal features to estimate facial expression variations, and 3) finally applying a classifier for face recognition. The novelty lies not only in the generation of appearance models, obtained by fitting an active shape model (ASM) to the face image using objective functions, but also in using a feature vector that combines shape, texture and temporal parameters and is robust against facial expression variations. Experiments have been performed on the Cohn-Kanade facial expression database using 62 subjects, with image sequences consisting of more than 4000 images. The approach achieves face recognition rates of up to 91.17% using a binary decision tree (BDT) and 98.6% using Bayesian networks (BN) with 10-fold cross validation, in the presence of six different facial expressions.

Zahid Riaz, Christoph Mayer, Matthias Wimmer, Michael Beetz, Bernd Radig

Why Is Facial Occlusion a Challenging Problem?

This paper investigates the main reason for the low performance obtained when face recognition algorithms are tested on partially occluded face images. It has been observed that in the case of upper face occlusion, missing discriminative information due to occlusion accounts for only a very small part of the performance drop. The main factor is found to be registration errors due to erroneous facial feature localization. It has been shown that by solving the misalignment problem, very high correct recognition rates can be achieved with a generic local appearance-based face recognition algorithm. In the case of lower face occlusion, only a slight decrease in performance is observed when a local appearance-based face representation approach is used. This indicates the importance of local processing when dealing with partial face occlusion. Moreover, improved alignment also increases the correct recognition rate in the experiments with lower face occlusion, which shows that face registration plays a key role in face recognition performance.

Hazım Kemal Ekenel, Rainer Stiefelhagen

Nasal Region-Based 3D Face Recognition under Pose and Expression Variations

In this work, we propose a fully automatic pose- and expression-invariant part-based 3D face recognition system. The proposed system is based on pose correction and curvature-based nose segmentation. Since the nose is the most stable part of the face, it is largely invariant under expressions. For this reason, we have concentrated on locating the nose tip and segmenting the nose. Furthermore, the nose direction is utilized to correct pose variations. We try both one-to-all and Average Nose Model-based methodologies for registration. Our results show that the utilization of an anatomically-cropped nose region increases the recognition accuracy, up to 94.10 per cent for frontal facial expressions and 79.41 per cent for pose variations on the Bosphorus 2D/3D face database.

Hamdi Dibeklioğlu, Berk Gökberk, Lale Akarun

An Analysis-by-Synthesis Method for Heterogeneous Face Biometrics

Face images captured in different spectral bands, e.g., in visual (VIS) and near infrared (NIR), are said to be heterogeneous. Although a person’s face looks different in heterogeneous images, it should be classified as being from the same individual. In this paper, we present a new method, called “face analogy”, in the analysis-by-synthesis framework, for heterogeneous face mapping, that is, transforming face images from one type to another, and thereby performing heterogeneous face matching. Experiments show promising results.

Rui Wang, Jimei Yang, Dong Yi, Stan Z. Li

Face Recognition with LWIR Imagery Using Local Binary Patterns

In this paper, the merits of the Local Binary Patterns (LBP) representation are investigated in the context of face recognition using long-wave infrared images. Long-wave infrared images are invariant to illumination, but at the same time they are affected by a fixed-pattern noise inherent to this technology. The fixed pattern is normally compensated for by means of a non-uniformity correction method. Our study shows that the LBP approach is robust to the fixed-pattern noise, as well as to the presence of glasses. Not only is no noise-suppressing preprocessing needed; in fact, if a non-uniformity correction method is applied, the image texture is amplified and the performance of LBP degrades.

Heydi Méndez, Cesar San Martín, Josef Kittler, Yenisel Plasencia, Edel García-Reyes
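As a generic illustration of the Local Binary Patterns operator discussed above (a sketch of the standard 3×3 LBP, not this paper's exact configuration): each pixel is encoded by thresholding its eight neighbours at the centre value, and a face region is then described by the histogram of the resulting 8-bit codes.

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP: threshold the 8 neighbours at the centre value
    and read them off as an 8-bit code (clockwise from top-left)."""
    assert patch.shape == (3, 3)
    c = patch[1, 1]
    # Clockwise neighbour order starting at the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

patch = np.array([[10, 20, 30],
                  [40, 25, 50],
                  [ 5, 60, 70]])
print(lbp_code(patch))  # 188
```

Because the code depends only on the sign of local intensity differences, a slowly varying additive pattern (such as partially corrected fixed-pattern noise) leaves most codes unchanged, which is consistent with the robustness reported here.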

A Classification Framework for Large-Scale Face Recognition Systems

This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance that arises when image pairs are sampled from thousands of face images to prepare a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an open-set face recognition scenario and the performance of the proposed classifier is compared with alternative techniques. The experimental results show that the classification framework can effectively manage large amounts of training data, regardless of feature type, and efficiently train classifiers with high recognition accuracy compared to alternative techniques.

Ziheng Zhou, Samuel Chindaro, Farzin Deravi

Synthesizing Frontal Faces on Calibrated Stereo Cameras for Face Recognition

Current automatic face recognition systems often require users to face towards the capturing camera. To extend these systems to non-intrusive application scenarios such as video surveillance, we propose a stereo camera configuration to synthesize a frontal face image from two non-frontal face images. After the head pose has been estimated, a frontal face image is synthesized using face warping and view morphing techniques. Face identification experiments reveal that using the synthetic frontal images achieves performance comparable with real frontal face images.

Kin-Wang Cheung, Jiansheng Chen, Yiu-Sang Moon

Nasal Region Contribution in 3D Face Biometrics Using Shape Analysis Framework

The main goal of this paper is to illustrate a geometric analysis of 3D facial shapes in the presence of varying facial expressions using the nose region. The approach consists of two main steps: (i) each nasal surface is automatically denoised and preprocessed to yield an indexed collection of nasal curves; during this step the tip of the nose is detected and a surface distance function is defined with that tip as the reference point, whose level curves are the desired nasal curves; (ii) comparisons between noses are based on optimal deformations from one to another, which in turn are based on optimal deformations of the corresponding nasal curves across surfaces under an elastic metric. The experimental results, generated using a subset of the FRGC v2 dataset, demonstrate the success of the proposed framework in recognizing people under different facial expressions. The recognition rates obtained here exceed those for a baseline ICP algorithm on the same dataset.

Hassen Drira, Boulbaba Ben Amor, Mohamed Daoudi, Anuj Srivastava

Generic versus Salient Region-Based Partitioning for Local Appearance Face Recognition

In this paper, we investigate different partitioning schemes for local appearance-based face recognition. Five different salient region-based partitioning approaches are analyzed and compared to a generic partitioning scheme. Extensive experiments have been conducted on the AR, CMU PIE, FRGC, Yale B, and Extended Yale B face databases. The experimental results show that generic partitioning provides better performance than salient region-based partitioning schemes.

Hazım Kemal Ekenel, Rainer Stiefelhagen

Near Infrared Face Based Biometric Key Binding

Biometric encryption is the basis for biometric template protection and information security. While existing methods are based on the iris or fingerprint modality, the face has so far been considered not reliable enough to meet the requirements for error-correcting ability. In this paper, we present a novel biometric key binding method based on the near infrared (NIR) face biometric. An enhanced BioHash algorithm is developed by imposing an NXOR mask on the input to the subsequent error correcting code (ECC). Combined with the ECC and NIR face features, this enables reliable binding of face biometric features to the biometric key. Its ability to provide template protection and information cryptography is guaranteed by encryption theory, and the security level of the NIR face recognition system is thereby improved. Experimental results show that the security benefit is gained at the cost of a 1–2% drop in recognition performance.

Meng Ao, Stan Z. Li

Fuzzy 3D Face Ethnicity Categorization

In this paper, we propose a novel fuzzy 3D face ethnicity categorization algorithm consisting of two stages, learning and mapping. In the learning stage, visual codes are first learned for both eastern and western individuals using the learned visual codebook (LVC) method; from these codes we then learn two distance measures, a merging distance and a mapping distance. Using the merging distance, we learn eastern, western and human codes based on the visual codes. In the mapping stage, we compute the probabilities of each 3D face mapping to eastern and western individuals using the mapping distance, and the membership degree is determined by our defined membership function. The main contribution of this paper is that we view ethnicity categorization as a fuzzy problem and give an effective solution that assigns each 3D face a reasonable membership degree. All experiments are based on the challenging FRGC2.0 3D Face Database. Experimental results illustrate the efficiency and accuracy of our fuzzy 3D face ethnicity categorization method.

Cheng Zhong, Zhenan Sun, Tieniu Tan

Faceprint: Fusion of Local Features for 3D Face Recognition

3D face recognition is a very active biometric research field. Due to 3D data’s insensitivity to illumination and pose variations, 3D face recognition has the potential to perform better than 2D face recognition. In this paper, we focus on local feature based 3D face recognition and propose a novel Faceprint method. SIFT features are extracted from texture and range images and matched; the number of matched key points, together with geodesic distance ratios between models, is used as three kinds of matching scores, and likelihood-ratio-based score-level fusion is conducted to calculate the final matching score. Thanks to the robustness of SIFT, shape index, and geodesic distance against various changes of geometric transformation, illumination, pose and expression, the Faceprint method is inherently insensitive to these variations. Experimental results indicate that the Faceprint method achieves consistently high performance compared with commonly used SIFT on texture images.

Guangpeng Zhang, Yunhong Wang

Combining Illumination Normalization Methods for Better Face Recognition

Face recognition under uncontrolled illumination conditions is a partly unsolved problem. Illumination normalization methods fall into two categories. The first category performs local preprocessing, correcting a pixel value based on a local neighbourhood in the image. The second category performs a global preprocessing step, estimating the illumination conditions and the face shape of the entire image. We use one illumination normalization method from each category, namely Local Binary Patterns and Model-based Face Illumination Correction. The preprocessed face images of both methods are individually classified with a face recognition algorithm, which gives us two similarity scores for a face image. We combine the similarity scores using score-level fusion, decision-level fusion and hybrid fusion. In our previous work, we showed that combining the similarity scores of different methods using fusion can improve the performance of biometric systems. Here we achieve a significant performance improvement in comparison with the individual methods.

Bas Boom, Qian Tao, Luuk Spreeuwers, Raymond Veldhuis
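The score-level fusion mentioned above can be sketched generically as follows (the normalisation ranges, weights and scores here are illustrative assumptions, not the authors' settings):

```python
def min_max_normalise(scores, lo, hi):
    """Map raw matcher scores to [0, 1] given the matcher's score range."""
    return [(s - lo) / (hi - lo) for s in scores]

def sum_fusion(scores_a, scores_b, w=0.5):
    """Weighted sum rule: fuse two normalised score lists per comparison."""
    return [w * a + (1 - w) * b for a, b in zip(scores_a, scores_b)]

# Hypothetical similarity scores from two preprocessing pipelines
lbp_scores   = min_max_normalise([0.2, 0.8, 0.5], lo=0.0, hi=1.0)
model_scores = min_max_normalise([30, 90, 40], lo=0.0, hi=100.0)
fused = sum_fusion(lbp_scores, model_scores)
print([round(f, 2) for f in fused])  # [0.25, 0.85, 0.45]
```

Normalising both matchers to a common range before summing is what makes the sum rule meaningful; decision-level fusion would instead combine the accept/reject decisions of each matcher.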

Bayesian Networks to Combine Intensity and Color Information in Face Recognition

We present generative models dedicated to face recognition. Our models consider data extracted from color face images and use Bayesian networks to model relationships between different observations derived from a single face. Specifically, the use of color as a complementary observation to local, grayscale-based features is investigated. This is done by means of new generative models combining color and grayscale information in a principled way. Color is incorporated either at the global face level, at the local facial feature level, or at both levels. Experiments on the face authentication task are conducted on two benchmark databases, XM2VTS and BANCA. The obtained results show that integrating color in an intelligent manner improves performance not only over a similar baseline system acting on grayscale only, but also over an Eigenfaces-based system where information from the different color channels is treated independently.

Guillaume Heusch, Sébastien Marcel

Combining Facial Skin Mark and Eigenfaces for Face Recognition

In this paper we investigate the use of facial skin mark information for biometric person verification. We performed a statistical analysis of facial skin mark information, considering the position, size and color intensity of the skin marks as features for skin-mark-based face matching. The developed facial skin mark matcher performs well, but cannot be applied to faces with no detected skin marks. Due to the non-universality of skin mark information, a novel algorithm combining the traditional Eigenfaces matcher with the skin mark matcher is proposed. The resulting combined face matcher has the universality property and delivers better performance than either single matcher. The AR Face Database was used in the experiments.

Zhi Zhang, Sergey Tulyakov, Venu Govindaraju

Speech

Analysis of the Utility of Classical and Novel Speech Quality Measures for Speaker Verification

In this work, we analyze several quality measures for speaker verification from the point of view of their utility, i.e., their ability to predict performance in an authentication task. We select several quality measures derived from classic indicators of speech degradation, namely ITU P.563 estimator of subjective quality, signal to noise ratio and kurtosis of linear predictive coefficients. Moreover, we propose a novel quality measure derived from what we have called Universal Background Model Likelihood (UBML), which indicates the degradation of a speech utterance in terms of its divergence with respect to a given universal model. Utility of quality measures is evaluated following the protocols and databases of NIST Speaker Recognition Evaluation (SRE) 2006 and 2008 (telephone-only subset), and ultimately by means of error-vs.-rejection plots as recommended by NIST. Results presented in this study show significant utility for all the quality measures analyzed, and also a moderate decorrelation among them.

Alberto Harriero, Daniel Ramos, Joaquin Gonzalez-Rodriguez, Julian Fierrez

Impact of Prior Channel Information for Speaker Identification

Joint factor analysis (JFA) has been very successful in speaker recognition but its success depends on the choice of development data. In this work, we apply JFA to a very diverse set of recording conditions and conversation modes in NIST 2008 SRE, showing that having channel matched development data will give improvements of about 50% in terms of Equal Error Rate against a Maximum a Posteriori (MAP) system, while not having it will not give significant improvement. To provide robustness to the system, we estimate eigenchannels in two ways. First, we estimate the eigenchannels separately for each condition and stack them. Second, we pool all the relevant development data and obtain a single estimate. Both techniques show good performance, but the former leads to lower performance when working with low-dimension channel subspaces, due to the correlation between those subspaces.

C. Vaquero, N. Scheffer, S. Karajekar

Minimising Speaker Verification Utterance Length through Confidence Based Early Verification Decisions

This paper presents a novel approach of estimating the confidence interval of speaker verification scores. This approach is utilised to minimise the utterance lengths required in order to produce a confident verification decision. The confidence estimation method is also extended to address both the problem of high correlation in consecutive frame scores, and robustness with very limited training samples. The proposed technique achieves a drastic reduction in the typical data requirements for producing confident decisions in an automatic speaker verification system. When evaluated on the NIST 2005 SRE, the early verification decision method demonstrates that an average of 5–10 seconds of speech is sufficient to produce verification rates approaching those achieved previously using an average in excess of 100 seconds of speech.

Robbie Vogt, Sridha Sridharan

Scatter Difference NAP for SVM Speaker Recognition

This paper presents Scatter Difference Nuisance Attribute Projection (SD-NAP) as an enhancement to NAP for SVM-based speaker verification. While standard NAP may inadvertently remove desirable speaker variability, SD-NAP explicitly de-emphasises this variability by incorporating a weighted version of the between-class scatter into the NAP optimisation criterion. Experimental evaluation of SD-NAP with a variety of SVM systems on the 2006 and 2008 NIST SRE corpora demonstrate that SD-NAP provides improved verification performance over standard NAP in most cases, particularly at the EER operating point.

Brendan Baker, Robbie Vogt, Mitchell McLaren, Sridha Sridharan

Data-Driven Impostor Selection for T-Norm Score Normalisation and the Background Dataset in SVM-Based Speaker Verification

A data-driven background dataset refinement technique was recently proposed for SVM based speaker verification. This method selects a refined SVM background dataset from a set of candidate impostor examples after individually ranking examples by their relevance. This paper extends this technique to the refinement of the T-norm dataset for SVM-based speaker verification. The independent refinement of the background and T-norm datasets provides a means of investigating the sensitivity of SVM-based speaker verification performance to the selection of each of these datasets. Using refined datasets provided improvements of 13% in min. DCF and 9% in EER over the full set of impostor examples on the 2006 SRE corpus with the majority of these gains due to refinement of the T-norm dataset. Similar trends were observed for the unseen data of the NIST 2008 SRE.

Mitchell McLaren, Robbie Vogt, Brendan Baker, Sridha Sridharan
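For readers unfamiliar with T-norm score normalisation, the normalisation itself is simple; the contribution above lies in how the cohort datasets are selected. A minimal sketch (the cohort scores below are hypothetical):

```python
import statistics

def t_norm(raw_score, cohort_scores):
    """Test normalisation: standardise a verification score against the
    scores the same test utterance obtains from a cohort of impostor
    models, so decision thresholds transfer across test utterances.
    (Population st. dev. is used here; conventions vary.)"""
    mu = statistics.mean(cohort_scores)
    sigma = statistics.pstdev(cohort_scores)
    return (raw_score - mu) / sigma

cohort = [1.0, 2.0, 3.0, 2.0]  # hypothetical impostor-model scores
print(round(t_norm(4.0, cohort), 3))  # 2.828
```

Refining which impostor examples form this cohort (and the SVM background set) is exactly what the data-driven selection above investigates.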

Support Vector Machine Regression for Robust Speaker Verification in Mismatching and Forensic Conditions

In this paper we propose the use of Support Vector Machine Regression (SVR) for robust speaker verification in two scenarios: i) strong mismatch in speech conditions and ii) a forensic environment. The proposed approach seeks robustness to situations where a proper background database is reduced or absent, a situation typical in forensic cases which has been called database mismatch. For the mismatched-condition scenario, we use the NIST SRE 2008 core task as a highly variable environment, but with a mostly representative background set coming from past NIST evaluations. For the forensic scenario, we use the Ahumada III database, a public corpus in Spanish coming from real authored forensic cases collected by the Spanish Guardia Civil. We show experiments illustrating the robustness of an SVR scheme using a GLDS kernel under strong session variability, even when no session variability compensation is applied, and especially in the forensic scenario, under database mismatch.

Ismael Mateos-Garcia, Daniel Ramos, Ignacio Lopez-Moreno, Joaquin Gonzalez-Rodriguez

Scores Selection for Emotional Speaker Recognition

Emotion variability between training and testing utterances is one of the largest challenges in speaker recognition. A common situation is that the training data is neutral speech while the testing data is a mixture of neutral and emotional speech. In this paper, we experimentally analyze the performance of a GMM-based verification system with utterances in this situation. The analysis reveals that verification performance improves as the emotion ratio decreases, and that the scores of neutral features against the speaker’s own model are distributed higher than the other three scores (neutral speech against the models of other speakers, and non-neutral speech against the speaker’s own and other speakers’ models). Based on these observations, we propose a scores selection method that reduces the emotion ratio of the testing utterance by eliminating the non-neutral features. It is applicable to GMM-based recognition systems without labeling the emotion state in the testing process. Experiments carried out on the MASC Corpus show that scores selection improves performance, with an EER reduction from 13.52% to 10.17%.

Zhenyu Shan, Yingchun Yang
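EER figures like those quoted above can be reproduced for any score set with a simple threshold sweep; a minimal sketch (toy scores, not MASC data):

```python
def equal_error_rate(genuine, impostor):
    """Sweep a threshold over the pooled scores and return the point
    where false accept and false reject rates are (closest to) equal."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

genuine  = [0.9, 0.8, 0.7, 0.6, 0.4]   # toy client scores
impostor = [0.5, 0.3, 0.2, 0.1, 0.45]  # toy impostor scores
print(equal_error_rate(genuine, impostor))  # 0.2
```

Removing non-neutral features shifts genuine scores upward, which is what drives the EER reduction reported above.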

Automatic Cross-Biometric Footstep Database Labelling Using Speaker Recognition

The often daunting task of collecting and manually labelling biometric databases can be a barrier to research. This is especially true for a new or non-established biometric such as footsteps. The availability of very large data sets often plays a role in the research of complex modelling and normalisation algorithms and so an automatic, semi-unsupervised approach to reduce the cost of manual labelling is potentially of immense value.

This paper proposes a novel, iterative and adaptive approach to the automatic labelling of what is thought to be the first large scale footstep database (more than 10,000 examples across 127 persons). The procedure involves the simultaneous collection of a spoken, speaker-dependent password which is used to label the footstep data automatically via a pre-trained speaker recognition system. Subsets of labels are manually checked by listening to the particular password utterance, or viewing the associated talking face; both are recorded with the same time stamp as the footstep sequence.

Experiments to assess the resulting label accuracy, based on manually labelled subsets, suggest that the automatic labelling error is below 0.1%, and thus sufficient to assess a biometric such as footsteps, which is anticipated to have a much higher error rate.

Rubén Vera-Rodríguez, John S. D. Mason, Nicholas W. D. Evans

Towards Structured Approaches to Arbitrary Data Selection and Performance Prediction for Speaker Recognition

We developed measures relating feature vector distributions to speaker recognition (SR) performance, for performance prediction and potentially arbitrary data selection for SR. We examined measures of mutual information, kurtosis, correlation, and measures pertaining to intra- and inter-speaker variability. We applied the measures to feature vectors of phones to determine which measures gave good SR performance prediction for phones standalone and in combination. We found that mutual information had a −83.5% correlation with the Equal Error Rates (EERs) of individual phones. Also, Pearson’s correlation between the feature vectors of two phones had a −48.6% correlation with the relative EER improvement of the score-level combination of the phones. When implemented in our new data-selection scheme (which does not require an SR system to be run), the measures allowed us to select data with a 2.13% overall EER improvement (on SRE08) over data selected via a brute-force approach, at a fifth of the computational cost.

Howard Lei

Fingerprint and Palmprint

Beyond Minutiae: A Fingerprint Individuality Model with Pattern, Ridge and Pore Features

Fingerprints are considered to be unique because they contain various distinctive features, including minutiae, ridges, pores, etc. Some attempts have been made to model the minutiae in order to get a quantitative measure for uniqueness or individuality of fingerprints. However, these models do not fully exploit information contained in non-minutiae features that is utilized for matching fingerprints in practice. We propose an individuality model that incorporates all three levels of fingerprint features: pattern or class type (Level 1), minutiae and ridges (Level 2), and pores (Level 3). Correlations among these features and their distributions are also taken into account in our model. Experimental results show that the theoretical estimates of fingerprint individuality using our model consistently follow the empirical values based on the public domain NIST-4 database.

Yi Chen, Anil K. Jain

Active Fingerprint Ridge Orientation Models

This paper proposes a statistical model for fingerprint ridge orientations. The active fingerprint ridge orientation model (AFROM) iteratively deforms to fit the orientation field (OF) of a fingerprint. The OFs are constrained by the AFROM to vary only in ways consistent with a training set. The main application of the method is OF estimation in noisy fingerprints, as well as the interpolation and extrapolation of larger OF parts. Fingerprint OFs are represented by Legendre polynomials. The method does not depend on any pre-alignment or registration of the input image, and training can be done fully automatically without any user interaction. We show that the model is able to extract the significant appearance elements of fingerprint flow patterns even from noisy training images. Furthermore, our method does not depend on any other computed data except a segmentation. We evaluated both the generalisation and the prediction capability of the proposed method; these evaluations show that it achieves very good results.

Surinder Ram, Horst Bischof, Josef Birchbauer

FM Model Based Fingerprint Reconstruction from Minutiae Template

Minutiae-based representation is the most widely adopted fingerprint representation scheme. The compactness of minutiae template has created an impression that the minutiae template does not contain sufficient information to allow the reconstruction of the original fingerprint image. This belief has now been shown to be false; several algorithms have been proposed that can reconstruct fingerprint images from minutiae templates. However, these reconstruction techniques have a common weak point: many spurious minutiae, not included in the original minutiae template, are generated in the reconstructed image. Moreover, some of these techniques can only reconstruct a partial fingerprint. In this paper, a novel fingerprint reconstruction algorithm is proposed, which not only reconstructs the whole fingerprint, but the reconstructed fingerprint contains very few spurious minutiae. A fingerprint image is modeled as a 2D Frequency Modulation (FM) signal whose phase consists of the continuous part and the spiral part (which corresponds to minutiae). An algorithm is proposed to reconstruct the continuous phase from minutiae. The proposed reconstruction algorithm has been evaluated with respect to the success rates of type-I attack (match the reconstructed fingerprint against the original fingerprint) and type-II attack (match the reconstructed fingerprint against the different impressions of the original fingerprint) using a commercial fingerprint recognition system. Both types of attacks were shown to be successful in deceiving the fingerprint system.

Jianjiang Feng, Anil K. Jain
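To make the FM model concrete: in this representation each minutia appears as a spiral in the phase, i.e. a point where the phase wraps by 2π. A minimal sketch of the spiral-phase term follows (the grid and minutia location are illustrative; the continuous-phase reconstruction, the paper's main algorithmic step, is not shown):

```python
import numpy as np

def spiral_phase(x, y, minutiae):
    """Spiral part of the FM phase: each minutia at (xm, ym) with
    polarity p = +/-1 contributes p * atan2(y - ym, x - xm), creating
    a 2*pi phase wrap (a spiral) at that point."""
    psi = np.zeros_like(x, dtype=float)
    for xm, ym, p in minutiae:
        psi += p * np.arctan2(y - ym, x - xm)
    return psi

# One minutia with positive polarity near the centre of a small grid
y, x = np.mgrid[0:8, 0:8].astype(float)
psi = spiral_phase(x, y, [(3.5, 3.5, +1)])
# cos(continuous phase + spiral phase) would render the ridge image
img = np.cos(psi)
print(img.shape)  # (8, 8)
```

In the full model the rendered image is cos of the sum of this spiral phase and the reconstructed continuous phase, which is what makes the reconstruction attack possible from a minutiae template alone.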

Robust Biometric System Using Palmprint for Personal Verification

This paper describes a prototype of a robust biometric system for verification. The system uses features of the human hand extracted with the Speeded Up Robust Features (SURF) operator. The hand image is acquired using a low-cost scanner, and the extracted palmprint region is robust to hand translation and rotation on the scanner. The system is tested on the IITK and PolyU databases. It has an FAR of 0.02%, an FRR of 0.01% and an accuracy of 99.98% at the original size. The system addresses robustness with respect to scale, rotation and occlusion of the palmprint, performing at an accuracy of more than 99% for scale, more than 98% for rotation, and more than 99% for occlusion. The robustness and accuracy suggest that it can be a suitable system for civilian and high-security environments.

G. S. Badrinath, Phalguni Gupta

Accurate Palmprint Recognition Using Spatial Bags of Local Layered Descriptors

State-of-the-art palmprint recognition algorithms achieve high accuracy based on component-based texture analysis. However, they are still sensitive to local variations of appearance introduced by deformation of skin surfaces or local contrast variations. To tackle this problem, this paper presents a novel palmprint representation named Spatial Bags of Local Layered Descriptors (SBLLD). This technique works by partitioning the whole palmprint image into sub-regions and describing the distributions of layered palmprint descriptors inside each sub-region. Through the procedure of partitioning and disordering, local statistical palmprint descriptions and spatial information of palmprint patterns are integrated to achieve accurate image description. Furthermore, to remove irrelevant and redundant attributes from the proposed feature representation, we apply a simple but efficient ranking-based feature selection procedure to construct a compact and descriptive statistical palmprint representation, which further improves the classification ability of the proposed method. Our idea is verified through a verification test on the large-scale PolyU Palmprint Database Version 2.0, and extensive experimental results testify to the efficiency of our proposed palmprint representation.

Yufei Han, Tieniu Tan, Zhenan Sun

Pose Invariant Palmprint Recognition

A palmprint-based authentication system that can work with a multi-purpose camera in uncontrolled circumstances, such as one mounted on a laptop or mobile device or used for surveillance, can dramatically increase the applicability of such a system. However, the performance of existing techniques for palmprint authentication falls considerably when the camera is not aligned with the surface of the palm. The problems arise primarily due to variations in appearance introduced by varying pose, but are compounded by the specularity of the skin and blur due to motion and focus. In this paper, we propose a method to deal with variations in pose in unconstrained palmprint imaging. The method can robustly estimate and correct variations in pose, and compute a similarity measure between the corrected test image and a reference image. Experimental results on a set of 100 users’ palms captured at varying poses show a reduction in Equal Error Rate from 22.4% to 8.7%.

Chhaya Methani, Anoop M. Namboodiri

Palmprint Recognition Based on Regional Rank Correlation of Directional Features

Automatic personal identification based on palmprints has come to be considered a promising technology in the biometrics family in recent years. In pursuit of accurate palmprint recognition, a key issue is to design a proper image representation to describe skin textures in palm regions. According to previous work, directional texture measurement provides a powerful tool for depicting palmprint appearance, and most successful approaches fall within this framework. Following this idea, we propose a novel palmprint representation that describes palmprint images by constructing rank correlation statistics of appearance patterns within local image areas. Promising experimental results on two large-scale palmprint databases demonstrate that the proposed method achieves even better performance than the state-of-the-art approaches.

Yufei Han, Zhenan Sun, Tieniu Tan, Ying Hao

Direct Pore Matching for Fingerprint Recognition

Sweat pores on fingerprints have proven to be useful features for personal identification, and several methods have been proposed for pore matching. The state-of-the-art method first matches minutiae on the fingerprints and then matches the pores based on the minutia matching results. A problem with such minutia-based pore matching is that the pore matching depends on the minutia matching; this dependency limits the pore matching performance and impairs the effectiveness of fusing minutia and pore match scores. In this paper, we propose a novel direct approach for matching fingerprint pores. It first determines the correspondences between pores based on their local features. It then uses the RANSAC (RANdom SAmple Consensus) algorithm to refine the pore correspondences obtained in the first step. A similarity score is finally calculated based on the pore matching results. The proposed pore matching method successfully avoids the dependency of pore matching on minutia matching results. Experiments have shown that fingerprint recognition accuracy can be greatly improved by using the proposed method.

Qijun Zhao, Lei Zhang, David Zhang, Nan Luo
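The RANSAC refinement step described above can be sketched as follows. This is a generic illustration, not the authors' implementation: it assumes the two pore sets are related by a 2-D similarity transform, and the function name, iteration count, and inlier threshold are all illustrative.

```python
import numpy as np

def ransac_similarity(src, dst, n_iter=200, thresh=5.0, seed=0):
    """Refine putative point correspondences with RANSAC.

    src, dst: (N, 2) arrays of matched pore coordinates (hypothetical input).
    Fits a 2-D similarity transform (scale, rotation, translation) from
    minimal 2-point samples and keeps the largest inlier set.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(src), size=2, replace=False)
        # Estimate the similarity from two point pairs via complex numbers:
        # w = a*z + b, where a encodes scale+rotation and b the translation.
        z1, z2 = src[i, 0] + 1j * src[i, 1], src[j, 0] + 1j * src[j, 1]
        w1, w2 = dst[i, 0] + 1j * dst[i, 1], dst[j, 0] + 1j * dst[j, 1]
        if z1 == z2:
            continue
        a = (w1 - w2) / (z1 - z2)
        b = w1 - a * z1
        z = src[:, 0] + 1j * src[:, 1]
        err = np.abs(a * z + b - (dst[:, 0] + 1j * dst[:, 1]))
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

A similarity score between two fingerprints could then be derived from the size of the returned inlier set.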

A Novel Fingerprint Matching Algorithm Using Ridge Curvature Feature

Fingerprint matching based solely on minutiae features ignores the abundant ridge information in fingerprint images. We propose a novel fingerprint matching algorithm that integrates minutiae features with a ridge curvature map (RCM). The RCM is approximated by a polynomial model computed with the Least Squares (LS) method. In the matching stage, a phase-only correlation matching method is employed to match two RCMs. A sum fusion rule is then used to combine the minutiae matching score and the RCM matching score. Experiments conducted on the FVC2002 and FVC2004 databases show that the proposed algorithm obtains more promising performance than a solely minutiae-based algorithm and several other multi-feature fusion algorithms.

Peng Li, Xin Yang, Qi Su, Yangyang Zhang, Jie Tian
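The phase-only correlation (POC) matching of two RCMs can be sketched generically; this is a standard POC implementation, not the authors' code, and the input sizes and the regularisation epsilon are illustrative:

```python
import numpy as np

def phase_only_correlation(f, g, eps=1e-8):
    """Phase-only correlation (POC) between two equal-size 2-D maps.

    Normalises the cross-power spectrum to unit magnitude so that only
    phase information contributes; the peak height of the result serves
    as a similarity score and the peak location gives the translation
    offset between the two maps.
    """
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = F * np.conj(G)
    poc = np.fft.ifft2(cross / (np.abs(cross) + eps))
    return np.real(poc)
```

For two identical maps related by a circular shift, the POC surface is a sharp delta peak at the shift, which is what makes the peak height a useful matching score.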

Fingerprint Matching Based on Neighboring Information and Penalized Logistic Regression

This paper proposes a novel minutiae-based fingerprint matching algorithm. A fingerprint is represented by its minutiae set and by sampling points on all ridges, so the foreground of the fingerprint image can be accurately estimated from the sampling points. The similarity between two minutiae is measured in two parts: neighboring minutiae, which differ in minutiae pattern, and neighboring sampling points, which differ in orientation and frequency. After alignment and minutiae pairing, nine features are extracted to represent the matching status, and penalized logistic regression (PLR) is adopted to calculate the matching score. The proposed algorithm is evaluated on the FVC2002 fingerprint databases and compared with the participants in FVC2002. Experimental results show that the proposed algorithm achieves good performance and ranks 5th according to average equal error rate.

Kai Cao, Xin Yang, Jie Tian, Yangyang Zhang, Peng Li, Xunqiang Tao
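The PLR scoring step can be sketched as a small L2-penalized logistic regression trained by gradient descent. In the paper's setting the inputs would be the nine match features; everything below (function names, synthetic data, hyperparameters) is illustrative, not the authors' implementation:

```python
import numpy as np

def fit_plr(X, y, lam=0.1, lr=0.1, n_iter=2000):
    """L2-penalized logistic regression trained by plain gradient descent.

    X: (n, d) feature vectors describing match attempts; y: 0/1 labels
    (impostor/genuine). Returns weights and bias; the sigmoid output is
    then used directly as a matching score.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad_w = X.T @ (p - y) / n + lam * w  # penalty term lam*w
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def match_score(x, w, b):
    """Matching score in (0, 1) for one feature vector."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))
```

The L2 penalty keeps the weights finite even on perfectly separable training data, which is the practical reason for preferring PLR over plain logistic regression here.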

A Novel Region Based Liveness Detection Approach for Fingerprint Scanners

Biometric scanners have become widely popular in securing information technology and entry to otherwise sensitive locations. However, these systems have been proven to be vulnerable to spoofing, i.e., granting entry to an imposter using a fake finger. While matching algorithms are highly successful in identifying the unique fingerprint biometric of an individual, they lack the ability to determine whether the source of the image is a living individual or a fake finger made of Play-Doh, silicone, gelatin, or other material. Liveness detection is one method in which physiological traits are identified to ensure that the image received by the scanner comes from a living source. In this paper, a new algorithm for the detection of perspiration is proposed. The method quantifies perspiration via region labeling, a simple computer vision technique, and is capable of extracting observable trends in live and spoof images, generally relating to differences in the number and size of identifiable regions per contour along a ridge or valley segment. The approach was tested on an optical fingerprint scanner, the Identix DFR2100. The dataset includes a total of 1526 live and 1588 spoof fingerprints from over 150 unique individuals with multiple visits. Performance was evaluated with a neural network classifier, and the results are compared to previous studies using intensity-based ridge and valley liveness detection. The results yield excellent classification, achieving overall classification rates greater than 95.5%. Implementation of this liveness detection method can greatly improve the security of fingerprint scanners.

Brian DeCann, Bozhao Tan, Stephanie Schuckers
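The region-labeling step this method relies on is ordinary connected-component labeling. A minimal 4-connected routine is sketched below; it is illustrative, not the authors' code, and the full perspiration measure would additionally relate the labeled regions to contours along ridge and valley segments:

```python
import numpy as np
from collections import deque

def label_regions(binary):
    """4-connected component labeling of a binary image.

    Returns (labels, count), where labels assigns 1..count to foreground
    pixels. Region counts and sizes can then serve as liveness features.
    """
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for i, j in zip(*np.nonzero(binary)):
        if labels[i, j]:
            continue
        count += 1
        q = deque([(i, j)])
        labels[i, j] = count
        while q:  # breadth-first flood fill of one component
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    q.append((ny, nx))
    return labels, count

def region_sizes(labels, count):
    """Pixel count of each labeled region."""
    return [int((labels == k).sum()) for k in range(1, count + 1)]
```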

Focal Point Detection Based on Half Concentric Lens Model for Singular Point Extraction in Fingerprint

A focal point is a kind of singular point, closely related to a core point, that can be derived from the curvature of fingerprint ridges and valleys. The focal point is expected to be more reliable than the core point for low quality fingerprints. This paper proposes a new efficient focal point localization method based on a half concentric lens model. The directionally adaptive half concentric lens window rapidly accelerates convergence of the focal point localization process. Moreover, a concentric lens similarity factor is introduced to measure the orientation and stability of an extracted focal point. Experimental results show that the proposed scheme outperforms most singular point detection schemes in the literature in terms of location accuracy and consistency. As for computational complexity, the algorithm requires an average execution time of 75 milliseconds from the original fingerprint to a unique focal point.

Natthawat Boonchaiseree, Vutipong Areekul

Robust Fingerprint Matching Using Spiral Partitioning Scheme

Fingerprint matching for low quality or partial fingerprint images is very challenging, mainly because features such as minutia points cannot be extracted reliably. Partial fingerprint images captured using solid state sensors may not contain a sufficient number of minutia points. In this paper, we introduce a novel fingerprint representation that combines information from each extracted minutia with the detected ridges in its neighborhood. The proposed algorithm first enhances a fingerprint image and generates a binary image. Then, instead of using thinning-based algorithms, the ridges are extracted using a chaincode scheme, which retains the original thickness of the ridges and precise local orientations. The minutia points are detected by tracing the ridge lines. Finally, enriched local structural features are built for each minutia by spiral coding of the ridge line orientations around the minutia. The new features are translation and rotation invariant; each feature vector represents a minutia and its neighboring ridge structures. Matching of two fingerprints is performed by calculating the Euclidean distances between pairs of corresponding feature vectors. Preliminary experiments show that the proposed algorithm is effective.

Zhixin Shi, Venu Govindaraju

Performance and Computational Complexity Comparison of Block-Based Fingerprint Enhancement

Performance and computational complexity comparisons of various block-based fingerprint enhancement schemes are reported in this paper. Enhancement performance is evaluated by comparing equal error rates, which are obtained by a proposed fingerprint matching algorithm using local and global features. Several enhancement methods are tested: three types of spatial Gabor filtering, short-time Fourier transform filtering, and discrete cosine transform filtering. These enhancement schemes are also tested on several databases, namely FVC2000, FVC2002, and FVC2004. Finally, the computational complexity of each enhancement implementation is analyzed.

Suksan Jirachaweng, Teesid Leelasawassuk, Vutipong Areekul

Reference Point Detection for Arch Type Fingerprints

Reference point detection is an important task in the design of an automated fingerprint identification system. Many algorithms have emerged with acceptable results, but most are suitable only for non-arch type fingerprints; reliably identifying reference points for arch type fingerprints remains a challenging problem. A topological method is presented in this paper to detect reference points in arch type fingerprint images. To evaluate the performance, 400 arch type fingerprint image pairs in the NIST DB4 database are utilized. The alignment accuracy on average is about 35 pixels in distance and 9 degrees in orientation, which compares well with state-of-the-art methods designed for non-arch type fingerprints.

H. K. Lam, Z. Hou, W. Y. Yau, T. P. Chen, J. Li, K. Y. Sim

Palmprint Verification Using Circular Gabor Filter

Recently, researchers have paid considerable attention to the palmprint biometric, which has gained popularity and prominence due to its high stability and uniqueness. In this study, two filters are considered, namely the Gabor filter and the circular Gabor filter, which are used to extract feature information from two distinct regions of interest (square and inscribed-circle central sub-images), and two palmprint images are compared in terms of their Hamming distance. Experimental results indicate that the circular Gabor filter performs better than the traditional one in extracting distinctive feature information.

Azadeh Ghandehari, Reza Safabakhsh
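A circular Gabor kernel, i.e., a Gaussian envelope modulated by a sinusoid of the radial distance rather than of one direction, can be sketched as below. Kernel size, sigma, and frequency are illustrative values, not those of the paper, and the Hamming-distance comparison of binarised responses is shown in minimal form:

```python
import numpy as np

def circular_gabor_kernel(size=17, sigma=3.0, freq=0.2):
    """Circular Gabor kernel: a Gaussian envelope modulated by a complex
    sinusoid of the radial distance, which makes the filter rotation
    invariant (unlike a conventional, oriented Gabor filter)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.sqrt(x**2 + y**2)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * r)

def hamming_distance(code_a, code_b):
    """Normalised Hamming distance between two binary feature codes."""
    return np.mean(code_a != code_b)
```

Rotation invariance of the kernel is easy to check: rotating the kernel by 90 degrees leaves it unchanged, since it depends only on the radius.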

Kernel Principal Component Analysis of Gabor Features for Palmprint Recognition

This paper presents a Gabor-based kernel Principal Component Analysis (KPCA) method that integrates the Gabor wavelet and KPCA for palmprint recognition. The intensity values of the palmprint images, extracted using an image preprocessing method, are first normalized. Gabor wavelets are then applied to derive desirable palmprint features. The transformed palm images exhibit strong characteristics of spatial locality, scale, and orientation selectivity. The KPCA method nonlinearly maps the Gabor wavelet image into a high-dimensional feature space, and matching is realized by weighted Euclidean distance. The proposed algorithm has been successfully tested on the PolyU palmprint database, whose samples were collected in two different sessions. Experimental results show that this method achieves 97.22% accuracy on the PolyU dataset, using 3850 images from 385 different palms captured in the first session as the training set and the second-session images as the test set.

Murat Aykut, Murat Ekinci
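At its core, the Gabor-plus-KPCA pipeline is kernel PCA over feature vectors. A minimal RBF-kernel PCA in plain numpy is sketched below; in the paper the inputs would be Gabor features of palm images, whereas here X is any (n, d) array, and gamma and the component count are illustrative choices:

```python
import numpy as np

def kernel_pca_fit(X, n_components=2, gamma=0.1):
    """Kernel PCA with an RBF kernel, implemented directly with numpy.

    Returns the projections of the n training samples onto the top
    principal components in the (implicit) high-dimensional feature space.
    """
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Centre the kernel matrix in feature space.
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues first
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # normalised coefficients
    return Kc @ alphas
```

Matching would then compare the projections of gallery and probe samples, e.g., by the weighted Euclidean distance mentioned above.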

Latent Fingerprint Matching: Fusion of Rolled and Plain Fingerprints

Law enforcement agencies routinely collect both rolled and plain fingerprints of all ten fingers of suspects. These two types of fingerprints complement each other: rolled fingerprints are larger and contain more minutiae, while plain fingerprints are less affected by distortion and have clearer ridge structure. It is widely known in the law enforcement community that searching both rolled and plain fingerprints can improve the accuracy of latent matching, but this does not appear to be common practice. To our knowledge, only a rank-level fusion option is provided by vendors, and there has been no systematic study and comparison of different fusion techniques. In this paper, multiple fusion approaches at three different levels (rank, score, and feature) are proposed to fuse rolled and plain fingerprints. Experimental results from searching 230 latents in the NIST SD27 against a database of 4,180 pairs of rolled and plain fingerprints show that most of the fusion approaches can improve identification performance. The greatest improvement was obtained by boosted max fusion at the score level, which reaches a rank-1 identification rate of 83.0%, compared to rank-1 rates of 57.8% for plain and 70.4% for rolled prints.

Jianjiang Feng, Soweon Yoon, Anil K. Jain

Biometric Competitions

Overview of the Multiple Biometrics Grand Challenge

The goal of the Multiple Biometrics Grand Challenge (MBGC) is to improve the performance of face and iris recognition technology from biometric samples acquired under unconstrained conditions. The MBGC is organized into three challenge problems. Each challenge problem relaxes the acquisition constraints in different directions. In the Portal Challenge Problem, the goal is to recognize people from near-infrared (NIR) and high definition (HD) video as they walk through a portal. Iris recognition can be performed from the NIR video and face recognition from the HD video. The availability of NIR and HD modalities allows for the development of fusion algorithms. The Still Face Challenge Problem has two primary goals. The first is to improve recognition performance from frontal and off angle still face images taken under uncontrolled indoor and outdoor lighting. The second is to improve recognition performance on still frontal face images that have been resized and compressed, as is required for electronic passports. In the Video Challenge Problem, the goal is to recognize people from video in unconstrained environments. The video is unconstrained in pose, illumination, and camera angle. All three challenge problems include a large data set, experiment descriptions, ground truth, and scoring code.

P. Jonathon Phillips, Patrick J. Flynn, J. Ross Beveridge, W. Todd Scruggs, Alice J. O’Toole, David Bolme, Kevin W. Bowyer, Bruce A. Draper, Geof H. Givens, Yui Man Lui, Hassan Sahibzada, Joseph A. Scallan, Samuel Weimer

Face Video Competition

Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, due to the widespread use of web-cams and mobile devices embedded with a camera, it is now possible to realise facial video recognition, rather than resorting to just still images. In fact, facial video recognition offers many advantages over still image recognition; these include the potential of boosting the system accuracy and deterring spoof attacks. This paper presents the first known benchmarking effort of person identity verification using facial video data. The evaluation involves 18 systems submitted by seven academic institutes.

Norman Poh, Chi Ho Chan, Josef Kittler, Sébastien Marcel, Christopher Mc Cool, Enrique Argones Rúa, José Luis Alba Castro, Mauricio Villegas, Roberto Paredes, Vitomir Štruc, Nikola Pavešić, Albert Ali Salah, Hui Fang, Nicholas Costen

Fingerprint and On-Line Signature Verification Competitions at ICB 2009

This paper describes the objectives, the tasks proposed to the participants, and the associated protocols, in terms of databases and assessment tools, of two competitions on fingerprints and on-line signatures. The particularity of the fingerprint competition is that it is an on-line competition for the evaluation of fingerprint verification tools, such as minutiae extractors and matchers, as well as complete systems. This competition will be officially launched during the ICB conference. The on-line signature competition will test the influence of multiple sessions, environmental conditions (still and mobility), and signature complexity on the performance of complete systems, using two datasets extracted from the BioSecure database. Its results will be presented during the ICB conference.

Bernadette Dorizzi, Raffaele Cappelli, Matteo Ferrara, Dario Maio, Davide Maltoni, Nesma Houmani, Sonia Garcia-Salicetti, Aurélien Mayoue

Partial Face Matching between Near Infrared and Visual Images in MBGC Portal Challenge

The latest multi-biometric grand challenge (MBGC 2008) sets up a new experiment in which near infrared (NIR) face videos containing partial faces are used as a probe set and the visual (VIS) images of full faces are used as the target set. This is challenging for two reasons: (1) it has to deal with partially occluded faces in the NIR videos, and (2) the matching is between heterogeneous NIR and VIS faces. Partial face matching is also a problem often confronted in many video based face biometric applications.

In this paper, we propose a novel approach for solving this challenging problem. For partial face matching, we propose a local patch based method to deal with partial face data. For heterogeneous face matching, we propose the philosophy of enhancing common features in heterogeneous images while reducing differences. This is realized by using edge-enhancing filters, which are at the same time also beneficial for partial face matching. The approach requires neither learning procedures nor training data. Experiments are performed using the MBGC portal challenge data, comparing with several known state-of-the-art methods. Extensive results show that the proposed approach, without knowing the statistical characteristics of the subjects or data, significantly outperforms the comparison methods, with ten-fold higher verification rates at an FAR of 0.1%.

Dong Yi, Shengcai Liao, Zhen Lei, Jitao Sang, Stan Z. Li

Multibiometrics and Security

Fusion in Multibiometric Identification Systems: What about the Missing Data?

Many large-scale biometric systems operate in the identification mode and include multimodal information. While biometric fusion is a well-studied problem, most of the fusion schemes have been implicitly designed for the verification scenario and cannot account for missing data (missing modalities or incomplete score lists) that is commonly encountered in multibiometric identification systems. In this paper, we show that likelihood ratio-based score fusion, which was originally designed for verification systems, can be extended for fusion in the identification scenario under certain assumptions. We further propose a Bayesian approach for consolidating ranks and a hybrid scheme that utilizes both ranks and scores to perform fusion in identification systems. We also demonstrate that the proposed fusion rules can handle missing information without any ad-hoc modifications. We observe that the recognition performance of the simplest rank level fusion scheme, namely, the highest rank method, is comparable to the performance of complex fusion strategies, especially when the goal is not to obtain the best rank-1 accuracy but to just retrieve the top few matches.

Karthik Nandakumar, Anil K. Jain, Arun Ross
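The highest rank method, whose robustness with missing data the paper highlights, can be sketched in a few lines. The dict-based representation of incomplete rank lists is an illustrative choice, not the paper's formulation:

```python
def highest_rank_fusion(rank_lists, n_identities):
    """Highest-rank fusion across matchers, tolerating missing lists.

    rank_lists: one dict per matcher mapping identity -> rank (1 = best).
    An identity absent from a matcher's dict is simply ignored for that
    matcher, which is how missing modalities / incomplete score lists are
    handled in this sketch. Each identity's fused rank is the best
    (minimum) rank it achieves across the available matchers.
    """
    fused = {}
    for ident in range(n_identities):
        ranks = [rl[ident] for rl in rank_lists if ident in rl]
        fused[ident] = min(ranks) if ranks else n_identities + 1
    # Return identities ordered by fused rank (ties broken by index).
    return sorted(fused, key=lambda i: (fused[i], i))
```

Because the fused rank is a minimum over whatever matchers are available, an identity missing from one matcher's list is penalised only by losing that matcher's vote, with no ad-hoc imputation needed.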

Challenges and Research Directions for Adaptive Biometric Recognition Systems

Biometric authentication using mobile devices is becoming a convenient and important means of securing access to remote services such as telebanking and electronic transactions. Such an application poses a very challenging pattern recognition problem: the training samples are often sparse and cannot fully represent the biometrics of a person, and the query features are easily affected by the acquisition environment, the user's accessories, occlusions, and aging. Semi-supervised learning, i.e., learning from the query/test data, can be a means of tapping the vast unlabeled training data. While there is evidence that semi-supervised learning can work in text categorization and biometrics, its application on mobile devices remains a great challenge. As a preliminary yet indispensable study towards the goal of semi-supervised learning, we analyze the following sub-problems: model adaptation, update criteria, inference with several models, and user-specific time-dependent performance assessment, and we explore possible solutions and research directions.

Norman Poh, Rita Wong, Josef Kittler, Fabio Roli

Modelling FRR of Biometric Verification Systems Using the Template Co-update Algorithm

The decrease in representativeness of available templates over time is due to the large intra-class variations characterizing biometrics (e.g., faces). This requires the design of algorithms able to make biometric verification systems adaptive to such variations. Among others, the template co-update algorithm, which uses the mutual help of two complementary biometric matchers, has shown promising experimental results. The present paper describes a theoretical model able to explain the co-update behaviour; in particular, the focus is on the relationship between the error rate and the increase in gallery size. Preliminary experimental results are shown to validate the proposed model.

Luca Didaci, Gian Luca Marcialis, Fabio Roli

Bipartite Biotokens: Definition, Implementation, and Analysis

Cryptographic transactions form the basis of many common security systems found throughout computer networks. Supporting these transactions with biometrics is very desirable, as stronger non-repudiation is introduced, along with enhanced ease-of-use. In order to support such transactions, some sort of secure template construct is required that, when re-encoded, can release session-specific data. The construct we propose for this task is the bipartite biotoken. In this paper, we define the bipartite biotoken, describe its implementation for fingerprints, and present an analysis of its security. No other technology exists with the critical reissue and secure embedding properties of the bipartite biotoken. Experimental results for matching accuracy are presented for the FVC2002 data set, with imposter testing on 750 million matches.

W. J. Scheirer, T. E. Boult

Fusion of LSB and DWT Biometric Watermarking Using Offline Handwritten Signature for Copyright Protection

Biometric watermarking was introduced as the synergistic integration of biometrics and digital watermarking technology. This paper proposes a novel biometric watermarking technique, which embeds offline handwritten signature in host image for copyright protection. We propose to combine the conventional LSB-based and DWT-based watermarking techniques into a unison framework, which is known as LSB-DWT in this paper. The proposed LSB-DWT technique is evaluated against various simulated security attacks, i.e. JPEG compression, Gaussian low-pass filtering, median filtering, Gaussian noise, scaling, rotation and cropping. The experimental results demonstrate that the proposed LSB-DWT technique exhibits remarkable watermark imperceptibility and watermark robustness.

Cheng-Yaw Low, Andrew Beng-Jin Teoh, Connie Tee

Audio-Visual Identity Verification and Robustness to Imposture

The robustness of talking-face identity verification (IV) systems is best evaluated by monitoring their behavior under impostor attacks. We propose a scenario where the impostor uses a still face picture and a sample of speech of the genuine client to transform his/her speech and visual appearance into that of the target client. We propose MixTrans, an original text-independent technique for voice transformation in the cepstral domain, which allows a transformed audio signal to be estimated and reconstructed in the temporal domain. We also propose a face transformation technique that allows a frontal face image of a client to be animated, using principal warps to deform defined MPEG-4 facial feature points based on determined facial animation parameters. The robustness of the talking-face IV system is evaluated under these attacks. Results on the BANCA talking-face database clearly show that such attacks represent a serious challenge and a security threat to IV systems.

Walid Karam, Chafic Mokbel, Hanna Greige, Gérard Chollet

Theoretical Framework for Constructing Matching Algorithms in Biometric Authentication Systems

In this paper, we propose a theoretical framework for constructing matching algorithms for any biometric authentication system. Conventional matching algorithms are not necessarily secure against strong intentional impersonation attacks such as wolf attacks. A wolf attack is an attempt to impersonate a genuine user by presenting a "wolf" to a biometric authentication system without knowledge of the genuine user's biometric sample; a "wolf" is a sample that can be accepted as a match with multiple templates. The wolf attack probability (WAP) is the maximum success probability of the wolf attack, proposed by Une, Otsuka, and Imai as a measure for evaluating the security of biometric authentication systems [UOI1], [UOI2]. We present a principle for the construction of matching algorithms secure against the wolf attack for any biometric authentication system. The ideal matching algorithm determines a threshold for each input value depending on the entropy of the probability distribution of the (Hamming) distances. We then show that if the information about the probability distribution for each input value is perfectly given, our matching algorithm is secure against the wolf attack. Our generalized matching algorithm thus gives a theoretical framework for constructing secure matching algorithms. How low a WAP is achievable depends on how accurately the entropy is estimated, so there is a trade-off between efficiency and the achievable WAP. Almost every conventional matching algorithm employs a fixed threshold and hence can be regarded as an efficient but insecure instance of our theoretical framework. Daugman's algorithm proposed in [Da2] can also be regarded as a non-optimal instance of our framework.

Manabu Inuma, Akira Otsuka, Hideki Imai

A Biometric Menagerie Index for Characterising Template/Model-Specific Variation

An important phenomenon influencing the performance of a biometric experiment, attributed to Doddington et al. (1998), is that the match scores (whether under genuine or impostor matching) are strongly dependent on the model or template from which they have been derived. Although there exist studies classifying the characteristics of the template/model, as well as the query data, into animal names such as sheep, goats, wolves, and lambs (so-called Doddington's menagerie), or into higher semantic categories considering both genuine and impostor match scores simultaneously, due to Yager and Dunstone (2008), there is currently no means of characterising the extent of Doddington's menagerie. This paper aims to design such an index, called the biometric menagerie index (BMI). It is defined as the ratio of the between-client variance to the expectation of the total variance. BMI has three desirable properties. First, it is invariant to shifting and scaling of the match scores. Second, its value lies between zero and one, with zero implying the absence of Doddington's menagerie effect and one signifying its strong presence. Third, it is experimentally verified that BMI generalizes to different choices of impostor population. Our findings based on the XM2VTS benchmark score database suggest the following: First, the BMI of genuine match scores is generally higher than that of the impostor match scores. Second, two different matching algorithms observing the same biometric data may have significantly different BMI values, suggesting that the biometric menagerie is algorithm-dependent.

Norman Poh, Josef Kittler
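The stated definition of BMI can be sketched directly. The pooling below assumes an equal number of scores per template/model (so that the ratio stays in [0, 1] by the law of total variance) and reads "expectation of the total variance" as the variance of the pooled scores, which is one plausible interpretation rather than the paper's exact estimator:

```python
import numpy as np

def biometric_menagerie_index(scores_per_model):
    """Ratio of the between-model variance of mean match scores to the
    variance of all scores pooled together. 0 means no model-specific
    (menagerie) effect; 1 means the scores are fully determined by the
    model they come from. Shift- and scale-invariant by construction,
    since both numerator and denominator are variances of the scores.
    """
    means = np.array([np.mean(s) for s in scores_per_model])
    pooled = np.concatenate([np.asarray(s, float) for s in scores_per_model])
    total_var = np.var(pooled)
    if total_var == 0:
        return 0.0
    return float(np.var(means) / total_var)
```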

An Usability Study of Continuous Biometrics Authentication

We present a usability study for a bi-modality Continuous Biometrics Authentication System (CBAS) that runs on the Windows platform. Our CBAS combines fingerprint and facial biometrics to authenticate users. As authentication is continuous, the CBAS constantly contributes a computational overhead of up to 42% to the computer system. This usability study investigates (a) whether this overhead has an impact on the ability of users to complete tasks; and (b) whether users deem the responsiveness of the system acceptable. The results of our study are encouraging, indicating that the runtime cost of a CBAS has no statistically measurable impact on task completion by users. We found user acceptance of the CBAS to be good, and users did not perceive the CBAS to degrade system response. This suggests that continuous biometrics for authentication is viable: the benefits of the CBAS outweigh its system-impact drawbacks.

Geraldine Kwang, Roland H. C. Yap, Terence Sim, Rajiv Ramnath

A First Approach to Contact-Based Biometrics for User Authentication

This paper presents the concept of contact-based biometric features, which are behavioral biometric features related to the dynamic manipulation of objects that exist in the surrounding environment. The motivation behind the proposed features derives from activity-related biometrics and the extension of them to activities involving objects. The proposed approach exploits methods from different scientific fields, such as virtual reality, collision detection and pattern classification and is applicable to user authentication systems. Experimental results in a data-set of 20 subjects show that the introduced features comprise a very efficient and interesting approach in the research of biometric features.

Athanasios Vogiannou, Konstantinos Moustakas, Dimitrios Tzovaras, Michael G. Strintzis

Template Update Methods in Adaptive Biometric Systems: A Critical Review

Template representativeness is a fundamental problem in a biometric recognition system. The performance of the system degrades if the enrolled templates are unrepresentative of the substantial intra-class variations encountered in the input biometric samples. Recently, several template update methods based on supervised and semi-supervised learning have been proposed in the literature, aiming to adapt the enrolled templates to the intra-class variations of the input data. However, the state of the art in template update is still in its infancy. This paper presents a critical review of current approaches to template updating, in order to analyze the state of the art in terms of the advancement reached and the open issues that remain.

Ajita Rattani, Biagio Freni, Gian Luca Marcialis, Fabio Roli

Simulating the Influences of Aging and Ocular Disease on Biometric Recognition Performance

Many applications of ocular biometrics require long-term stability, yet only limited data on the effects of disease and aging on the error rates of ocular biometrics is currently available. Based on pathologies simulated using image manipulation validated by ophthalmology and optometry specialists, the present paper reports the effects that selected common ocular diseases and age-related pathologies have on the recognition performance of two widely used iris and retina recognition algorithms, finding the algorithms to be robust against many even highly visible pathologies and permitting acceptable re-enrolment intervals for most disease progressions.

Halvor Borgen, Patrick Bours, Stephen D. Wolthusen

Cancelable Biometrics with Perfect Secrecy for Correlation-Based Matching

In this paper, we propose a novel method of cancelable biometrics for correlation-based matching. The biometric image is transformed by the Number Theoretic Transform (a Fourier-like transform over a finite field), and the transformed data is then masked with a random filter. By applying a particular kind of masking technique, the correlation between the registered image and the input image can be computed in the masked (i.e., encrypted) domain without knowledge of the original images. We prove theoretically that the masked version leaks no information about the original image; in other words, the proposed method has perfect secrecy. Additionally, we applied the method to finger-vein pattern verification and experimentally obtained very high verification performance.

Shinji Hirata, Kenta Takahashi
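The masked-domain correlation mechanism can be illustrated with the complex DFT standing in for the Number Theoretic Transform. This sketch shows how the random filter cancels so that the correlation is computable without the plain images, but it does not reproduce the finite-field construction on which the perfect-secrecy proof rests; all function names are illustrative:

```python
import numpy as np

def make_mask(shape, seed):
    """Random unit-magnitude frequency-domain mask, playing the role of
    the random filter (a stand-in, not the paper's construction)."""
    rng = np.random.default_rng(seed)
    return np.exp(2j * np.pi * rng.random(shape))

def enroll(image, mask):
    """Masked template stored at enrolment: F(x) * R."""
    return np.fft.fft2(image) * mask

def probe(image, mask):
    """Masked query presented at verification: conj(F(y)) / R."""
    return np.conj(np.fft.fft2(image)) / mask

def masked_correlation(template, query):
    """Cross-correlation computed entirely in the masked domain: the
    masks cancel in the product, so neither plain image is exposed."""
    return np.real(np.fft.ifft2(template * query))
```

The product of template and query equals F(x) * conj(F(y)), so the result is exactly the plain cross-correlation, with the correlation peak still locating the shift between the two images.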

An Information Theoretic Framework for Biometric Security Systems

An information theoretic framework is established to analyze the performance of biometric security systems. Two performance metrics, namely privacy, measured by the normalized equivocation rate of the biometric measurements, and security, measured by the rate of the key generated from the biometric measurements, are first defined. A fundamental tradeoff between these two metrics is then identified. The scenario in which a potential attacker does not have side information is considered first. The privacy-security region, which characterizes the above-noted tradeoff, is derived for this case. An important role of common information among random variables is revealed in perfect privacy biometric security systems. The scenario in which the attacker has side information is then considered. Inner and outer bounds on the privacy-security tradeoff are derived in this case.
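Under one standard notation (the symbols below are our assumption, not taken from the paper), with biometric measurement sequence $B^n$, publicly stored data $V$, and generated key $K$, the two metrics can be written as:

```latex
\text{privacy: } \Delta = \frac{H(B^n \mid V)}{H(B^n)},
\qquad
\text{security: } R = \frac{1}{n} H(K)
```

Perfect privacy corresponds to $\Delta = 1$, and the paper characterizes the region of jointly achievable $(\Delta, R)$ pairs.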

Lifeng Lai, Siu-Wai Ho, H. Vincent Poor

Constructing Passwords from Biometrical Data

We propose a probabilistic model for constructing passwords on the basis of outcomes of biometrical measurements. An algorithm for the transformation of biometrical data to passwords is given. The performance of the authentication scheme is evaluated by the compression factor, the false acceptance/rejection rates, the probability distribution over the set of passwords, and the probability of a correct guess of the input biometrical data mapped to the known password. An application of the results to DNA measurements is presented.

Vladimir B. Balakirsky, Anahit R. Ghazaryan, A. J. Han Vinck

Efficient Biometric Verification in Encrypted Domain

Biometric authentication over public networks raises a variety of privacy issues that need to be addressed before it can become popular. The primary concerns are that the biometrics might reveal more information than the identity itself, and that they provide the ability to track users over an extended period of time. In this paper, we propose an authentication protocol that alleviates these concerns. The protocol takes care of user privacy, template protection and trust issues in biometric authentication systems. It uses asymmetric encryption and retains the advantages of biometric authentication. The protocol provides non-repudiable identity verification while not revealing any additional information about the user to the server or vice versa. We show that the protocol is secure under various attacks. Experimental results indicate that the overall method is efficient enough for use in practical scenarios.

Maneesh Upmanyu, Anoop M. Namboodiri, K. Srinathan, C. V. Jawahar

A New Approach for Biometric Template Storage and Remote Authentication

In this paper, we propose a new remote biometric-based authentication scheme designed for distributed systems with a central database for the storage of biometric data. For our scheme, we consider the recently introduced security notions of identity and transaction privacy and present a different storage mechanism for biometrics that reduces database storage cost. Moreover, the components of the system do not need to store any biometric template in cleartext or in encrypted form, which positively affects the social acceptance of the system. Finally, we compare our results with existing schemes satisfying the current security notions and achieve improved computational complexity.

Neyire Deniz Sarier

A Biometric Key-Binding and Template Protection Framework Using Correlation Filters

We describe a new framework to bind cryptographic keys with biometric signatures using correlation filters. This scheme combines correlation filter based biometric recognition with biometric key-binding while offering template protection, revocability, diversity and security. We demonstrate the effectiveness of our scheme via numerical results on the CMU-PIE face database.

Vishnu Naresh Boddeti, Fei Su, B. V. K. Vijaya Kumar

Security-Enhanced Fuzzy Fingerprint Vault Based on Minutiae’s Local Ridge Information

Fuzzy vault is a practical and promising fingerprint template protection technology. However, the scheme has some security issues, of which cross-matching between different vaults may be the most serious. In this paper, we develop an improved version of the fuzzy vault that integrates the local ridge orientation information of minutiae. The improved fuzzy fingerprint vault, to some extent a two-factor authentication scheme, can effectively prevent cross-matching between different fingerprint vaults. Experimental results show that the vault can be cracked if and only if both the fingerprint and the password of a user are obtained by the attacker. Results under three scenarios indicate that, although the authentication performance of Scenario 1 decreases slightly in terms of GAR, the security of Scenarios 2 and 3, and hence of the whole scheme, is greatly enhanced.

Peng Li, Xin Yang, Kai Cao, Peng Shi, Jie Tian

Systematic Construction of Iris-Based Fuzzy Commitment Schemes

As a result of the growing interest in biometrics, a new field of research has emerged, entitled Biometric Cryptosystems. Only a small amount of work, which additionally tends to be custom-built for a specific application context, has been published in this area. This work provides a systematic treatment of how to construct biometric cryptosystems based on iris biometrics. A cryptographic primitive called the Fuzzy Commitment Scheme is adapted to different types of iris recognition algorithms to hide and retrieve a cryptographic key in and out of a biometric template. Experimental results confirm the soundness of the approach.
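A fuzzy commitment scheme can be sketched in a few lines. The following is a minimal illustration under our own assumptions, not the paper's construction: it binds a key to a binary template with a simple repetition code (real iris systems use stronger codes such as BCH or Reed-Muller), and a hash of the key serves as the commitment witness; `REP`, `commit`, and `decommit` are hypothetical names:

```python
import hashlib
import numpy as np

REP = 5  # repetition factor; tolerates fewer than REP/2 bit errors per key bit

def commit(key_bits, biometric_bits):
    codeword = np.repeat(key_bits, REP)      # encode key with a repetition code
    helper = codeword ^ biometric_bits       # XOR-bind the codeword to the template
    witness = hashlib.sha256(key_bits.tobytes()).hexdigest()
    return helper, witness                   # safe to store; key not revealed

def decommit(helper, probe_bits, witness):
    noisy = helper ^ probe_bits              # noisy codeword if probe is close
    key_bits = (noisy.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)
    ok = hashlib.sha256(key_bits.tobytes()).hexdigest() == witness
    return key_bits if ok else None
```

A probe with few enough bit errors per group reproduces the key exactly; a heavily corrupted probe fails the hash check and is rejected.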

Christian Rathgeb, Andreas Uhl

Parallel versus Serial Classifier Combination for Multibiometric Hand-Based Identification

This paper presents an approach for optimizing both the recognition and the processing performance of a biometric system in identification mode. Multibiometric techniques help bridge the gap between desired performance and current unimodal recognition rates. However, traditional parallel classifier combination techniques, such as score sum, Borda count and highest rank, introduce further processing overhead, as they require matching the extracted sample against each template of the system for each feature. We examine a framework of serial combination techniques, which exploits the ranking capabilities of individual features by reducing the set of possible matching candidates at each iteration, and we compare its performance with parallel schemes. Using this technique, both a reduction of misclassification and of processing time in identification mode are shown to be feasible for a single-sensor hand-based biometric system.
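The serial idea — each feature's matcher only re-ranks the candidates that survived the previous stage — can be sketched as follows. The API and shortlist sizes here are our illustrative assumptions, not the paper's:

```python
def serial_identification(probe, templates, matchers, shortlist=(50, 10, 1)):
    """Each stage ranks the surviving candidates with one feature's matcher
    (higher score = better match) and keeps only the best-scoring ones, so
    later, costlier matchers run on far fewer templates than in a parallel
    combination that matches every template with every feature."""
    candidates = list(templates)
    for matcher, keep in zip(matchers, shortlist):
        scored = sorted(candidates, key=lambda t: matcher(probe, t), reverse=True)
        candidates = scored[:keep]
    return candidates[0] if candidates else None
```

With three matchers and shortlist (50, 10, 1), the second matcher runs 50 comparisons and the third only 10, instead of the full gallery size each.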

Andreas Uhl, Peter Wild

Robust Multi-modal and Multi-unit Feature Level Fusion of Face and Iris Biometrics

Multi-biometrics has recently emerged as a means of more robust and efficient personal verification and identification. By exploiting information from multiple sources at various levels, i.e., feature, score, rank or decision, the false acceptance and rejection rates can be considerably reduced. Among these, feature level fusion is a relatively understudied problem. This paper addresses the feature level fusion of multi-modal and multi-unit sources of information. For multi-modal fusion the face and iris biometric traits are considered, while multi-unit fusion is applied to merge the data from the left and right iris images. The proposed approach computes SIFT features from both biometric sources, either multi-modal or multi-unit. For each source, feature selection on the extracted SIFT features is performed via spatial sampling. The selected features are then concatenated into a single feature super-vector using serial fusion, and this concatenated super feature vector is used to perform classification.

Experimental results from standard face and iris biometric databases are presented. The reported results clearly show the performance improvements in classification obtained by applying feature level fusion for both multi-modal and multi-unit biometrics in comparison to uni-modal classification and score level fusion.

Ajita Rattani, Massimo Tistarelli

Robust Human Detection under Occlusion by Integrating Face and Person Detectors

Human detection under occlusion is a challenging problem in computer vision. We address this problem through a framework which integrates face detection and person detection. We first investigate how the response of a face detector is correlated with the response of a person detector. From these observations, we formulate hypotheses that capture the intuitive feedback between the responses of face and person detectors and use them to verify whether the individual detectors' outputs are true or false. We illustrate the performance of our integration framework on challenging images that contain a considerable amount of occlusion, and demonstrate its advantages over individual face and person detectors.

William Robson Schwartz, Raghuraman Gopalan, Rama Chellappa, Larry S. Davis

Multibiometric People Identification: A Self-tuning Architecture

Multibiometric systems can solve a number of problems of unimodal approaches. One source of such problems is the lack of dynamic parameter updating, which does not allow current systems to adapt to changes in their working settings. They are generally calibrated once and for all, so that they are tuned and optimized with respect to standard conditions. In this work we propose an architecture where, for each single-biometry subsystem, parameters are dynamically optimized according to the behaviour of all the others. This is achieved by an additional component, the supervisor module, which analyzes the responses from all subsystems and modifies the degree of reliability required from each of them to accept the respective responses. The paper explores two integration architectures with different degrees of interconnection, demonstrating that tight component interaction increases system accuracy and allows unstable subsystems to be identified.

Maria De Marsico, Michele Nappi, Daniel Riccio

Gait

Covariate Analysis for View-Point Independent Gait Recognition

Many studies have shown that gait can be deployed as a biometric. Few of these have addressed the effects of view-point and covariate factors on the recognition process. We describe the first analysis which combines view-point invariance for gait recognition, based on a model-based pose estimation approach from a single un-calibrated camera. A set of experiments is carried out to explore how factors including clothing, carrying conditions and view-point affect the identification process using gait. Based on a covariate-based probe dataset of over 270 samples, a recognition rate of 73.4% is achieved using the KNN classifier. This confirms that people identification using dynamic gait features remains feasible, with a good recognition rate, even under different covariate factors. As such, this is an important step in translating research from the laboratory to a surveillance environment.

I. Bouchrika, M. Goffredo, J. N. Carter, M. S. Nixon

Dynamic Texture Based Gait Recognition

We present a novel approach to human gait recognition that inherently combines appearance and motion. Dynamic texture descriptors, Local Binary Patterns from Three Orthogonal Planes (LBP-TOP), are used to describe human gait in a spatiotemporal way. We also propose a new coding of multiresolution uniform Local Binary Patterns and use it in the construction of spatiotemporal LBP histograms. We show the suitability of the representation for gait recognition, test our method on the popular CMU MoBo dataset, and compare our results to state-of-the-art methods.
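The 2D building block of LBP-TOP can be sketched in a few lines; LBP-TOP applies the same operator on three orthogonal planes (XY, XT, YT) of the video volume, but a single plane suffices to show the encoding. This is a generic 8-neighbour LBP, not the paper's multiresolution uniform coding:

```python
import numpy as np

def lbp_histogram(img):
    # Basic 8-neighbour LBP: compare each neighbour with the centre pixel
    # and pack the eight comparison bits into a code in [0, 255].
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised code histogram = texture descriptor
```

Concatenating such histograms from the three planes, over several space-time blocks, yields the spatiotemporal descriptor the abstract describes.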

Vili Kellokumpu, Guoying Zhao, Stan Z. Li, Matti Pietikäinen

Gender Recognition Based on Fusion of Face and Multi-view Gait

In this paper, we consider the problem of gender recognition based on face and multi-view gait cues from the same walking sequence. The gait cues are derived from multiple simultaneous camera views, while the face cues are captured by a camera at front view. With this setup, we build a database including 32 male and 28 female subjects. For face, we normalize the frame images extracted from the videos and apply PCA to reduce the image dimension. For gait, we extract silhouettes from the videos and employ an improved spatio-temporal representation of the silhouettes to obtain gait features. An SVM is then used to classify gender with the face features and the gait features from each view respectively. We employ three fusion approaches at the decision level: the voting rule, the weighted voting rule and the Bayes combination rule. The effectiveness of the various approaches is evaluated on our database. The experimental results of integrating face and multi-view gait show a clear improvement in the accuracy of gender recognition.
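Two of the decision-level fusion rules mentioned above can be sketched directly; these are textbook formulations under our own naming, not the paper's implementation:

```python
from collections import Counter

def majority_vote(decisions):
    # Plain voting rule: the label predicted by most classifiers wins.
    return Counter(decisions).most_common(1)[0][0]

def weighted_vote(decisions, weights):
    # Weighted voting rule: each classifier's vote counts in proportion
    # to a reliability weight (e.g. its validation accuracy).
    tally = {}
    for label, w in zip(decisions, weights):
        tally[label] = tally.get(label, 0.0) + w
    return max(tally, key=tally.get)
```

With per-view gait classifiers plus a face classifier, `decisions` would hold one predicted gender label per classifier.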

De Zhang, Yunhong Wang

Unsupervised Real-Time Unusual Behavior Detection for Biometric-Assisted Visual Surveillance

This paper presents a novel unusual behavior detection algorithm for acquiring biometric data for intelligent surveillance in real time. Our work aims to design a completely unsupervised method for detecting unusual behaviors without using any explicit training dataset. To this end, the proposed approach learns from the behaviors recorded in the past, so that the definition of unusual behavior is modeled according to previous observations rather than a manually labeled dataset. To implement this, the pyramidal Lucas-Kanade algorithm is employed to estimate the optical flow between consecutive frames, and the results are encoded into flow histograms. By leveraging the correlations between the flow histograms, unusual actions can be detected by applying principal component analysis (PCA). This approach is evaluated under both indoor and outdoor surveillance scenarios. Promising results show that our detection algorithm is able to discover unusual behaviors and adapt to changes in behavioral patterns automatically.
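The PCA step can be read as subspace-based anomaly detection: fit a low-dimensional subspace to the historical flow histograms and flag new histograms with large reconstruction error. A minimal sketch under that reading (names and thresholding strategy are our assumptions):

```python
import numpy as np

def fit_pca(histograms, n_components):
    # Learn a low-dimensional subspace spanned by "usual" flow histograms.
    mean = histograms.mean(axis=0)
    _, _, vt = np.linalg.svd(histograms - mean, full_matrices=False)
    return mean, vt[:n_components]

def anomaly_score(hist, mean, components):
    # A histogram that the usual-behavior subspace reconstructs poorly
    # (large residual) is flagged as unusual.
    centred = hist - mean
    recon = components.T @ (components @ centred)
    return float(np.linalg.norm(centred - recon))
```

Refitting the subspace periodically on recent history is what lets such a detector adapt to changing behavioral patterns without labels.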

Tsz-Ho Yu, Yiu-Sang Moon

Multilinear Tensor-Based Non-parametric Dimension Reduction for Gait Recognition

The small sample size problem and the difficulty of determining the optimal reduced dimension limit the application of subspace learning methods in the gait recognition domain. To address these two issues, we propose a novel algorithm named multi-linear tensor-based learning without tuning parameters (MTP) for gait recognition. In MTP, we first employ a new method for automatic selection of the optimal reduced dimension. Then, to avoid the small sample size problem, we use multi-linear tensor projections in which the dimensions of all the subspaces are automatically tuned. Theoretical analysis of the algorithm shows that MTP converges. Experiments on the USF Human Gait Database show promising results of MTP compared to other gait recognition methods.

Changyou Chen, Junping Zhang, Rudolf Fleischer

Quantifying Gait Similarity: User Authentication and Real-World Challenge

Template-based approaches using acceleration signals have been proposed for gait-based biometric authentication. In daily life a number of real-world factors affect a user's gait, and we investigate their effects on authentication performance. We analyze the effects of walking speed, different shoes, extra load, and natural variation over days on the gait. To this end, we introduce a statistical Measure of Similarity (MOS) suited for template-based pattern recognition. The MOS and actual authentication results show that these factors may affect the gait of an individual at a level comparable to the variation between individuals. A change in walking speed of 1 km/h, for example, yields the same MOS of 20% as the between-individuals MOS. This limits the applicability of gait-based authentication approaches. We identify how these real-world factors may be compensated and discuss the opportunities for gait-based context-awareness in wearable computing systems.

Marc Bächlin, Johannes Schumm, Daniel Roggen, Gerhard Tröster

Iris

40 Years of Progress in Automatic Speaker Recognition

Research in automatic speaker recognition has now spanned four decades. This paper surveys the major themes and advances made in the past 40 years of research so as to provide a technological perspective and an appreciation of the fundamental progress that has been accomplished in this important area of speech-based human biometrics. Although many techniques have been developed, many challenges have yet to be overcome before we can achieve the ultimate goal of creating human-like machines. Such a machine needs to be able to deliver satisfactory performance under a broad range of operating conditions. A much greater understanding of the human speech process is still required before automatic speaker recognition systems can approach human performance.

Sadaoki Furui

Robust Biometric Key Extraction Based on Iris Cryptosystem

A biometric cryptosystem can not only provide an efficient mechanism for template protection, but also facilitate cryptographic key management, and has thus become a promising direction in the information security field. In this paper, we propose a robust key extraction approach consisting of a concatenated coding scheme and a bit masking scheme, evaluated on an iris database. The concatenated coding scheme, which combines a Reed-Solomon code and a convolutional code, is proposed so that much longer keys can be extracted from the iris data, while the bit masking scheme is proposed to minimize and randomize the errors occurring in the iris codes, making the error pattern more suitable for the coding scheme. The experimental results show that the system can achieve an FRR of 0.52% with a key length of 938 bits.

Long Zhang, Zhenan Sun, Tieniu Tan, Shungeng Hu

Iris Matching by Local Extremum Points of Multiscale Taylor Expansion

The random distribution of features in iris image texture allows iris-based personal authentication to be performed with high confidence. We propose to use the most significant local extremum points of the first two Taylor expansion coefficients as descriptors of the iris texture. A measure of similarity that is robust to moderate inaccuracies in iris segmentation is presented for the proposed features. We provide experimental results of verification quality for four commonly used iris datasets. Strong and weak aspects of the proposed approach are also discussed.

Algirdas Bastys, Justas Kranauskas, Rokas Masiulis

Efficient Iris Spoof Detection via Boosted Local Binary Patterns

Recently, spoof detection has become an important and challenging topic in iris recognition. Based on the textural differences between counterfeit and live iris images, we propose an efficient method to tackle this problem. First, the normalized iris image is divided into sub-regions according to the properties of iris textures. Local binary patterns (LBP) are then adopted for the texture representation of each sub-region. Finally, Adaboost learning is performed to select the most discriminative LBP features for spoof detection. In particular, a kernel density estimation scheme is proposed to compensate for the shortage of counterfeit iris images during Adaboost training. Comparison experiments indicate that the proposed method outperforms state-of-the-art methods in both accuracy and speed.

Zhaofeng He, Zhenan Sun, Tieniu Tan, Zhuoshi Wei

Custom Design of JPEG Quantisation Tables for Compressing Iris Polar Images to Improve Recognition Accuracy

Custom JPEG quantisation matrices are proposed for compressing iris polar images within iris recognition. These matrices are obtained by employing a genetic algorithm for the corresponding optimisation. Superior matching results in iris recognition, in terms of average Hamming distance and an improved ROC, are found compared to the use of the default JPEG quantisation table.

Mario Konrad, Herbert Stögner, Andreas Uhl

Improving Compressed Iris Recognition Accuracy Using JPEG2000 RoI Coding

The impact of using JPEG2000 region of interest coding on the matching accuracy of iris recognition systems is investigated. In particular, we compare the matching scores as obtained by a concrete recognition system when using JPEG2000 compression of rectilinear iris images with and without region of interest coding enabled. The region of interest is restricted to the iris texture area plus the pupil region. It turns out that average matching scores can be improved and that the number of false negative matches is significantly decreased using region of interest coding as compared to plain JPEG2000 compression.

J. Hämmerle-Uhl, C. Prähauser, T. Starzacher, Andreas Uhl

Image Averaging for Improved Iris Recognition

We take advantage of the temporal continuity in an iris video to improve matching performance using signal-level fusion. From multiple frames of an iris video, we create a single average image. Our signal-level fusion method performs better than methods based on single still images, and better than previously published multi-gallery score-fusion methods. We compare our signal fusion method with another new method: a multi-gallery, multi-probe score fusion method. Between these two new methods, the multi-gallery, multi-probe score fusion has slightly better recognition performance, while the signal fusion has significant advantages in memory and computation requirements.
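The signal-level fusion step is simple enough to sketch directly: average co-registered frames, then encode and match. The encoder below is a thresholding stand-in of our own devising, not a real Gabor-phase iris encoder:

```python
import numpy as np

def average_frames(frames):
    # Signal-level fusion: average co-registered frames so that
    # zero-mean per-frame noise cancels before any code is computed.
    return np.mean(frames, axis=0)

def iris_code(image, threshold=0.5):
    # Stand-in for a real Gabor-phase encoder: simple thresholding.
    return (image > threshold).astype(np.uint8)

def hamming_distance(code_a, code_b):
    # Fraction of disagreeing bits between two codes.
    return float(np.mean(code_a != code_b))
```

Averaging N frames reduces the noise standard deviation by roughly sqrt(N), which is why the averaged image yields a cleaner code than any single frame, at negligible memory and compute cost compared to multi-gallery score fusion.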

Karen P. Hollingsworth, Kevin W. Bowyer, Patrick J. Flynn

Iris Recognition Using 3D Co-occurrence Matrix

This paper presents biometric recognition based on the iris of a human eye using the gray-level co-occurrence matrix (GLCM). A new form of GLCM, called 3D-GLCM, expanded from the original 2D-GLCM, is proposed and used to extract iris features. The experimental results show that the proposed approach achieves encouraging performance on the UBIRIS iris database: a recognition rate of up to 99.65% can be achieved.
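For reference, the standard 2D-GLCM that the paper extends counts how often pairs of gray levels co-occur at a fixed pixel offset; the 3D variant adds a third co-occurring dimension. A minimal 2D sketch:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    # Count co-occurrences of gray levels at the given pixel offset
    # (dx, dy); image values must lie in [0, levels).
    mat = np.zeros((levels, levels), dtype=np.int64)
    h, w = image.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            mat[image[y, x], image[y + dy, x + dx]] += 1
    return mat / mat.sum()  # normalise to a joint probability
```

Texture features (contrast, energy, homogeneity, etc.) are then computed from this normalised matrix.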

Wen-Shiung Chen, Ren-Hung Huang, Lili Hsieh

A New Fake Iris Detection Method

Recent research has revealed that it is not difficult to spoof an automated iris recognition system using fake irises such as contact lenses and paper prints. It is therefore very important to detect fake irises as reliably as possible. In this paper, we propose a new fake iris detection method based on the wavelet packet transform. First, wavelet packet decomposition is used to extract feature values which provide unique information for discriminating fake irises from real ones. Second, to enhance detection accuracy, a support vector machine (SVM) is used to characterize the distribution boundary based on the extracted wavelet packet features, as it has good classification performance in high-dimensional spaces and was originally developed for two-class problems. The experimental results indicate that the proposed method is a very promising technique for making iris recognition systems more robust against fake iris spoofing attempts.

Xiaofu He, Yue Lu, Pengfei Shi

Eyelid Localization in Iris Images Captured in Less Constrained Environment

Eyelid localization plays an important role in an accurate iris recognition system. In less constrained environments, where subjects are less cooperative, the problem becomes very difficult due to interference from eyelashes, eyebrows, glasses and hair, and to the diverse variation of eye size and position. To determine the upper eyelid boundary accurately, this paper proposes an integro-differential parabolic arc operator combined with a RANSAC-like algorithm. The integro-differential operator works as a parabolic arc edge detector. During the search process of the operator, candidate parabolas must lie near at least a certain percentage of the upper eyelid boundary edgels detected by a 1D edge detector. The RANSAC-like algorithm functions as a constraint that not only makes eyelid localization more accurate, but also makes it more efficient by excluding invalid candidates from further processing. Lower eyelid localization is much simpler, as much less interference is involved, and a method is presented that exploits 1D edgel detection and a RANSAC algorithm for parabolic fitting. Experiments are conducted on UBIRIS.v2, whose images were captured at-a-distance and on-the-move. The comparison shows that the proposed algorithm is quite effective in localizing eyelids in heterogeneous images.

Xiaomin Liu, Peihua Li, Qi Song

Noisy Iris Verification: A Modified Version of Local Intensity Variation Method

In this paper, a modified version of the local intensity variation method is proposed to enhance the efficiency of an identification system when dealing with degradation factors present in iris texture. Our contributions to improving the robustness and performance of the local intensity variation method consist of defining overlapped patches to compensate for the deformation of the texture, performing a de-noising strategy to remove high-frequency components of the intensity signals, adding a coding strategy, and combining the dissimilarity values obtained from the intensity signals. Experimental results on the UBIRIS database demonstrate the effectiveness of the proposed method when facing low-quality images. To assess the robustness of the proposed method to noise, lack of focus, and motion blur, we simulate these degradation factors, which may occur during image acquisition in non-ideal conditions. Our results on a private database show that verification performance remains acceptable, while the original method [11] suffers a dramatic degradation.

Nima Tajbakhsh, Babak Nadjar Araabi, Hamid Soltanian-zadeh

An Automated Video-Based System for Iris Recognition

We have implemented a Video-based Automated System for Iris Recognition (VASIR) and evaluated its performance on the MBGC dataset. The proposed method automatically detects the eye area, extracts eye images, and selects the best-quality iris image from video frames. The performance of the selection method is evaluated by comparing it to selection performed by humans. Masek's algorithm was adapted to segment and normalize the iris region; encoding the iris pattern and matching followed this stage. The iris templates from video images were compared to pre-existing still iris images for verification. This experiment has shown that iris recognition is feasible even under varying illumination conditions, low quality, and off-angle video imagery. Furthermore, our study showed that in practice automated best-image selection is nearly equivalent to human selection.

Yooyoung Lee, P. Jonathon Phillips, Ross J. Micheals

Empirical Evidence for Correct Iris Match Score Degradation with Increased Time-Lapse between Gallery and Probe Matches

We explore the effects of time lapse on iris biometrics using a data set of images with four years time lapse between the earliest and most recent images of an iris (13 subjects, 26 irises, 1809 total images). We find that the average fractional Hamming distance for a match between two images of an iris taken four years apart is statistically significantly larger than the match for images with only a few months time lapse between them. A possible implication of our results is that iris biometric enrollment templates may undergo aging and that iris biometric enrollment may not be “once for life.” To our knowledge, this is the first and only experimental study of iris match scores under long (multi-year) time lapse.
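The fractional Hamming distance used as the match score here is the standard iris metric: the fraction of disagreeing code bits, counted only where both codes are valid. A minimal sketch (the mask handling follows common practice; names are ours):

```python
import numpy as np

def fractional_hamming(code_a, code_b, mask_a, mask_b):
    # Fraction of disagreeing bits, counted only at positions where both
    # codes are valid (unoccluded) according to their masks.
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / valid.sum()
```

The paper's finding is that this score, averaged over genuine pairs, grows statistically significantly with the time lapse between the two images.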

Sarah E. Baker, Kevin W. Bowyer, Patrick J. Flynn

Other Biometrics

Practical On-Line Signature Verification

A new DTW-based on-line signature verification system is presented and evaluated. The system is specially designed to operate under realistic conditions: it needs only a small number of genuine signatures to operate, and it can be deployed on almost any signature-capable capture device. Optimal feature sets have been obtained experimentally in order to adapt the system to environments with different levels of security. The system has been evaluated using four on-line signature databases (MCYT, SVC2004, BIOMET and MyIDEA) and its performance is among the best reported in the state of the art. Average EERs over these databases lie between 0.41% and 2.16% for random and skilled forgeries, respectively.
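Dynamic time warping, the core matcher here, aligns two sequences of unequal length by a dynamic program over local distances. A textbook 1D sketch (real signature systems compare multivariate feature sequences, but the recursion is the same):

```python
import numpy as np

def dtw_distance(a, b):
    # Classic DTW: cost[i, j] is the cheapest alignment of a[:i] with b[:j].
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local distance between samples
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

Verification then thresholds this distance (often normalized by path length) between the probe signature and the enrolled references.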

J. M. Pascual-Gaspar, V. Cardeñoso-Payo, C. E. Vivaracho-Pascual

On-Line Signature Matching Based on Hilbert Scanning Patterns

Signature verification is a challenging task, because only a small set of genuine samples can be acquired and usually no forgeries are available in real applications. In this paper, we propose a novel approach based on Hilbert scanning patterns and Gaussian mixture models for automatic on-line signature verification. Our system is composed of a similarity measure based on Hilbert scanning patterns and a simplified Gaussian mixture model for decision-level evaluation. To be practical, we introduce specific simplification strategies for model building and training. The system is compared to other state-of-the-art systems based on the results of the First International Signature Verification Competition (SVC 2004). Experiments are conducted to verify the effectiveness of our system.

Alireza Ahrary, Hui-ju Chiang, Sei-ichiro Kamata

Static Models of Derivative-Coordinates Phase Spaces for Multivariate Time Series Classification: An Application to Signature Verification

Multivariate time series are sequences whose order is provided by a time index; thus, most classifiers used on such data treat time as a special quantity and encode it structurally in a model. A typical example of such models is the hidden Markov model, where time is explicitly used to drive state transitions. The time information is discretised into a finite set of states, the cardinality of which is largely chosen by empirical criteria. Taking signature verification as an example task, we propose an alternative approach using static probabilistic models of phase spaces, where the time information is preserved by embedding the multivariate time series into a higher-dimensional subspace and modelling it probabilistically within the framework of static Bayesian networks. We show empirically that performance is equivalent to state-of-the-art signature verification systems.

Jonas Richiardi, Krzysztof Kryszczuk, Andrzej Drygajlo

Feature Selection in a Low Cost Signature Recognition System Based on Normalized Signatures and Fractional Distances

In previous work, a proposal for an efficient on-line signature recognition system with very low computational load and storage requirements was presented. That proposal is based on the use of size-normalized signatures, which allows similarity estimation, usually based on DTW or HMMs, to be performed by a simple distance calculation between vectors, computed using a fractional distance. Here, a method to select representative features from the normalized signatures is presented; only the most stable features in the training set are used for distance estimation. This yields a further reduction in system requirements, while system performance is increased. On the verification task, the results achieved are about 30% and 20% better for skilled and random forgeries, respectively, than those achieved with a DTW-based system, with storage requirements between 15 and 142 times smaller and a processing speed between 274 and 926 times greater. The security of the system is also enhanced: only the representative features need to be stored, making it impossible to recover the original signature from them.
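A fractional distance is a Minkowski-style distance with exponent 0 < p < 1; in high-dimensional feature spaces such fractional norms often discriminate nearest neighbours better than Euclidean distance. A one-line sketch (the exponent p = 0.5 is an illustrative assumption, not the paper's choice):

```python
import numpy as np

def fractional_distance(u, v, p=0.5):
    # Minkowski-style distance with 0 < p < 1 between feature vectors.
    return np.sum(np.abs(u - v) ** p) ** (1.0 / p)
```

Because this replaces a full DTW alignment with a single vector-distance evaluation, the large speedups quoted in the abstract become plausible.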

C. Vivaracho-Pascual, J. Pascual-Gaspar, V. Cardeñoso-Payo

Feature Selection and Binarization for On-Line Signature Recognition

The representation of a biometric trait through a set of parametric features is commonly employed in many biometric authentication systems. In order to avoid any loss of useful information, large sets of features have been defined for biometric characteristics such as signature, gait or face. However, the proposed sets often contain features which are irrelevant, correlated with other features, or even unreliable. In this paper we propose two different approaches for selecting the features which guarantee the best recognition performance. Moreover, we also address the problem of the binary representation of the selected features. Specifically, we propose an algorithm which selects the minimum number of bits that should be assigned to a given feature without affecting recognition performance. The effectiveness of the proposed approaches is tested on a watermarking-based on-line signature authentication system, employing the public MCYT on-line signature corpus as the experimental database.

Emanuele Maiorana, Patrizio Campisi, Alessandro Neri
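The binarization step described above amounts to quantizing each real-valued feature with a per-feature bit budget. A minimal sketch of such a uniform quantizer (our own illustration; the paper's bit-allocation algorithm chooses `bits` per feature, which is assumed given here):

```python
def quantize(value, lo, hi, bits):
    """Uniformly quantize `value` in [lo, hi] to an integer code of `bits` bits."""
    levels = (1 << bits) - 1            # number of quantization steps
    x = min(max(value, lo), hi)         # clamp to the feature's range
    return round((x - lo) / (hi - lo) * levels)

def to_bitstring(code, bits):
    """Fixed-width binary representation of a quantized feature."""
    return format(code, '0{}b'.format(bits))

code = quantize(0.5, 0.0, 1.0, bits=3)
word = to_bitstring(code, bits=3)
```

Assigning fewer bits coarsens the feature; the paper's contribution is finding the smallest `bits` per feature that leaves recognition performance unchanged.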

Writer Identification of Chinese Handwriting Using Grid Microstructure Feature

This paper proposes a histogram-based feature, called the grid microstructure feature, for Chinese writer identification. The feature is extracted from the edge image of the real handwriting image. The positions of edge-pixel pairs are used to describe the characteristics in a local grid around every edge pixel. After global statistics are accumulated, the probability density distribution of the different pixel pairs is regarded as the feature representing the writing style of the handwriting. The similarity of two handwritings is then measured with improved weighted versions of some original metrics. On the HIT-MW Chinese handwriting database involving 240 writers, the best Top-1 identification accuracy is 95.0% and the Top-20 accuracy reaches 99.6%.

Xin Li, Xiaoqing Ding
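The feature described above can be sketched as a normalized histogram over pairs of edge-pixel offsets found in a grid window around each edge pixel (a simplified illustration under our own assumptions; the paper's exact grid shape and pair encoding may differ):

```python
from collections import Counter
from itertools import combinations

def grid_microstructure(edge_pixels, radius=2):
    """Normalized histogram of offset pairs of edge pixels inside the
    (2*radius+1)-sized grid centred on every edge pixel."""
    pts = set(edge_pixels)
    hist = Counter()
    for (x, y) in pts:
        # edge-pixel offsets inside the local grid, excluding the centre itself
        nbrs = [(dx, dy)
                for dx in range(-radius, radius + 1)
                for dy in range(-radius, radius + 1)
                if (dx, dy) != (0, 0) and (x + dx, y + dy) in pts]
        for a, b in combinations(sorted(nbrs), 2):
            hist[(a, b)] += 1
    total = sum(hist.values()) or 1
    return {k: v / total for k, v in hist.items()}   # probability distribution

pdf = grid_microstructure([(0, 0), (1, 0), (2, 0)])
```

Two handwriting samples are then compared by a distance between their pair distributions.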

Enhancement and Registration Schemes for Matching Conjunctival Vasculature

Ocular biometrics has made significant strides over the past decade primarily due to the rapid advances in iris recognition. Recent literature has investigated the possibility of using conjunctival vasculature as an added ocular biometric. These patterns, observed on the sclera of the human eye, are especially significant when the iris is off-angle with respect to the acquisition device resulting in the exposure of the scleral surface. In this work, we design enhancement and registration methods to process and match conjunctival vasculature obtained under non-ideal conditions. The goal is to determine if conjunctival vasculature is a viable biometric in an operational environment. Initial results are promising and suggest the need for designing advanced image processing and registration schemes for furthering the utility of this novel biometric. However, we postulate that in an operational environment, conjunctival vasculature has to be used with the iris in a bimodal configuration.

Simona Crihalmeanu, Arun Ross, Reza Derakhshani

Entropy of the Retina Template

We compare two vessel extraction methods for creation of a retina template, using a database of 20 images of normal retinas. Each vessel in a well defined region is represented by a three dimensional feature, from which a retina template is built. Based on the sample distributions, we propose a preliminary theoretical model to predict the entropy of a retina template. We analyse by experimental and theoretical means the entropy present, and infer that entropy from our retina template compares sufficiently favourably with that of a minutia-based fingerprint template to warrant further study.

A. Arakala, J. S. Culpepper, J. Jeffers, A. Turpin, S. Boztaş, K. J. Horadam, A. M. McKendrick
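Entropy estimation from a sample distribution, as used above, can be sketched with the plug-in (empirical) Shannon entropy over discretised template features (our own minimal illustration, not the authors' theoretical model):

```python
from math import log2
from collections import Counter

def empirical_entropy(samples):
    """Shannon entropy in bits of the empirical distribution of discrete samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Two equally likely feature values carry exactly one bit.
h = empirical_entropy(['a', 'b', 'a', 'b'])
```

Summing such per-feature estimates (under an independence assumption) gives a rough bound on the entropy of the whole template, which the paper compares against minutia-based fingerprint templates.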

Lips Recognition for Biometrics

One of the most interesting emerging methods of human identification, originating from criminal and forensic practice, is human lips recognition. In this paper we consider lip-shape features in order to determine human identity. The major contribution of this paper is a set of novel geometrical parameters developed to describe human lip shape for biometric applications.

Michał Choraś

Biometrics Method for Human Identification Using Electrocardiogram

This work explores the feasibility of using the physiological signal electrocardiogram (ECG) to aid in human identification. Signal processing methods for the analysis of ECG are discussed. Using the ECG signal as a biometric, a total of 19 features based on time intervals, amplitudes and angles between clinically dominant fiducials are extracted from each heartbeat. A test set of 250 ECG recordings, prepared from the ECG of 50 subjects from Physionet, is evaluated on the proposed identification system, which is based on template matching and adaptive thresholding. The matching decisions are evaluated on the basis of correlation between features. As a result, encouraging performance is obtained; for instance, the achieved equal error rate is smaller than 1.01 and the accuracy of the system is 99%.

Yogendra Narain Singh, P. Gupta
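Correlation-based template matching of fixed-length heartbeat feature vectors can be sketched as follows (a minimal illustration with an assumed fixed threshold; the paper uses adaptive thresholding):

```python
from math import sqrt

def correlation(u, v):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def matches(probe, template, threshold=0.9):
    """Accept the identity claim if the feature vectors correlate strongly."""
    return correlation(probe, template) >= threshold

r = correlation([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

Each probe heartbeat's 19-dimensional feature vector would be compared this way against the enrolled template.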

Real-Time Model-Based Hand Localization for Unsupervised Palmar Image Acquisition

Unsupervised and touchless image acquisition are two problems that have recently emerged in biometric systems based on hand features. We have developed a real-time, model-based hand-localization system for palmar image acquisition and ROI extraction. The system operates on video sequences and produces a set of palmprint regions of interest (ROIs) for each sequence. Hand candidates are first located using the Viola-Jones approach, and the best candidate is then selected using a model-fitting approach. Experimental results demonstrate the feasibility of the system for unsupervised palmar image acquisition in terms of speed and localization accuracy.

Ivan Fratric, Slobodan Ribaric

Palm Vein Verification System Based on SIFT Matching

We present in this communication a new biometric system based on hand veins acquired by an infrared imager. After the preprocessing stage and binarization, the vein image is characterized by specific patterns. One original aspect of the proposed system is the use of SIFT descriptors in the verification process. The developed method requires only a single image for the enrollment step, allowing very fast verification. Experimental results on a database containing images of 24 individuals acquired over two sessions show the efficiency of the proposed method.

Pierre-Olivier Ladoux, Christophe Rosenberger, Bernadette Dorizzi
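Verification by SIFT-style descriptor matching is usually done with nearest-neighbour search plus Lowe's ratio test: a match counts only if its nearest descriptor is clearly closer than the second nearest. A minimal sketch with toy 2-D descriptors (our own illustration; real SIFT descriptors are 128-dimensional, and the paper's matching rule may differ):

```python
def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return index pairs (i, j) where desc_a[i]'s nearest neighbour in desc_b
    passes the ratio test against the second-nearest neighbour."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    pairs = []
    for i, d in enumerate(desc_a):
        order = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        if len(order) >= 2 and dist(d, desc_b[order[0]]) < ratio * dist(d, desc_b[order[1]]):
            pairs.append((i, order[0]))
    return pairs

good = match_descriptors([(0.0, 0.0)], [(0.1, 0.0), (5.0, 5.0)])
```

The verification decision can then be made by thresholding the number of surviving matches between the probe image and the single enrolled image.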

Backmatter
