
2008 | Book

Advances in Biometrics

Sensors, Algorithms and Systems

Edited by: Nalini K. Ratha, BTech, MTech, PhD, Venu Govindaraju, BTech, MS, PhD

Publisher: Springer London


About this book

Biometrics technology continues to stride forward with its wider acceptance and its perceived need in various new security facets of modern society. Biometrics is increasingly required to meet the growing challenges of identity management. While biometrics finds use in diverse applications, the research challenges continue to grow, as the user community expects a fully automatic system that covers the entire population with almost no errors.

Recent advances in biometrics include new developments in sensors, modalities, and algorithms. As new sensors are designed, new challenges emerge for the algorithms that must recognize subjects accurately. New modalities have been introduced, and existing modalities are being fused to improve accuracy and broaden coverage.

Written for researchers, advanced students and practitioners to use as a handbook, this volume captures the very latest state-of-the-art research contributions from leading international researchers in the field.

Table of Contents

Frontmatter

Sensors

1. Multispectral Fingerprint Image Acquisition
This chapter describes the principles of operation of a new class of fingerprint sensor based on multispectral imaging (MSI). The MSI sensor captures multiple images of the finger under different illumination conditions that include different wavelengths, different illumination orientations, and different polarization conditions. The resulting data contain information about both the surface and subsurface features of the skin. These data can be processed to generate a single composite fingerprint image equivalent to that produced by a conventional fingerprint reader, but with improved performance characteristics. In particular, the MSI imaging sensor is able to collect usable biometric images in conditions where other conventional sensors fail such as when topical contaminants, moisture, and bright ambient lights are present or there is poor contact between the finger and sensor. Furthermore, the MSI data can be processed to ensure that the measured optical characteristics match those of living human skin, providing a strong means to protect against attempts to spoof the sensor.
Robert K. Rowe, Kristin Adair Nixon, Paul W. Butler
2. Touchless Fingerprinting Technology
Fingerprint image acquisition is considered the most critical step of an automated fingerprint authentication system, as it determines the final fingerprint image quality, which has drastic effects on the overall system performance.
When a finger touches or rolls onto a surface, the elastic skin deforms. The amount and direction of the pressure applied by the user, the skin conditions, and the projection of an irregular 3D object (the finger) onto a 2D flat plane introduce distortions, noise, and inconsistencies in the captured fingerprint image. These problems are commonly described as inconsistent, irreproducible, and nonuniform contact, and their effect on the same fingerprint differs in an uncontrollable way at each acquisition. Hence, the representation of the same fingerprint changes every time the finger is placed on the sensor platen, which increases the complexity of fingerprint matching, degrades system performance, and consequently limits the spread of this biometric technology.
Recently, a new approach to capture fingerprints has been proposed. This approach, referred to as touchless or contactless fingerprinting, tries to overcome the above-cited problems. Because of the lack of contact between the finger and any rigid surface, the skin does not deform during the capture and the repeatability of the measure is ensured.
However, this technology introduces new challenges. For example, due to the curvature of the finger and the nonzero distance between the camera and the finger, the useful captured fingerprint area is reduced and the capture of rolled-equivalent fingerprints becomes very difficult. Moreover, finger positioning, lower image contrast, illumination, and user convenience still must be addressed.
This chapter gives an overview of this novel capture approach and highlights its advantages and disadvantages with respect to legacy technology. Capture techniques using more than one camera, or combining cameras and mirrors, referred to as 3D touchless fingerprinting, are presented together with a new three-dimensional representation of fingerprints and minutiae. Vulnerabilities and weaknesses of touchless fingerprinting are also addressed, because fake detection is a particularly critical problem for this technology.
Geppy Parziale
3. A Single-Line AC Capacitive Fingerprint Swipe Sensor
In this chapter we describe a single-line AC capacitive fingerprint swipe sensor. We focus on the basic working principle of a swipe sensor, namely, the fact that the user has to swipe his or her finger across the sensor surface to acquire a fingerprint image. Thus, an image reconstruction algorithm is needed in order to produce properly scaled fingerprint images taking into account varying swiping speeds and directions. We describe an image reconstruction algorithm that effectively compares time signals from two sets of sensor elements in order to calculate the speed and direction of the moving finger. The single-line AC capacitive swipe sensor produces high-quality fingerprint images suitable for any recognition algorithm allowing for translational and rotational degrees of freedom. The integrated touchpad functionality is an additional feature of the sensor capable of reducing the number of buttons needed on small handheld devices.
Sigmund Clausen
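The reconstruction algorithm in the abstract above is described only at a high level. As a rough, hedged illustration of the general idea of estimating swipe speed from the time delay between two offset sensor lines, the sketch below uses normalized cross-correlation; the signal layout, parameter names, and the conversion to mm/s are illustrative assumptions, not the chapter's actual algorithm.

    import numpy as np

    def estimate_row_delay(signal_a, signal_b, max_lag=50):
        """Estimate the delay (in frames) between two sensor-line signals by
        locating the peak of their normalized cross-correlation over positive lags.
        signal_a, signal_b: 1-D arrays, one sample per captured frame (hypothetical layout)."""
        a = (signal_a - signal_a.mean()) / (signal_a.std() + 1e-9)
        b = (signal_b - signal_b.mean()) / (signal_b.std() + 1e-9)
        lags = np.arange(1, max_lag + 1)
        scores = [np.dot(a[:-lag], b[lag:]) / (len(a) - lag) for lag in lags]
        return int(lags[int(np.argmax(scores))])

    def swipe_speed_mm_per_s(row_pitch_mm, frame_rate_hz, delay_frames):
        """Convert the estimated frame delay into finger speed, given the physical
        spacing between the two sensor lines and the capture frame rate."""
        return row_pitch_mm * frame_rate_hz / max(delay_frames, 1)

A reconstruction stage would then resample the captured lines according to the estimated speed and direction so that the output image has a uniform pitch.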
4. Ultrasonic Fingerprint Sensors
Research shows that biometric image quality is one of the most significant variables affecting biometric system accuracy and performance. Some imaging modalities are more susceptible to poor image quality than others; poor quality can be caused by a number of factors, including contamination on the finger and/or platen, excessively dry or moist fingers, ghost images caused by latent prints, and so on.
Optical fingerprint scanners, which rely on a concept called Frustrated Total Internal Reflection (FTIR), are vulnerable to inaccurate fingerprint imaging. Although optical fingerprint technology continues to advance, most of these advances pertain to product features, and the limitations associated with the physics of FTIR can never be overcome. In response to the FTIR issues causing suboptimal performance, a new imaging modality has been pioneered that uses ultrasonic imaging for livescan fingerprint applications. Ultrasonic imaging has demonstrated far greater tolerance to external conditions that cause poor biometric image quality in optical systems, such as humidity, extreme temperatures, and ambient light. Although one variable, poor-quality fingers, cannot be eliminated, ultrasonic imaging has managed to minimize the impact of such fingers on overall system performance.
In addition, many biometric system inaccuracies are beyond the scope of most in-house vendor testing facilities, creating the performance gap that exists today between laboratory predictions and real-world conditions. To close this gap, new biometric image quality metrics are needed to drive more precise industry standards, establish baseline performance, and serve as the catalyst that could jumpstart an industry that has failed to meet many expectations for decades.
John K. Schneider
5. Palm Vein Authentication
This chapter discusses palm vein authentication, which uses the vascular patterns of the palm as personal identification data. Palm vein information is hard to duplicate because veins are internal to the human body. Palm vein authentication technology offers a high level of accuracy, and delivers the following results: a false rejection rate (FRR) of 0.01% and a false acceptance rate (FAR) of less than 0.00008%, using the data of 150,000 palms. Several banks in Japan have used palm vein authentication technology for customer identification since July 2004. In addition, this technology has been integrated into door security systems as well as other applications.
Masaki Watanabe
6. Finger Vein Authentication Technology and Financial Applications
Finger vein authentication is one of the most accurate and reliable biometric technologies and is widely employed in mission-critical applications such as financial transactions. Section 6.1 describes the advantages of finger vein authentication and gives a brief history of the technology. Section 6.2 presents an overview of the hardware, taking a commercial finger vein authentication device as an example. Section 6.3 describes the imaging technology and the matching algorithm employed by the commercial model. Section 6.4 explains two performance evaluation results based on internationally recognised biometric testing standards. Sections 6.5 and 6.6 illustrate actual implementations of the technology in both general and financial applications. Conclusions and future plans are described in Section 6.7.
Mitsutoshi Himaga, Katsuhiro Kou
7. Iris Recognition in Less Constrained Environments
Iris recognition is one of the most accurate forms of biometric identification. However, current commercial off-the-shelf (COTS) systems generally impose significant constraints on the subject. This chapter discusses techniques for iris image capture that reduce those constraints, in particular enabling iris image capture from moving subjects and at greater distances than have been available in the COTS systems. The chapter also includes background information that enables the reader to put these innovations into context.
James R. Matey, David Ackerman, James Bergen, Michael Tinker
8. Ocular Biometrics: Simultaneous Capture and Analysis of the Retina and Iris
Ocular biometric identification technologies include retina and iris pattern recognition. Public perception has long equated retinal identification with ocular biometrics despite very few practical solutions being demonstrated. More recently the iris has received considerable attention. Iris biometric systems are commercially available and research is expanding. Interest in multimodal biometric systems is increasing as a means to improve on the performance of unimodal systems. The retina and iris are potentially two well-balanced biometrics comprising uncorrelated complementary information. Moreover, the fixed anatomical proximity of the retina and iris suggests that the two biometrics may be simultaneously captured by a single device. This chapter outlines novel retina and iris technologies and describes their integration into a single biometric solution. A brief anatomical context is provided and technical challenges associated with retina and iris capture are detailed. Retina and iris recognition solutions are discussed and preliminary results are provided. A device for the simultaneous capture of retina and iris biometrics is described.
David Usher, Yasunari Tosa, Marc Friedman
9. Face Recognition Beyond the Visible Spectrum
The facial vascular network is highly characteristic of the individual, much as a fingerprint is. An unobtrusive way to capture this information is through thermal imaging. The convective heat transfer from the flow of "hot" arterial blood in superficial vessels creates characteristic thermal imprints, which are at a gradient with the surrounding tissue. This casts sigmoid edges on the tissue where major blood vessels are present. We present an algorithmic methodology to extract and represent the facial vasculature. The methodology combines image morphology and probabilistic inference; the morphology captures the overall structure of the vascular network, while the probabilistic part reflects the positional uncertainty of the vessel walls due to thermal diffusion. The accuracy of the methodology is tested through extensive experimentation and meticulous ground-truthing. Furthermore, the efficacy of this information for identity recognition is tested on substantial databases.
Pradeep Buddharaju, Ioannis Pavlidis, Chinmay Manohar
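The chapter's methodology combines image morphology with probabilistic inference about vessel-wall position; only the morphological idea is sketched below, under the assumption that a 2-D thermal image is available as a NumPy array. The Otsu threshold and structuring-element size are arbitrary placeholders, not the authors' settings.

    from skimage.filters import threshold_otsu
    from skimage.morphology import binary_opening, disk, skeletonize

    def extract_vessel_skeleton(thermal_image):
        """Crude morphological sketch: segment warmer-than-background pixels with an
        Otsu threshold, remove small speckle, and thin the result to a one-pixel-wide
        network resembling a vascular skeleton.
        thermal_image: 2-D float array of per-pixel temperature values."""
        warm = thermal_image > threshold_otsu(thermal_image)
        cleaned = binary_opening(warm, disk(2))   # structuring-element size is arbitrary
        return skeletonize(cleaned)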

Algorithms

10. Voice-Based Speaker Recognition Combining Acoustic and Stylistic Features
We present a survey of the state of the art in voice-based speaker identification research. We describe the general framework of a text-independent speaker verification system and, as an example, SRI's voice-based speaker recognition system. This system was ranked among the best-performing systems in the NIST text-independent speaker recognition evaluations in 2004 and 2005. It consists of six subsystems and a neural network combiner. The subsystems are categorized into two groups: acoustics-based, or low-level, and stylistic, or high-level. Acoustic subsystems extract short-term spectral features that implicitly capture the anatomy of the vocal apparatus, such as the shape of the vocal tract and its variations. These features are known to be sensitive to microphone and channel variations, and various techniques are used to compensate for these variations. High-level subsystems, on the other hand, capture the stylistic aspects of a person's voice, such as the speaking rate for particular words, rhythmic and intonation patterns, and idiosyncratic word usage. These features represent behavioral aspects of the person's identity and are shown to be complementary to spectral acoustic features. By combining all information sources, we achieve an equal error rate of around 3% on the NIST speaker recognition evaluation with two minutes of enrollment and two minutes of test data.
Sachin S. Kajarekar, Luciana Ferrer, Andreas Stolcke, Elizabeth Shriberg
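The chapter combines six subsystem scores with a neural-network combiner; as a hedged stand-in for that combiner, the sketch below fuses per-trial subsystem scores with logistic regression. The score arrays and labels are synthetic placeholders, not data from the evaluation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row holds the scores that the six subsystems (acoustic and stylistic)
    # assign to one trial; labels are 1 for target-speaker trials, 0 for impostors.
    rng = np.random.default_rng(0)
    dev_scores = rng.normal(size=(1000, 6))      # placeholder development data
    dev_labels = rng.integers(0, 2, size=1000)

    # The chapter uses a neural-network combiner; logistic regression is shown here
    # only as a simple stand-in for score-level fusion.
    fuser = LogisticRegression()
    fuser.fit(dev_scores, dev_labels)

    eval_scores = rng.normal(size=(10, 6))          # placeholder evaluation trials
    fused = fuser.predict_proba(eval_scores)[:, 1]  # one combined target score per trial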
11. Conversational Biometrics: A Probabilistic View
This article presents the concept of conversational biometrics: the combination of acoustic voice matching (traditional speaker verification) with other conversation-related information sources (such as knowledge) to perform identity verification. The interaction between the user and the verification system is orchestrated by a state-based policy modeled within a probabilistic framework. The verification process may be accomplished in an interactive manner (active validation) or as a "listen-in" background process (passive validation). In many system configurations, the verification may be performed transparently to the caller.
For an interactive environment evaluation with uninformed impostors, it is shown that very high performance can be attained by combining the evidence from acoustics and question–answer pairs. In addition, the study demonstrates that the biometric system is robust against fully informed impostors, a challenge that existing, widespread knowledge-only verification practices have yet to address. Our view of conversational biometrics emphasizes the importance of incorporating multiple sources of information conveyed in speech.
Jason Pelecanos, Jiří Navrátil, Ganesh N. Ramaswamy
12. Function-Based Online Signature Verification
A function-based approach to online signature verification is presented. The system uses a set of time sequences and Hidden Markov Models (HMMs). Development and evaluation experiments are reported on a subcorpus of the MCYT biometric database comprising more than 7000 signatures from 145 subjects. The system is compared to other state-of-the-art systems based on the results of the First International Signature Verification Competition (SVC 2004). A number of practical findings related to parameterization, modeling, and score normalization are obtained.
Julian Fierrez, Javier Ortega-Garcia
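As a minimal sketch of function-based, HMM-driven signature verification in the spirit of the chapter (not the authors' implementation), the example below enrolls a writer by fitting a Gaussian HMM to time-function sequences and scores a questioned signature by length-normalized log-likelihood. The hmmlearn package, the feature set, and the state count are assumptions.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM   # third-party package, assumed available

    def enroll(signatures, n_states=4):
        """Fit a claimant model to a list of genuine signatures.
        Each signature is an array of shape (T, d): per-sample time functions such as
        pen position, velocity, and pressure (the feature set here is illustrative)."""
        X = np.vstack(signatures)
        lengths = [len(s) for s in signatures]
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        model.fit(X, lengths)
        return model

    def verify(model, signature):
        """Length-normalized log-likelihood of a questioned signature under the
        claimant model; higher means a closer match (threshold set on development data)."""
        return model.score(signature) / len(signature)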
13. Writer Identification and Verification
The behavioral biometrics methods of writer identification and verification are currently enjoying renewed interest, with very promising results. This chapter presents a general background and basis for handwriting biometrics. A range of current methods and applications is given, also addressing the issue of performance evaluation. Results on a number of methods are summarized and a more in-depth example of two combined approaches is presented. By combining textural, allographic, and placement features, modern systems are starting to display useful performance levels. However, user acceptance will be largely determined by explainability of system results and the integration of system decisions within a Bayesian framework of reasoning that is currently becoming forensic practice.
Lambert Schomaker
14. Improved Iris Recognition Using Probabilistic Information from Correlation Filters
The complexity and stability of the human iris pattern make it well suited for the task of biometric verification. However, any realistically deployed iris imaging system collects images from the same iris that exhibit variation in appearance. This variation originates from external factors (e.g., changes in lighting or camera angle), as well as the subject’s physiological responses (e.g., pupil motion and eyelid occlusion). Following iris segmentation and normalization, the standard iris matching algorithm measures the Hamming distance between quantized Gabor features across a range of relative eye rotations. This chapter asserts that matching performance becomes more robust when iris images are aligned with a more flexible deformation model, using distortion-tolerant similarity cues. More specifically, the responses from local distortion-tolerant correlation filters are taken as evidence of local alignments. Then this observed evidence, along with the outputs of an eyelid detector, are used to infer posterior distributions on the hidden true states of deformation and eyelid occlusion. If the estimates are accurate, this information improves the robustness of the final match score. The proposed technique is compared to the standard iris matching algorithm on two datasets: one from the NIST Iris Challenge Evaluation (ICE), and one collected by the authors at Carnegie Mellon University. In experiments on these data, the proposed technique shows improved performance across a range of match score thresholds.
Jason Thornton, Marios Savvides, B. V. K. Vijaya Kumar
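The "standard iris matching algorithm" that the chapter improves upon is the fractional Hamming distance between binary iris codes, minimized over relative rotations. A generic sketch of that baseline (not the chapter's correlation-filter method) is shown below; code and mask shapes are assumptions.

    import numpy as np

    def iris_hamming_distance(code_a, code_b, mask_a, mask_b, max_shift=8):
        """Fractional Hamming distance between two binary iris codes, minimized over a
        range of circular shifts that model relative eye rotation.
        Codes and masks are 2-D boolean arrays (rows: radial bands, columns: angle);
        masks flag bits unaffected by eyelids, lashes, or reflections."""
        best = 1.0
        for shift in range(-max_shift, max_shift + 1):
            shifted_code = np.roll(code_b, shift, axis=1)
            valid = mask_a & np.roll(mask_b, shift, axis=1)   # bits usable in both codes
            n = valid.sum()
            if n == 0:
                continue
            hd = np.logical_xor(code_a, shifted_code)[valid].sum() / n
            best = min(best, hd)
        return best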
15. Headprint-Based Human Recognition
This chapter presents an innovative approach for unobtrusive person identification especially applicable in surveillance applications. New algorithms for (a) separation of hair area from the background in overhead imagery, (b) extraction of novel features that characterize the color and texture of hair, and (c) person identification given these features are presented. In one scenario, only a single training image per subject is assumed to be available. In another scenario, a small set of up to four training images is used per subject. Successful application on both still and video imagery is demonstrated. Although the visual appearance of hair cannot be used as a long-term biometric due to the nonrigid nature of hair, we demonstrate a realistic scenario where the time interval between gallery and probe imagery is short enough to achieve reliable performance.
Hrishikesh Aradhye, Martin Fischler, Robert Bolles, Gregory Myers
16. Pose and Illumination Issues in Face- and Gait-Based Identification
Although significant work has been done in the field of face- and gait-based recognition, the performance of state-of-the-art recognition algorithms is not yet good enough to be effective in operational systems. Most algorithms do reasonably well on controlled images but are susceptible to changes in illumination conditions and pose. This has shifted the focus of research to the more challenging task of obtaining better performance in uncontrolled, realistic scenarios. In this chapter, we discuss several recent advances made toward this goal.
Rama Chellappa, Gaurav Aggarwal
17. SVDD-Based Face Reconstruction in Degraded Images
Nowadays, with rapid progress and interest in surveillance, the requirements on face recognition have increased dramatically. However, facial images captured in the field differ from the previously trained data: the captured images can be noisy and degraded. To address these problems in the face recognition process, we propose a new method that extends the SVDD (support vector data description). In the proposed method, we first solve the SVDD problem for the data belonging to the given prototype facial images and model the region of normal faces as the ball resulting from the SVDD problem. Next, for each input facial image captured under various conditions, we project its feature vector onto the decision boundary of the SVDD ball so that it is mapped back into the normal region. Finally, we synthesize facial images obtained from the preimage of the projection and then perform face recognition. The applicability of the proposed method is illustrated by experiments on faces altered by different environmental conditions.
Sang-Woong Lee, Seong-Whan Lee
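The key step of projecting an input feature vector onto the SVDD decision boundary can be pictured, in input space, as pulling an outlying point back to the surface of a ball. The sketch below shows only that simplified geometric step; the chapter itself works in a kernel-induced feature space and recovers a facial image through a preimage computation.

    import numpy as np

    def project_to_svdd_boundary(x, center, radius):
        """Pull a feature vector lying outside an SVDD ball back onto its surface;
        vectors already inside the ball are returned unchanged.
        center and radius describe the learned ball (assumed given). This is a
        simplified input-space picture, not the chapter's kernel-space procedure."""
        offset = x - center
        dist = np.linalg.norm(offset)
        if dist <= radius:
            return x
        return center + radius * offset / dist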
18. Strategies for Improving Face Recognition from Video
In this chapter, we look at face recognition from video, exploiting the fact that a given sequence contains multiple images of the subject. We devise strategies that select frames from the video sequences based on their quality and on their difference from each other, so as to improve recognition performance. We compare four such approaches using a video dataset collected at the University of Notre Dame, and we compare two pieces of commercially available software to see how they perform on different data sources. Finally, we experiment with clips in which the subject is walking and compare them to clips in which the subject is sitting and talking. We show that multiframe approaches perform better than single-frame approaches and that combining frame quality with frame difference improves performance more than either property alone.
Deborah Thomas, Kevin W. Bowyer, Patrick J. Flynn
19. Large-Population Face Recognition (LPFR) Using Correlation Filters
Practical face recognition applications may involve hundreds or thousands of people. Such large-population face recognition (LPFR) can be a real challenge when the recognition system stores one template for each subject in the database and matches each test image against a large number of templates in real time. Such a system should also be able to handle the situation where some subjects do not have a sufficient number of training images, and it should be flexible enough to add or remove subjects from the database. In this chapter, we address the LPFR problem and introduce correlation pattern recognition (CPR)-based algorithms and systems for LPFR. CPR, a branch of statistical pattern recognition, is based on selecting or creating a reference signal (e.g., correlation filters designed in the frequency domain from training images) and then determining the degree to which the objects under examination resemble the reference signal. We introduce class-dependence feature analysis (CFA), a general framework that applies kernel correlation filters (KCF) to effectively handle the LPFR problem. In place of the computationally demanding one-template-per-subject design, we introduce a more computationally attractive approach that handles a large number of classes via binary coding and error control coding (ECC). In this chapter, we focus on using advanced kernel correlation filters along with the ECC concept to accomplish LPFR. We include results on the Face Recognition Grand Challenge (FRGC) database.
Chunyan Xie, B. V. K. Vijaya Kumar
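CFA and kernel correlation filters are beyond a short sketch, but the basic correlation-filter matching step they build on, cross-correlating a test image with a filter in the frequency domain and scoring the sharpness of the resulting peak, can be illustrated as follows. The peak-to-sidelobe ratio and the exclusion window size are conventional choices assumed here, not taken from the chapter.

    import numpy as np

    def correlation_plane(image, filter_freq):
        """Cross-correlate a test image with a correlation filter specified in the
        frequency domain (same shape as the image) and return the correlation output."""
        return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(filter_freq)))

    def peak_to_sidelobe_ratio(plane, exclude=5):
        """Sharpness of the correlation peak relative to the rest of the plane; a
        higher PSR indicates a more confident match (threshold is application-specific)."""
        py, px = np.unravel_index(np.argmax(plane), plane.shape)
        sidelobe_mask = np.ones(plane.shape, dtype=bool)
        sidelobe_mask[max(0, py - exclude):py + exclude + 1,
                      max(0, px - exclude):px + exclude + 1] = False
        sidelobe = plane[sidelobe_mask]
        return (plane[py, px] - sidelobe.mean()) / (sidelobe.std() + 1e-9)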

Systems

20. Fingerprint Synthesis and Spoof Detection
This chapter addresses two topical issues in the field of fingerprint-based biometric systems: fingerprint template reverse-engineering, that is, the synthesis of fingerprint images starting from minutiae-based templates; and fake fingerprint detection, that is, discriminating between real and fake fingerprint impressions, the latter generated by artificial reproductions of a finger. After a brief review of the current state of the art, two innovative techniques are discussed in detail: a reconstruction approach able to synthesize a valid fingerprint image from an ISO 19794-2 template and a fake fingerprint detection method based on analysis of the finger odor.
Annalisa Franco, Davide Maltoni
21. Match-on-Card for Secure and Scalable Biometric Authentication
The majority of biometric systems in use today operate in a database environment. Whether it is a large-scale database such as US-VISIT or a small bank of biometrics stored on a server for logical access in an office, the solutions are based on insecure networks that are vulnerable to cyberattacks. Match-on-Card technology eliminates the need for the database by both storing and processing biometric data directly on a smartcard, providing a secure, privacy-enhancing biometric program with dynamic flexibility and scalability.
Over the years, biometric technology has proven to be a useful replacement for PINs and passwords irrespective of the market, the card technology, or the application. Advantages of using biometrics include security, speed, and user acceptance. However, biometrics has long been burdened by concerns over privacy and security.
Match-on-Card technology elevates biometrics from a mere PIN replacement to an integral part of a secure and privacy-enhancing smartcard solution. Match-on-Card technology takes biometric security and convenience one step further by performing the actual fingerprint match within the tamperproof environment of a smartcard. This removes the uncertainty of matching on a network-connected device, an external server, or a database, normally considered weak links in the security chain.
Match-on-Card creates a fully integrated biometrics solution for smartcards, which surpasses PINs and passwords in convenience, security, performance, and ease of use. The Match-on-Card technology was developed to meet the needs and demands of new markets and users of national ID and travel documents. Match-on-Card is becoming an integral part of high-security smartcards in many diverse markets.
Christer Bergman
22. Privacy and Security Enhancements in Biometrics
Many tout biometrics as the key to reducing identity theft and providing significantly improved security. However, unlike passwords, if the database or biometric is ever compromised, the biometric data cannot be changed or revoked. We introduce the concept of Biotopes™: revocable tokens that protect the privacy of the original user, allow many simultaneous variations that cannot be linked, and can be revoked if compromised. Biotopes can be computed from almost any biometric signature that is a collection of multibit numeric fields. The approach transforms the original biometric signature into an alternative revocable form (the Biotope) that protects privacy while supporting the robust distance metric necessary for approximate matching. Biotopes provide cryptographic security of the identity, support approximate matching in encoded form, cannot be linked across different databases, and are revocable. The most private form of a Biotope can be used to verify identity but cannot be used for search. We demonstrate Biotopes derived from different face-based recognition algorithms as well as a fingerprint-based Biotope, and show that Biotopes improve performance, often significantly.
The robust “distance metric”, computed on the encoded form, is provably identical to application of the same robust metric on the original biometric signature for matching subjects and never smaller for nonmatching subjects. The technique provides cryptographic security of the identity, supports matching in encoded form, cannot be linked across different databases, and is revocable.
Terrance E. Boult, Robert Woodworth
23. Adaptive Biometric Systems That Can Improve with Use
The performance of biometric recognition systems can degrade quickly when the input biometric traits exhibit substantial variations compared to the templates collected during enrollment of the system's users. On the other hand, a great deal of new unlabelled biometric data, which could be exploited to adapt the system to input data variations, becomes available during system operation over time. This chapter deals with adaptive biometric systems that can improve with use by exploiting unlabelled data. After a critical review of previous work on adaptive biometric systems, the use of semisupervised learning methods for the development of adaptive biometric systems is discussed. Two examples of adaptive biometric recognition systems based on semisupervised learning are presented, and the concept of biometric co-training is introduced for the first time.
Fabio Roli, Luca Didaci, Gian Luca Marcialis
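Biometric co-training, as introduced in the chapter, is not reproduced here; the sketch below only illustrates the general co-training idea with two matchers that pseudo-label unlabelled probes for each other when confident. The matcher interfaces, galleries, and the confidence threshold are hypothetical.

    def cotraining_round(gallery_a, gallery_b, probe_a, probe_b,
                         match_a, match_b, threshold=0.9):
        """One round of a simplified co-training update for two biometric matchers
        (e.g., two modalities or two algorithms). Each matcher scores the unlabelled
        probe against its own gallery; when one matcher is highly confident, the probe
        is added to the gallery of the other matcher."""
        score_a = match_a(gallery_a, probe_a)
        score_b = match_b(gallery_b, probe_b)
        if score_a >= threshold:
            gallery_b.append(probe_b)   # matcher A pseudo-labels the sample for matcher B
        if score_b >= threshold:
            gallery_a.append(probe_a)   # matcher B pseudo-labels the sample for matcher A
        return gallery_a, gallery_b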
24. Biometrics Standards
This chapter addresses the history, current status, and future developments of standardization efforts in the field of biometrics. The need for standards is established while noting the special nature of the biometrics field and the recent acceleration of demand for interoperable systems. The nature of national and international bodies involved in standardization is outlined, and in particular the structures and mechanisms for developing standards within the International Organization for Standardization (ISO) are explained. This chapter focuses on the activities of ISO's SC37 subcommittee dealing with biometric standardization. The work of each of the six working groups of SC37 is briefly explained, and some of the important achievements of these activities are highlighted. The chapter ends by looking at the future of standardization, both in terms of forthcoming projects within ISO and in terms of the interactions between ongoing research into biometric systems and the standardization process.
Farzin Deravi
Backmatter
Metadata
Title
Advances in Biometrics
Edited by
Nalini K. Ratha, BTech, MTech, PhD
Venu Govindaraju, BTech, MS, PhD
Copyright Year
2008
Publisher
Springer London
Electronic ISBN
978-1-84628-921-7
Print ISBN
978-1-84628-920-0
DOI
https://doi.org/10.1007/978-1-84628-921-7