
2016 | Book

Face Recognition Across the Imaging Spectrum


About this book

This authoritative text/reference presents a comprehensive review of algorithms and techniques for face recognition (FR), with an emphasis on systems that can be reliably used in operational environments. An international team of pre-eminent experts provides insights into the processing of multispectral and hyperspectral face images captured in uncontrolled environments. These discussions cover a variety of imaging sensors, ranging from state-of-the-art visible and infrared imaging sensors to RGB-D and mobile phone image sensors. A range of different biometric modalities is also examined, including face, periocular and iris. This timely volume is a mine of useful information for researchers, practitioners and students involved in image processing, computer vision, biometrics and security.

Table of Contents

Frontmatter
Chapter 1. An Overview of Spectral Imaging of Human Skin Toward Face Recognition
Abstract
Spectral imaging is a form of remote sensing that provides a means of collecting information from surroundings without physical contact. Differences in spectral reflectance over the electromagnetic spectrum allow for the detection, classification, or quantification of objects in a scene. The development of this field has largely benefited from Earth observing airborne and spaceborne programs. Information gained from spectral imaging has been recognized as making key contributions from the regional to the global scale. The burgeoning market of compact hyperspectral sensors has opened new opportunities, at smaller spatial scales, in a large number of applications such as medical, environmental, security, and industrial processes. The market is expected to continue to evolve and result in advancements in sensor size, performance, and cost. In order to employ spectral imaging for a specific task, it is critical to have a fundamental understanding of the phenomenology of the subject of interest, the imaging sensor, image processing, and interpretation of the results. Spectral imaging of human tissue builds on a well-characterized combination of components, e.g., hemoglobin, melanin, and water, that makes skin distinct from most backgrounds. These components are heterogeneously distributed and vary across the skin of individuals and between individuals. The spatial component of spectral imaging provides a basis for making spectral distinctions of these differences. This chapter provides an introduction to the interaction of energy in the electromagnetic spectrum with human tissue and other materials, the fundamentals of sensors and data collection, common analysis techniques, and the interpretation of results for decision making. The basic information provided in this chapter can be utilized for a wide range of applications where spectral imaging may be adopted, including face recognition.
David W. Allen
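As a rough illustration of the quantification idea described in this abstract, the following Python sketch performs per-pixel linear spectral unmixing with least squares. It is a minimal sketch only: the endmember spectra and the hyperspectral cube are synthetic placeholders, not measured reflectance curves for melanin, hemoglobin, or water, and the chapter itself does not prescribe this exact code.

```python
import numpy as np

# Minimal linear spectral unmixing sketch (illustrative only).
# The endmember spectra below are synthetic placeholders, not measured
# reflectance curves for melanin, hemoglobin, or water.
n_bands = 31                      # e.g., 400-700 nm in 10 nm steps
rng = np.random.default_rng(0)

# Columns of E are the assumed endmember spectra (bands x endmembers).
E = np.abs(rng.normal(size=(n_bands, 3)))

# A hyperspectral cube: height x width x bands (synthetic here).
cube = np.abs(rng.normal(size=(64, 64, n_bands)))

# Reshape to (bands x pixels) and solve E @ a = r for each pixel r in the
# least-squares sense; the rows of A.T are per-pixel abundance estimates.
pixels = cube.reshape(-1, n_bands).T            # bands x pixels
A, *_ = np.linalg.lstsq(E, pixels, rcond=None)  # endmembers x pixels
abundance_maps = A.T.reshape(64, 64, 3)
print(abundance_maps.shape)  # (64, 64, 3)
```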
Chapter 2. Collection of Multispectral Biometric Data for Cross-spectral Identification Applications
Abstract
The ultimate goal of cross-spectral biometric recognition applications involves matching probe images, captured in one spectral band, against a gallery of images captured in a different band or multiple bands (neither of which is the same band in which the probe images were captured). Both the probe and the gallery images may have been captured in either controlled or uncontrolled environments, i.e., with varying standoff distances, lighting conditions, and poses. Development of effective cross-spectral matching algorithms involves, first, collecting a cohort of research sample data under controlled conditions with fixed or varying parameters such as pose, lighting, obstructions, and illumination wavelengths. This chapter details “best practice” collection methodologies developed to compile large-scale datasets of both visible and SWIR face images, as well as gait images and videos. All aspects of data collection, from IRB preparation through data post-processing, are covered, along with instrumentation layouts for indoor and outdoor live capture setups. Specifications of video and still-imaging cameras used in collections are listed. Controlled collection of 5-pose, ANSI/NIST mugshot images is described, along with multiple SWIR data collections performed both indoors (under controlled illumination) and outdoors. Details of past collections performed at West Virginia University (WVU) to compile multispectral biometric datasets, including the age, gender, and ethnicity of the subject populations, are provided. Insight is given on the impact of collection parameters on the general quality of images collected, as well as on how these parameters impact design decisions at the algorithm level. Finally, where applicable, a brief description of how these databases have been used in multispectral biometrics research is included.
J. M. Dawson, S. C. Leffel, C. Whitelam, T. Bourlai
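As a small illustration of how per-image session metadata from such a collection might be organized, the sketch below writes one record to a JSON-lines manifest. The field names and values are hypothetical and do not reproduce the WVU collection schema.

```python
import json

# Hypothetical manifest entry for one captured face image; the field names
# are illustrative and do not reproduce the WVU collection schema.
record = {
    "subject_id": "S0001",          # anonymized identifier
    "session": 1,
    "modality": "face",
    "band": "SWIR-1550nm",          # or "VIS", etc.
    "pose": "frontal",              # one of the 5 mugshot poses
    "environment": "outdoor",       # indoor / outdoor
    "standoff_m": 50,
    "illumination": "daylight",
    "camera": "example-swir-cam",   # placeholder, not a real model
    "file": "S0001/session1/swir_1550_frontal.png",
}

with open("manifest.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```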
Chapter 3. Hyperspectral Face Databases for Facial Recognition Research
Abstract
Spectral imaging (SI) enables the collection of spectral information at specific wavelengths by dividing the spectrum into multiple bands. As such, SI offers a means to overcome several major challenges specific to current face recognition systems. However, the practical usage of hyperspectral face recognition (HFR) has, to date, been limited by the scarcity of public-domain databases for comparatively evaluating HFR. In this chapter, we review four publicly available hyperspectral face databases (HFDs), namely the CMU, PolyU-HSFD, IRIS-M, and Stanford databases, and summarize the key characteristics of each. In addition, a new large HFD, called IRIS-HFD-2014, is introduced. IRIS-HFD-2014 can serve as a benchmark for statistically evaluating the performance of current and future HFR algorithms and will be made publicly available.
Woon Cho, Andreas Koschan, Mongi A. Abidi
Chapter 4. MWIR-to-Visible and LWIR-to-Visible Face Recognition Using Partial Least Squares and Dictionary Learning
Abstract
Cross-spectral face recognition, which seeks to match a face image acquired in one spectral band (e.g., infrared) to that of a face acquired in another band (e.g., visible), is a relatively new area of research in the biometrics community. Thermal-to-visible face recognition has been receiving increasing attention, due to its promising potential for low-light or nighttime surveillance and intelligence gathering applications. However, matching a thermal probe image to a visible face database is highly challenging. Thermal imaging is emission dominated, acquiring thermal radiation naturally emitted by facial tissue, while visible imaging is reflection dominated, acquiring light reflected from the surface of the face. The resulting difference between the thermal face signature and the visible face signature renders conventional algorithms designed for within-spectral matching (e.g., visible-to-visible) unsuitable for thermal-to-visible face recognition. In this chapter, two thermal-to-visible face recognition approaches are discussed: (1) a partial least squares (PLS)-based approach and (2) a dictionary learning SVM approach. Preprocessing and feature extraction techniques used to correlate the signatures in the feature subspace are also discussed. We present recognition results on an extensive multimodal face dataset containing facial imagery acquired under different experimental conditions. Furthermore, we discuss key findings and implications for MWIR-to-visible and LWIR-to-visible face recognition. Finally, a novel imaging technique for acquiring an unprecedented level of facial detail in thermal images, polarimetric LWIR, is presented along with a framework for performing cross-spectral face recognition.
Shuowen Hu, Nathaniel J. Short, Prudhvi K. Gurram, Kristan P. Gurton, Christopher Reale
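The following Python sketch illustrates the cross-spectral matching idea in a simplified form: rather than the chapter's one-vs-all PLS models, it learns a single PLS mapping from thermal-domain feature vectors to visible-domain feature vectors with scikit-learn and then matches a mapped thermal probe against a visible gallery by cosine similarity. The random arrays stand in for real features of co-registered thermal/visible pairs; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Simplified sketch: learn a PLS mapping from thermal-domain features to
# visible-domain features, then match in the visible feature space.
rng = np.random.default_rng(0)
n_pairs, dim = 200, 128
X_thermal = rng.normal(size=(n_pairs, dim))   # thermal training features
Y_visible = rng.normal(size=(n_pairs, dim))   # corresponding visible features

pls = PLSRegression(n_components=20)
pls.fit(X_thermal, Y_visible)

# Gallery of visible features and one thermal probe (also synthetic).
gallery = rng.normal(size=(50, dim))
probe_thermal = rng.normal(size=(1, dim))
probe_mapped = pls.predict(probe_thermal)

def cosine(a, b):
    """Cosine similarity between each row of a and each row of b."""
    return (a @ b.T) / (np.linalg.norm(a, axis=1, keepdims=True)
                        * np.linalg.norm(b, axis=1))

scores = cosine(probe_mapped, gallery).ravel()
print("best gallery index:", int(np.argmax(scores)))
```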
Chapter 5. Local Operators and Measures for Heterogeneous Face Recognition
Abstract
This chapter provides a summary of local operators recently proposed for heterogeneous face recognition. It also analyzes the performance of each individual operator and demonstrates the performance of composite operators. Basic local operators include local binary patterns (LBP), generalized local binary patterns (GLBPs), Weber local descriptors (WLDs), Gabor filters, and histograms of oriented gradients (HOGs). They are directly applied to normalized face images. The composite operators include Gabor filters followed by LBP, Gabor filters followed by WLD, Gabor filters followed by GLBP, Gabor filters followed by a combination of LBP, GLBP, and WLD, Gabor ordinal measures (GOM), and composite multi-lobe descriptors (CMLD). When applying a composite operator to face images, images are first normalized and processed with a bank of Gabor filters, and then local operators or combinations of local operators are applied to the outputs of the Gabor filters. After a face image is encoded using the local operators, the outputs are converted to a histogram representation and then concatenated, resulting in a very long feature vector. No effective dimensionality reduction method or feature selection method has been found to reduce the size of the feature vector. Each component in the feature vector appears to contribute a small amount of information needed to generate a high-fidelity matching score. A matching score is generated by means of the Kullback-Leibler distance between two feature vectors. The cross-matching performance of heterogeneous face images is demonstrated on two datasets composed of active infrared and visible light face images. Both short and long standoff distances are considered.
Zhicheng Cao, Natalia A. Schmid, Thirimachos Bourlai
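A minimal Python sketch of the composite-operator pipeline described above: filter a face image with a small Gabor bank, apply uniform LBP to each magnitude response, concatenate the histograms, and compare two such vectors with a symmetrized Kullback-Leibler distance. Parameter choices (frequencies, orientations, LBP radius) are illustrative assumptions, not the chapter's settings.

```python
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def gabor_lbp_descriptor(img, frequencies=(0.1, 0.2), n_theta=4, P=8, R=1):
    """Toy Gabor+LBP descriptor: concatenated LBP histograms of the
    magnitude responses of a small Gabor filter bank."""
    hists = []
    for f in frequencies:
        for k in range(n_theta):
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_theta)
            mag = np.hypot(real, imag)
            mag_u8 = (255 * mag / (mag.max() + 1e-12)).astype(np.uint8)
            codes = local_binary_pattern(mag_u8, P, R, method="uniform")
            h, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                                density=True)
            hists.append(h)
    return np.concatenate(hists)

def sym_kl_distance(p, q, eps=1e-10):
    """Symmetrized Kullback-Leibler distance between two histograms."""
    p = p + eps; q = q + eps
    p = p / p.sum(); q = q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Random arrays stand in for two normalized face images.
rng = np.random.default_rng(0)
a = gabor_lbp_descriptor(rng.random((64, 64)))
b = gabor_lbp_descriptor(rng.random((64, 64)))
print("distance:", sym_kl_distance(a, b))
```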
Chapter 6. Assessment of Facial Recognition System Performance in Realistic Operating Environments
Abstract
The performance of an end-to-end facial recognition system depends on a variety of factors. The optical system, environment, illumination, target, and recognition algorithm can all affect its accuracy. Typically, only the facial recognition algorithm has been considered when evaluating performance. The remaining environmental and system components have not been considered in the design of facial recognition imaging systems. However, in scenarios relevant to the military and homeland security, the effects of weather and range can severely degrade performance, and it is necessary to understand the conditions under which this happens. This work introduces a methodology to explore the sensitivities of a facial recognition imaging system to blur, noise, and turbulence effects. Using a government-owned and an open-source facial recognition algorithm, system performance is evaluated under different optical blur, sensor noise, and turbulence conditions. The ramifications of these results on the design of long-range facial recognition systems are also discussed.
Kevin R. Leonard
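The sketch below illustrates two of the degradations named in this abstract, optical blur and sensor noise, applied to a probe image before re-running a matcher; atmospheric turbulence is not modeled. The image, parameter sweep, and the commented-out matcher call are illustrative assumptions, not the chapter's methodology.

```python
import numpy as np
import cv2

def degrade(img_gray, blur_sigma=2.0, noise_sigma=8.0, seed=0):
    """Apply Gaussian optical blur and additive Gaussian sensor noise.
    A crude stand-in for the blur/noise conditions in a sensitivity study;
    atmospheric turbulence is not modeled here."""
    blurred = cv2.GaussianBlur(img_gray, (0, 0), blur_sigma)
    rng = np.random.default_rng(seed)
    noisy = blurred.astype(np.float32) + rng.normal(0, noise_sigma,
                                                    blurred.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# A synthetic image stands in for a probe face; sweep the blur level and
# feed each degraded probe to the face recognition algorithm under test.
probe = (np.random.default_rng(1).random((128, 128)) * 255).astype(np.uint8)
for sigma in (0.5, 1.0, 2.0, 4.0):
    degraded = degrade(probe, blur_sigma=sigma)
    # match(degraded, gallery)  # placeholder for the FR algorithm call
```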
Chapter 7. Understanding Thermal Face Detection: Challenges and Evaluation
Abstract
In thermal face detection, researchers have generally assumed manual face detection or have designed algorithms that focus on indoor environments. However, facial properties depend on body temperature, the surrounding environment, and any accessories or occlusion present on the face. For instance, the presence of scarves, glasses, or other disguise accessories will alter the emitted heat pattern, thereby making it challenging to detect the face in thermal images. Similarly, daytime outdoor image acquisition affects the heat pattern differently from nighttime (or controlled indoor) image acquisition, which in turn affects automatic face detection performance. In this research, we provide a thorough understanding of the challenges in thermal face detection along with an experimental evaluation of traditional approaches. Further, we adapt the AdaBoost face detector to yield improved face detection performance in thermal images in both indoor and outdoor environments. We also propose a region of interest selection approach designed specifically for aiding occluded/disguised thermal face detection. Experiments are performed on the Notre Dame thermal face database as well as the IIITD databases that include variations such as disguise, age, and environmental (day/night) factors. The results suggest that while thermal face detection in semi-controlled environments is relatively easy, occlusion and disguise are challenges that require further attention.
Janhavi Agrawal, Aishwarya Pant, Tejas I. Dhamecha, Richa Singh, Mayank Vatsa
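For orientation, the Python sketch below runs OpenCV's stock boosted Haar cascade (trained on visible-light faces) on a histogram-equalized thermal frame. The chapter adapts and retrains the AdaBoost detector on thermal imagery; this sketch only shows the detection API, and the input filename is hypothetical.

```python
import cv2
import numpy as np

# Sketch: run OpenCV's stock Haar cascade (trained on visible-light faces)
# on a thermal frame. The chapter adapts the AdaBoost detector to thermal
# imagery; this only illustrates the detection pipeline.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if thermal is None:  # fall back to a synthetic frame so the sketch runs
    thermal = (np.random.default_rng(0).random((240, 320)) * 255).astype(np.uint8)

# Normalize the thermal dynamic range before detection.
norm = cv2.equalizeHist(thermal)
faces = cascade.detectMultiScale(norm, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(40, 40))
print("detections:", [tuple(map(int, f)) for f in faces])
```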
Chapter 8. Face Recognition Systems Under Spoofing Attacks
Abstract
In this chapter, we give an overview of spoofing attacks and spoofing countermeasures for face recognition systems, with a focus on visible spectrum (VIS) systems in 2D and 3D, as well as near-infrared (NIR) and multispectral systems. We cover the existing types of spoofing attacks and report on their success in bypassing several state-of-the-art face recognition systems. The results on two different face spoofing databases in VIS and one newly developed face spoofing database in NIR show that spoofing attacks present a significant security risk for face recognition systems in any part of the spectrum. The risk is partially reduced when using multispectral systems. We also give a systematic overview of the existing anti-spoofing techniques, with an analysis of their advantages and limitations and prospects for future work.
Ivana Chingovska, Nesli Erdogmus, André Anjos, Sébastien Marcel
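A texture-based countermeasure baseline in the spirit of those surveyed can be sketched as follows: uniform LBP histograms of face crops fed to an SVM that separates real accesses from attacks. This is not the chapter's exact pipeline, and the random crops below stand in for real and attack samples from a spoofing database.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_hist(gray_u8, P=8, R=1):
    """Uniform LBP histogram of a grayscale (uint8) face crop."""
    codes = local_binary_pattern(gray_u8, P, R, method="uniform")
    h, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return h

# Random crops stand in for real-access and attack face images.
rng = np.random.default_rng(0)
real = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(50)]
fake = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(50)]

X = np.array([lbp_hist(im) for im in real + fake])
y = np.array([0] * len(real) + [1] * len(fake))   # 0 = real, 1 = attack

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
probe = lbp_hist((rng.random((64, 64)) * 255).astype(np.uint8))
print("predicted label:", int(clf.predict(probe.reshape(1, -1))[0]))
```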
Chapter 9. On the Effects of Image Alterations on Face Recognition Accuracy
Abstract
Face recognition in controlled environments is nowadays considered rather reliable, and if the face is acquired under proper conditions, a good accuracy level can be achieved by state-of-the-art systems. However, we show that, even under these desirable conditions, some intentional or unintentional face image alterations can significantly affect the recognition performance. In particular, in scenarios where the user template is created from printed photographs rather than from images acquired live during enrollment (e.g., identity documents), digital image alterations can severely affect the recognition results. In this chapter, we analyze both the effects of such alterations on face recognition algorithms and the human capabilities to deal with altered images.
Matteo Ferrara, Annalisa Franco, Davide Maltoni
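As a crude illustration of one kind of intentional digital alteration, the sketch below alpha-blends two aligned face photographs before the result would be printed or enrolled in a document. Real morphing-style alterations also warp facial landmarks; that step, the filenames, and the blend weight here are illustrative assumptions, not the chapter's procedure.

```python
import cv2
import numpy as np

# Crude illustration of an intentional alteration: alpha-blending two
# (already aligned) face photographs. Landmark warping is omitted.
a = cv2.imread("subject_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
b = cv2.imread("subject_b.png", cv2.IMREAD_GRAYSCALE)
if a is None or b is None:  # synthetic fallback so the sketch runs
    rng = np.random.default_rng(0)
    a = (rng.random((128, 128)) * 255).astype(np.uint8)
    b = (rng.random((128, 128)) * 255).astype(np.uint8)

blended = cv2.addWeighted(a, 0.5, b, 0.5, 0)
cv2.imwrite("altered.png", blended)
```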
Chapter 10. Document to Live Facial Identification
Abstract
The National Institute of Standards and Technology (NIST) highlights that facial recognition (FR) has improved significantly for ideal cases, where face photographs are full-frontal, of good quality, and pose and illumination variations are not significant. However, there are automated face recognition scenarios that involve comparing degraded facial photographs of subjects against their high-resolution counterparts. Such non-ideal scenarios arise when legacy face photographs acquired by a government agency must be identified, for example, matching scanned but degraded face images from driver’s licenses, refugee documents, and visas against live photographs for the purpose of establishing or verifying a subject’s identity. The factors impacting the quality of such degraded face photographs include hairstyle, pose and expression variations, lamination and security watermarks, and other artifacts such as camera motion, camera resolution, and compression. In this work, we focus on investigating a set of methodological approaches to overcome most of the aforementioned limitations and achieve a high identification rate. Thus, we incorporate a combination of preprocessing and heterogeneous face-matching techniques, where comparisons are made between the original (degraded) photograph, the restored photograph, and the high-quality photograph (the mug shot of the live subject). For the purpose of this study, we first introduce the restorative building blocks, which include threshold-based (TB) denoising, total variational (TV) wavelet inpainting, and exemplar-based inpainting. Next, we empirically assess the improvement in image quality when each inpainting method is applied separately and independently, coupled with TB denoising. Finally, we compare the face-matching performance achieved when using the original degraded, restored, and live photographs and a set of academic and commercial face matchers, including the local binary patterns (LBP) and local ternary patterns (LTP) texture-based operators, combined with different distance metric techniques, as well as a state-of-the-art commercial face matcher. Our results show that TB denoising, coupled with either of the two inpainting methods selected for this study, yields a significant improvement in rank-1 identification accuracy. It is expected that the restoration approaches discussed in this work can be directly applied to operational scenarios such as border-crossing stations and various transit centers.
A. D. Clark, C. Whitelam, T. Bourlai
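The restore-then-match idea can be sketched in a few lines of Python: wavelet soft-threshold denoising (via PyWavelets) followed by inpainting of a marked artifact region. OpenCV's Telea inpainting is used here only as a stand-in for the chapter's TV wavelet and exemplar-based inpainting, and the degraded image and artifact mask are synthetic.

```python
import numpy as np
import cv2
import pywt

def wavelet_threshold_denoise(gray, wavelet="db4", level=2, thr=10.0):
    """Soft-threshold the detail coefficients of a 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(gray.astype(np.float32), wavelet, level=level)
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(denoised, wavelet)
    rec = rec[:gray.shape[0], :gray.shape[1]]
    return np.clip(rec, 0, 255).astype(np.uint8)

# Synthetic stand-ins: a degraded document photo and a mask marking
# watermark/lamination artifacts to be filled in.
rng = np.random.default_rng(0)
degraded = (rng.random((128, 128)) * 255).astype(np.uint8)
mask = np.zeros_like(degraded)
mask[40:60, 30:90] = 255   # artifact region (illustrative)

denoised = wavelet_threshold_denoise(degraded)
# Telea inpainting as a stand-in for TV-wavelet or exemplar-based inpainting.
restored = cv2.inpaint(denoised, mask, 3, cv2.INPAINT_TELEA)
# The restored image would then be matched (e.g., via LBP/LTP features)
# against the live mug shot.
```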
Chapter 11. Face Recognition in Challenging Environments: An Experimental and Reproducible Research Survey
Abstract
One important type of biometric authentication is face recognition, a research area of high popularity with a wide spectrum of approaches proposed in the last few decades. The majority of existing approaches are conceived for, or evaluated on, constrained still images. More recently, however, research interests have shifted toward unconstrained “in-the-wild” still images and videos. To some extent, current state-of-the-art systems are able to cope with variability due to pose, illumination, expression, and size, which represent the challenges in unconstrained face recognition. To date, only a few attempts have addressed the problem of face recognition in mobile environments, where significant degradation occurs during both data acquisition and transmission. This book chapter deals with face recognition in mobile and other challenging environments, where both still images and video sequences are examined. We provide an experimental study of one commercial off-the-shelf (COTS) and four recent open-source face recognition algorithms, including color-based linear discriminant analysis (LDA), local Gabor binary pattern histogram sequences (LGBPHSs), Gabor grid graphs, and intersession variability (ISV) modeling. Experiments are performed on several freely available challenging still image and video face databases, including one mobile database, always following the evaluation protocols that are attached to the databases. Finally, we supply an easily extensible open-source toolbox to rerun all the experiments, which includes the modeling techniques, the evaluation protocols, and the metrics used in the experiments, along with a detailed description of how to regenerate the results.
Manuel Günther, Laurent El Shafey, Sébastien Marcel
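The chapter's own toolbox implements the evaluation protocols and metrics; as a generic illustration of the kind of metric such protocols report, the sketch below computes an equal error rate (EER) from genuine and impostor similarity scores. The score distributions here are synthetic and unrelated to the chapter's experiments.

```python
import numpy as np

def eer(genuine, impostor):
    """Equal error rate from genuine and impostor similarity scores:
    sweep thresholds and find where FAR and FRR cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

# Synthetic score distributions stand in for a real experiment's output.
rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 1000)
impostor = rng.normal(0.0, 1.0, 10000)
print("EER ~", round(eer(genuine, impostor), 3))
```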
Chapter 12. Face Recognition with RGB-D Images Using Kinect
Abstract
Face recognition is one of the most extensively researched problems in biometrics, and many techniques have been proposed in the literature. While the performance of automated algorithms is close to perfect in constrained environments with controlled illumination, pose, and expression variations, recognition in unconstrained environments is still difficult. To mitigate the effect of some of these challenges, researchers have proposed to utilize 3D images, which can encode much more information about the face than 2D images. However, due to sensor cost, 3D face images are expensive to capture. On the other hand, RGB-D images obtained using consumer-level devices such as the Kinect, which provide pseudo-depth data in addition to a visible spectrum color image, offer a trade-off between quality and cost. In this chapter, we discuss existing RGB-D face recognition algorithms and present a state-of-the-art algorithm based on extracting discriminatory features using entropy and saliency from RGB-D images. We also present an overview of available RGB-D face datasets along with experimental results and analysis to understand the various facets of RGB-D face recognition.
Gaurav Goswami, Mayank Vatsa, Richa Singh
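A simplified Python sketch of the entropy-based feature idea mentioned above: compute local-entropy maps of the grayscale and depth channels of a registered RGB-D pair and concatenate HOG features extracted from both. The saliency weighting of the chapter's approach is omitted, and the random arrays stand in for real Kinect data.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.feature import hog

def rgbd_entropy_descriptor(gray_u8, depth_u8, radius=5):
    """Simplified RGB-D descriptor: HOG of local-entropy maps computed on
    the grayscale and depth channels, concatenated. Saliency weighting
    from the chapter's approach is omitted."""
    feats = []
    for channel in (gray_u8, depth_u8):
        ent = entropy(channel, disk(radius))
        feats.append(hog(ent, orientations=8, pixels_per_cell=(16, 16),
                         cells_per_block=(1, 1)))
    return np.concatenate(feats)

# Random arrays stand in for a registered Kinect RGB (grayscale) + depth pair.
rng = np.random.default_rng(0)
gray = (rng.random((96, 96)) * 255).astype(np.uint8)
depth = (rng.random((96, 96)) * 255).astype(np.uint8)
print("descriptor length:", rgbd_entropy_descriptor(gray, depth).shape[0])
```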
Chapter 13. Blending 2D and 3D Face Recognition
Abstract
Over the last decade, the performance of face recognition algorithms has improved systematically. This is particularly impressive when considering very large or challenging datasets such as FRGC v2 or Labelled Faces in the Wild. A better analysis of the structure of facial texture and shape is one of the main reasons for the improvement in recognition performance. Hybrid face recognition methods, combining holistic and feature-based approaches, have also increased efficiency and robustness. Both photometric information and shape information allow facial features to be extracted and exploited for recognition. However, both sources, the grey levels of image pixels and 3D data, are affected by several noise sources which may impair recognition performance. One of the main difficulties in matching 3D faces is the detection and localization of distinctive and stable points in 3D scans. Moreover, the large amount of data (tens of thousands of points) to be processed makes direct one-to-one matching a very time-consuming process. On the other hand, matching algorithms based on the analysis of 2D data alone are very sensitive to variations in illumination, expression and pose. Algorithms based on face shape information alone are instead relatively insensitive to these sources of noise. These complementary characteristics of 2D- and 3D-based face recognition algorithms call for a cooperative scheme which can take advantage of the strengths of both, while compensating for their weaknesses. We envisage many real and practical applications where 2D data can be used to improve 3D matching and vice versa. Towards this end, this chapter highlights both the advantages and disadvantages of 2D- and 3D-based face recognition algorithms. It also explores the advantages of blending 2D- and 3D data-based techniques, and proposes a novel approach for fast and robust matching. Several experimental results, obtained from publicly available state-of-the-art datasets, demonstrate the effectiveness of the proposed approach.
M. Tistarelli, M. Cadoni, A. Lagorio, E. Grosso
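The cooperative idea can be illustrated at the score level, although this is not the chapter's proposed matcher: min-max normalize the scores produced independently by a 2D (texture) matcher and a 3D (shape) matcher and fuse them with a weighted sum. The per-candidate scores below are hypothetical and assumed to be similarity scores.

```python
import numpy as np

def minmax(scores):
    """Min-max normalize a set of matcher scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(scores_2d, scores_3d, w_2d=0.5):
    """Weighted-sum fusion of normalized 2D (texture) and 3D (shape) scores."""
    return w_2d * minmax(scores_2d) + (1.0 - w_2d) * minmax(scores_3d)

# Hypothetical per-gallery-candidate similarity scores for one probe.
scores_2d = [0.62, 0.40, 0.75, 0.55]     # 2D texture matcher
scores_3d = [120.0, 95.0, 140.0, 180.0]  # 3D shape matcher (different scale)
fused = fuse(scores_2d, scores_3d, w_2d=0.6)
print("best candidate:", int(np.argmax(fused)))
```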
Chapter 14. Exploiting Score Distributions for Biometric Applications
Abstract
Biometric systems compare biometric samples to produce matching scores. However, the corresponding distributions are often heterogeneous and as a result it is hard to specify a threshold that works well in all cases. Score normalization techniques exploit the score distributions to improve the recognition performance. The goals of this chapter are to (i) introduce the reader to the concept of score normalization and (ii) answer important questions such as why normalizing matching scores is an effective and efficient way of exploiting score distributions, and when such methods are expected to work. In particular, the first section highlights the importance of normalizing matching scores; offers intuitive examples to demonstrate how variations between different (i) biometric samples, (ii) modalities, and (iii) subjects degrade recognition performance; and answers the question of why score normalization effectively utilizes score distributions. The next three sections offer a review of score normalization methods developed to address each type of variation. The chapter concludes with a discussion of why such methods have not gained popularity in the research community and answers the question of when and how one should use score normalization.
Panagiotis Moutafis, Ioannis A. Kakadiaris
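As a minimal sketch of one classic technique in this family, the code below applies Z-norm: each probe-versus-template score is centered and scaled by impostor statistics estimated offline for the corresponding gallery template, so that a single global threshold becomes usable. The statistics and scores are hypothetical placeholders.

```python
import numpy as np

def z_norm(raw_scores, impostor_mean, impostor_std, eps=1e-12):
    """Z-norm: center and scale matching scores by impostor statistics
    estimated offline for the corresponding gallery templates."""
    return (np.asarray(raw_scores) - impostor_mean) / (impostor_std + eps)

# Per-template impostor statistics (hypothetical, estimated on a cohort set).
impostor_mean = np.array([0.10, 0.35, 0.22])
impostor_std = np.array([0.05, 0.12, 0.08])

# Raw scores of one probe against three gallery templates; after Z-norm
# they become comparable and a single global threshold can be applied.
raw = np.array([0.30, 0.50, 0.45])
print(z_norm(raw, impostor_mean, impostor_std))
```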
Chapter 15. Multispectral Ocular Biometrics
Abstract
This chapter discusses the use of multispectral imaging to perform bimodal ocular recognition, where the eye region of the face is used for recognizing individuals. In particular, it explores the possibility of utilizing the patterns evident in the sclera, along with the iris, in order to improve the robustness of iris recognition systems. Commercial iris recognition systems typically capture frontal images of the eye in the near-infrared spectrum. However, in non-frontal images of the eye, iris recognition performance degrades considerably. As the eyeball deviates away from the camera, the iris information in the image decreases, while the scleral information increases. In this work, we demonstrate that by utilizing the texture of the sclera along with the vascular patterns evident on it, the performance of an iris recognition system can potentially be improved. The iris patterns are better observed in the near-infrared spectrum, while conjunctival vasculature patterns are better discerned in the visible spectrum. Therefore, multispectral images of the eye are used to capture the details of both the iris and the sclera. The contributions of this chapter include (a) the assembly of a multispectral eye image collection to study the impact of intra-class variation on sclera recognition performance, (b) the design and development of an automatic sclera, iris, and pupil segmentation algorithm, and (c) the improvement of iris recognition performance by fusing the iris and scleral patterns in non-frontal images of the eye.
Simona G. Crihalmeanu, Arun A. Ross
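A crude illustration of the segmentation step (not the chapter's automatic sclera/iris/pupil algorithm): threshold the dark pupil and fit the iris boundary with a Hough circle transform on a near-infrared eye image. The input filename, thresholds, and Hough parameters are illustrative assumptions.

```python
import cv2
import numpy as np

# Crude pupil/iris localization sketch: threshold the dark pupil, then fit
# the iris boundary with a Hough circle transform on a NIR eye image.
eye = cv2.imread("nir_eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if eye is None:  # synthetic fallback so the sketch runs
    eye = np.full((240, 320), 180, np.uint8)
    cv2.circle(eye, (160, 120), 60, 90, -1)   # iris
    cv2.circle(eye, (160, 120), 25, 20, -1)   # pupil

blur = cv2.medianBlur(eye, 5)
_, pupil_mask = cv2.threshold(blur, 50, 255, cv2.THRESH_BINARY_INV)

circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30, minRadius=30, maxRadius=100)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print("iris circle:", (x, y, r))
```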
Backmatter
Metadata
Title
Face Recognition Across the Imaging Spectrum
Edited by
Thirimachos Bourlai
Copyright Year
2016
Electronic ISBN
978-3-319-28501-6
Print ISBN
978-3-319-28499-6
DOI
https://doi.org/10.1007/978-3-319-28501-6
