
2010 | Book

Human Recognition at a Distance in Video

About this book

Most biometric systems employed for human recognition require physical contact with, or close proximity to, a cooperative subject. Far more challenging is the ability to reliably recognize individuals at a distance, when viewed from an arbitrary angle under real-world environmental conditions. Gait and face data are the two biometrics that can be most easily captured from a distance using a video camera.

This comprehensive and logically organized text/reference addresses the fundamental problems associated with gait- and face-based human recognition from color and infrared video data acquired at a distance. It examines both model-free and model-based approaches to gait-based human recognition, including newly developed techniques where both the model and the data (obtained from multiple cameras) are in 3D. In addition, the work considers new video-based techniques for face profile recognition, and for the super-resolution of facial imagery obtained at different angles. Finally, the book investigates integrated systems that detect and fuse both gait and face biometrics from video data.

Topics and features: discusses a framework for human gait analysis based on the Gait Energy Image (GEI), a spatio-temporal gait representation; evaluates the discriminating power of model-based gait features using Bayesian statistical analysis; examines methods for human recognition using 3D gait biometrics, and for moving-human detection using both color and thermal image sequences; describes approaches for the integration of face profile and gait biometrics, and for the super-resolution of frontal and side-view face images; introduces an objective non-reference quality evaluation algorithm for super-resolved images; presents performance comparisons between different biometrics and different fusion methods for integrating gait and super-resolved face images from video.

This unique and authoritative text is an invaluable resource for researchers and graduate students of computer vision, pattern recognition and biometrics. The book will also be of great interest to professional engineers of biometric systems.

Table of Contents

Frontmatter

Introduction to Gait-Based Individual Recognition at a Distance

Frontmatter
Chapter 1. Introduction
Abstract
This book addresses the problem of recognizing people at a distance in video. Recognizing humans and their activities has become an important research topic in image processing, computer vision and pattern recognition. Related research in biometrics is given high priority for homeland security, border control, anti-terrorism, and automated surveillance, since it integrates identity information with these tasks.
Bir Bhanu, Ju Han

Gait-Based Individual Recognition at a Distance

Frontmatter
Chapter 2. Gait Representations in Video
Abstract
In this chapter, we first present a spatio-temporal gait representation, called the gait energy image (GEI), to characterize human walking properties. Next, a general GEI-based framework is developed for human motion analysis under different situations. The applications of this general framework will be discussed in the next chapter.
Bir Bhanu, Ju Han
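A minimal sketch of the GEI computation, assuming the silhouettes have already been extracted, size-normalized and horizontally aligned (preprocessing that the book covers in detail); the helper name and array conventions below are illustrative, not the authors' code.

    import numpy as np

    def gait_energy_image(silhouettes):
        """Average a sequence of aligned binary silhouettes into a GEI.

        silhouettes: list of 2D binary arrays of identical shape, already
        size-normalized and horizontally centred, covering one or more
        complete gait cycles.
        Returns a float image in [0, 1]: bright pixels correspond to body
        parts that move little, grey pixels capture limb motion.
        """
        stack = np.stack([s.astype(np.float32) for s in silhouettes], axis=0)
        return stack.mean(axis=0)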
Chapter 3. Model-Free Gait-Based Human Recognition in Video
Abstract
In this chapter, the GEI-based general framework presented earlier is used for individual recognition in diverse scenarios.
Insufficient training data associated with an individual is a major problem in gait recognition due to the difficulty of data acquisition. To address this issue, we not only compute real templates from training silhouette sequences directly, but also generate synthetic templates from training sequences by simulating silhouette distortion. Features learned from real templates characterize the human walking properties present in the training sequences, while features learned from synthetic templates predict gait properties under other conditions. The features from real and synthetic templates are therefore fused at the decision level to improve recognition performance.
Bir Bhanu, Ju Han
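One simple way to simulate the silhouette distortion mentioned above is to corrupt the lower part of a real template, mimicking missing feet or ground shadows; the sketch below is only an illustration of this idea, and the cut fractions are assumptions rather than the book's actual settings.

    import numpy as np

    def synthetic_templates(gei, cut_fractions=(0.05, 0.10, 0.15, 0.20)):
        """Generate distorted variants of a real GEI template by zeroing
        out the lowest rows (illustrative distortion model only)."""
        h = gei.shape[0]
        variants = []
        for f in cut_fractions:
            distorted = gei.copy()
            distorted[h - int(round(f * h)):, :] = 0.0
            variants.append(distorted)
        return variants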
Chapter 4. Discrimination Analysis for Model-Based Gait Recognition
Abstract
Gait has been evaluated as a new biometric through psychological experiments. However, most gait recognition approaches do not provide theoretical or experimental performance predictions, so the discriminating power of gait as a feature for human recognition cannot be evaluated. In this chapter, a Bayesian statistical analysis is performed to evaluate the discriminating power of static gait features (body part dimensions). Through probabilistic simulation, we not only predict the probability of correct recognition (PCR) with regard to different within-class feature variances, but also obtain an upper bound on PCR with regard to different human silhouette resolutions. In addition, the maximum number of people in a database is obtained for a given allowable error rate. This is extremely important for gait recognition in large databases.
Bir Bhanu, Ju Han
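As a rough illustration of this kind of probabilistic simulation (not the book's exact model), the toy Monte Carlo sketch below estimates PCR under a simplified Gaussian assumption: each person's static features form a Gaussian class with a shared within-class variance, and a probe is assigned to the nearest class mean.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_pcr(n_people=100, n_features=4, within_std=0.05,
                     between_std=1.0, n_trials=20000):
        """Estimate the probability of correct recognition (PCR) for a toy
        Gaussian class model with nearest-class-mean classification."""
        means = rng.normal(0.0, between_std, size=(n_people, n_features))
        correct = 0
        for _ in range(n_trials):
            person = rng.integers(n_people)
            probe = means[person] + rng.normal(0.0, within_std, n_features)
            pred = int(np.argmin(np.linalg.norm(means - probe, axis=1)))
            correct += int(pred == person)
        return correct / n_trials

    # PCR drops as the within-class variance or the database size grows.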
Chapter 5. Model-Based Human Recognition—2D and 3D Gait
Abstract
In this chapter, we propose a kinematic-based approach to recognize individuals by gait using a single camera or multiple cameras for 2D gait recognition where the model is in 3D and the data are in 2D. In addition, we present a 3D gait recognition approach where both the model and the gait data are in 3D. Detailed technical approaches and experimental results are presented for both 2D and 3D gait recognition.
Bir Bhanu, Ju Han
Chapter 6. Fusion of Color/Infrared Video for Human Detection
Abstract
In this chapter, we approach the task of human silhouette extraction from color and thermal image sequences using automatic image registration. Image registration between color and thermal images is a challenging problem due to the difficulties associated with finding correspondence. However, moving people in a static scene provide cues to address this problem. We first propose a hierarchical scheme to automatically find the correspondence between the preliminary human silhouettes extracted from synchronous color and thermal image sequences for image registration. Next, we discuss strategies for probabilistically combining cues from registered color and thermal images for improved human silhouette detection. It is shown that the proposed approach achieves good results for image registration and human silhouette extraction. Experimental results also show a comparison of various sensor fusion strategies and demonstrate the improvement in performance over non-fused cases for human silhouette extraction.
Bir Bhanu, Ju Han
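The sketch below shows one standard way to combine per-pixel foreground probabilities from registered color and thermal frames, assuming the two cues are conditionally independent given the pixel label; the book's actual combination strategies may differ, and the probability maps are assumed to come from separate background-subtraction steps.

    import numpy as np

    def fuse_foreground(p_color, p_thermal, threshold=0.5):
        """Fuse two per-pixel foreground probability maps (same shape)
        under a conditional-independence assumption and return the fused
        probabilities plus a binary silhouette mask."""
        num = p_color * p_thermal
        den = num + (1.0 - p_color) * (1.0 - p_thermal)
        p_fused = num / np.clip(den, 1e-8, None)
        return p_fused, p_fused > threshold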

Face Recognition at a Distance in Video

Frontmatter
Chapter 7. Super-Resolution of Facial Images in Video at a Distance
Abstract
In this chapter, we address the problem of super-resolution of facial images in videos that are acquired at a distance. In particular, we consider (a) a closed-loop approach for super-resolution of frontal faces, (b) super-resolution of frontal faces with facial expressions, and (c) super-resolution of side face images. The details of these technical approaches and experimental results are presented in this chapter.
Bir Bhanu, Ju Han
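For reference only, the sketch below is a naive multi-frame baseline (upsample registered low-resolution face crops, compensate their sub-pixel shifts, and average); the closed-loop approach described in the chapter is considerably more sophisticated, and the shift estimates here are assumed to come from a separate registration step.

    import numpy as np
    from scipy.ndimage import shift, zoom

    def naive_multiframe_sr(frames, shifts, factor=4):
        """Average bicubically upsampled, shift-compensated low-resolution
        frames as a crude multi-frame super-resolution baseline.

        frames: list of 2D float arrays of identical shape
        shifts: list of (dy, dx) sub-pixel offsets of each frame relative
                to the reference frame, in low-resolution pixels
        """
        acc = None
        for frame, (dy, dx) in zip(frames, shifts):
            up = zoom(frame, factor, order=3)                    # bicubic upsample
            up = shift(up, (dy * factor, dx * factor), order=3)  # align to reference
            acc = up if acc is None else acc + up
        return acc / len(frames)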
Chapter 8. Evaluating Quality of Super-Resolved Face Images
Abstract
The widespread use of super-resolution methods in a variety of applications, such as surveillance, has led to an increasing need for quality assessment measures. Current quality measures aim to compare different fusion methods by assessing the quality of the fused images, and they consider only the information transferred between the super-resolved image and the input images. In this chapter, we propose an integrated objective quality evaluation measure for super-resolved images, which focuses on evaluating the quality of super-resolved images constructed from input images under different conditions. The proposed quality evaluation measure combines both the relationship between the super-resolved image and the input images, and the relationship among the input images. Using the proposed measure, the quality of super-resolved face images constructed from videos is evaluated under different input conditions, including variations in pose, lighting, facial expressions and the number of input images.
Bir Bhanu, Ju Han
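To make the two-term structure concrete, here is a toy score (not the authors' measure) built from histogram-based mutual information: agreement between the super-resolved result, downsampled to the input size, and the inputs, normalized by the agreement among the inputs themselves.

    import numpy as np

    def mutual_info(a, b, bins=32):
        """Histogram-based mutual information between two equally sized
        grayscale images with values scaled to [0, 1]."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                    range=[[0, 1], [0, 1]])
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    def sr_quality(sr_downsampled, inputs):
        """Toy quality score: mean MI between the (downsampled) SR image
        and the inputs, divided by the mean pairwise MI among the inputs."""
        to_inputs = np.mean([mutual_info(sr_downsampled, x) for x in inputs])
        pairs = [(i, j) for i in range(len(inputs))
                 for j in range(i + 1, len(inputs))]
        among = np.mean([mutual_info(inputs[i], inputs[j]) for i, j in pairs])
        return float(to_inputs / max(among, 1e-8))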

Integrated Face and Gait for Human Recognition at a Distance in Video

Frontmatter
Chapter 9. Integrating Face Profile and Gait at a Distance
Abstract
Human recognition from arbitrary views is an important task for many applications, such as visual surveillance, covert security and access control. It is difficult in practice, especially when a person is walking at a distance under real-world outdoor conditions. For optimal performance, the system should use as much information as possible from the observations. In this chapter, we introduce a video-based system which combines cues of face profile and gait silhouette from single-camera video sequences. It is difficult to obtain reliable face profile information directly from a single low-resolution video frame. To overcome this problem, we first construct a high-resolution face profile image from multiple adjacent low-resolution frames for each video sequence. Then, we extract face features from the high-resolution profile image. Finally, dynamic time warping (DTW) is used as the matching method to compute the similarity of two face profiles based on the absolute values of curvature. For gait, we use the gait energy image (GEI), a compact spatio-temporal representation, to characterize human walking properties. Gait recognition is carried out based on direct GEI matching. Several schemes are considered for the fusion of face profile and gait. A number of dynamic video sequences are tested to evaluate the performance of our system, and the experimental results are compared and discussed.
Bir Bhanu, Ju Han
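For the face profile matching step, a generic dynamic time warping distance between two curvature sequences can be written as below; the inputs are assumed to be absolute curvature values sampled along each profile contour, and path-length normalization is omitted for brevity.

    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic time warping distance between two 1D sequences,
        e.g. absolute curvature sampled along two face profiles."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]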
Chapter 10. Match Score Level Fusion of Face and Gait at a Distance
Abstract
This chapter introduces a video-based method to recognize non-cooperating individuals at a distance in video who present a side view to the camera. Information from two biometric sources, side face and gait, is utilized and integrated for recognition. For the side face, an enhanced side face image (ESFI), a higher-resolution image compared with the image directly obtained from a single video frame, is constructed by integrating face information from multiple video frames. For gait, the gait energy image (GEI), a compact spatio-temporal representation of gait in video, is used to characterize human walking properties. The features of face and gait are obtained separately from ESFI and GEI using a combined principal component analysis (PCA) and multiple discriminant analysis (MDA) method. They are then integrated at the match score level using different fusion strategies. The approach is tested on a database of video sequences, corresponding to 45 people, collected over seven months, and the different fusion methods are compared and analyzed. The experimental results show that (a) better face features are extracted from ESFI compared to those from the original side face images; (b) the synchronization of face and gait is not necessary for the face template (ESFI) and the gait template (GEI), since the synthetic match scores combine information from both; and (c) integrated information from side face and gait is effective for human recognition in video.
Bir Bhanu, Ju Han
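The sketch below illustrates match score level fusion of the kind compared in the chapter: per-subject similarity scores from the two modalities are min-max normalized and combined with simple rules; the specific rules and weights used in the book are not assumed here, and higher scores are taken to mean better matches.

    import numpy as np

    def min_max_normalize(scores):
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / max(s.max() - s.min(), 1e-8)

    def fuse_match_scores(face_scores, gait_scores, rule="sum", w=0.5):
        """Fuse per-gallery-subject similarity scores from face and gait
        after min-max normalization and return the best-matching index."""
        f = min_max_normalize(face_scores)
        g = min_max_normalize(gait_scores)
        if rule == "sum":
            fused = f + g
        elif rule == "product":
            fused = f * g
        elif rule == "weighted":
            fused = w * f + (1.0 - w) * g
        else:
            raise ValueError(f"unknown rule: {rule}")
        return int(np.argmax(fused))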
Chapter 11. Feature Level Fusion of Face and Gait at a Distance
Abstract
Video-based human recognition at a distance by fusing multi-modal biometrics remains a challenging problem. Compared to the match score level fusion approach (Chap. 10), the approach presented in this chapter utilizes and integrates information from side face and gait at the feature level. The features of face and gait are obtained separately using principal component analysis (PCA) from the enhanced side face image (ESFI) and the gait energy image (GEI), respectively. Multiple discriminant analysis (MDA) is then applied to the concatenated features of face and gait to obtain discriminating synthetic features. This process generates better features and reduces the curse of dimensionality. The proposed scheme is tested on two comparative data sets to show the effects of clothing changes and of changes in the face over time. Moreover, the proposed feature level fusion is compared with the match score level fusion and with another feature level fusion scheme. The experimental results demonstrate that the synthetic features, encoding both side face and gait information, carry more discriminating power than the individual biometric features, and that the proposed feature level fusion scheme outperforms both the match score level fusion and the other feature level fusion scheme. The performance of the different fusion schemes is also shown as cumulative match characteristic (CMC) curves, which further demonstrate the strength of the proposed fusion scheme.
Bir Bhanu, Ju Han
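A minimal sketch of this feature level fusion pipeline, using scikit-learn's LDA as a stand-in for MDA: each modality is reduced with PCA, the reduced vectors are concatenated, and a multi-class discriminant projection yields the synthetic features. The component counts below are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def synthetic_features(esfi_vectors, gei_vectors, labels,
                           n_face=30, n_gait=30):
        """Reduce ESFI and GEI vectors separately with PCA, concatenate,
        and project with a multi-class discriminant analysis to obtain
        synthetic fused features (returns features and the fitted model)."""
        f = PCA(n_components=n_face).fit_transform(esfi_vectors)
        g = PCA(n_components=n_gait).fit_transform(gei_vectors)
        concatenated = np.hstack([f, g])
        mda = LinearDiscriminantAnalysis()
        return mda.fit_transform(concatenated, labels), mda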

Conclusions for Integrated Gait and Face for Human Recognition at a Distance in Video

Frontmatter
Chapter 12. Conclusions and Future Work
Abstract
This book has focused on human recognition at a distance by integrating gait and face in video. The research has demonstrated that the proposed video-based fusion system is effective for human identification. The representations of face and gait, both of which fuse information from multiple video frames, are promising for real-world applications, and the integration of face and gait biometrics will be highly useful in practice. Several important problems are addressed in this book. A summary of the key contributions in gait-based human recognition, video-based face recognition, and the fusion of gait and face for individual recognition is given in this chapter.
Bir Bhanu, Ju Han
Backmatter
Metadata
Title
Human Recognition at a Distance in Video
Authors
Bir Bhanu
Ju Han
Copyright Year
2010
Publisher
Springer London
Electronic ISBN
978-0-85729-124-0
Print ISBN
978-0-85729-123-3
DOI
https://doi.org/10.1007/978-0-85729-124-0
