
About This Book

This book constitutes the refereed proceedings of the 11th Chinese Conference on Biometric Recognition, CCBR 2016, held in Chengdu, China, in October 2016.

The 84 revised full papers presented in this book were carefully reviewed and selected from 138 submissions. The papers focus on Face Recognition and Analysis; Fingerprint, Palm-print and Vascular Biometrics; Iris and Ocular Biometrics; Behavioral Biometrics; Affective Computing; Feature Extraction and Classification Theory; Anti-Spoofing and Privacy; Surveillance; and DNA and Emerging Biometrics.



Face Recognition and Analysis


Occlusion-Robust Face Detection Using Shallow and Deep Proposal Based Faster R-CNN

As the first essential step of automatic face analysis, face detection always receives high attention. The performance of current state-of-the-art face detectors cannot fulfill the requirements of real-world scenarios, especially in the presence of severe occlusions. This paper proposes a novel and effective approach to occlusion-robust face detection. It combines two major phases, i.e., proposal generation and classification. In the former, we merge the proposals given by a coarse-to-fine shallow pipeline with those of a Region Proposal Network (RPN) based deep one to generate a more comprehensive set of candidate regions. In the latter, we further decide whether the regions are faces using a well-trained Faster R-CNN. Experiments are conducted on the WIDER FACE benchmark, and the results clearly prove the competency of the proposed method at detecting occluded faces.

Jingbo Guo, Jie Xu, Songtao Liu, Di Huang, Yunhong Wang

Locally Rejected Metric Learning Based False Positives Filtering for Face Detection

Face detection in the wild needs to deal with various challenging conditions, which often leads to situations where the intraclass difference among faces exceeds the interclass difference between faces and non-faces. Based on this observation, we propose in this paper a locally rejected metric learning (LRML) based false positives filtering method. We first learn prototype faces with the affinity propagation clustering algorithm, and then apply locally rejected metric learning to seek a linear transformation that reduces the differences between each face and the prototype faces while enlarging the differences between non-faces and the prototype faces, preserving the distribution of the learned prototype faces through a locally rejected term. With the learned transformation, data are mapped into a new domain where faces can be detected exactly. Results on FDDB and a self-collected dataset indicate that our method outperforms the Viola-Jones face detector, and that combining the two methods further improves face detection.

Nanhai Zhang, Jiajie Han, Jiani Hu, Weihong Deng

Face Classification: A Specialized Benchmark Study

Face detection evaluation generally involves three steps: block generation, face classification, and post-processing. However, face detection performance is largely influenced by block generation and post-processing, concealing the performance of the core face classification module. Moreover, implementing and optimizing all three steps is a heavy workload, which is a big barrier for researchers who only care about classification. Motivated by this, we conduct a specialized benchmark study in this paper that focuses purely on face classification. We start with face proposals and build a benchmark dataset with about 3.5 million patches for two-class face/non-face classification. Results with several baseline algorithms show that, without the help of post-processing, the performance of face classification itself is still not very satisfactory, even with a powerful CNN method. We will release this benchmark to help assess the performance of face classification alone and to ease the participation of other interested researchers.

Jiali Duan, Shengcai Liao, Shuai Zhou, Stan Z. Li

Binary Classifiers and Radial Symmetry Transform for Fast and Accurate Eye Localization

In order to locate eyes for iris recognition, this paper presents a fast and accurate eye localization algorithm under active infrared (IR) illumination. The algorithm is based on binary classifiers and the fast radial symmetry transform. First, eye candidates are detected by the fast radial symmetry transform in the infrared image. Then three-stage binary classifiers are used to eliminate most unreliable eye candidates. Finally, a mean eye template is employed to identify the real eyes among the remaining reliable candidates. A large number of tests have been completed to verify the performance of the proposed algorithm. Experimental results demonstrate that the proposed algorithm is robust and efficient.

Pei Qin, Junxiong Gao, Shuangshuang Li, Chunyu Ma, Kaijun Yi, Tomas Fernandes

Robust Multi-view Face Alignment Based on Cascaded 2D/3D Face Shape Regression

In this paper, we present a cascaded regression algorithm for multi-view face alignment. Our method employs a two-stage cascaded regression framework and estimates 2D and 3D facial feature points simultaneously. In stage one, 2D and 3D facial feature points are roughly detected on the input face image, and head pose analysis is applied to the 3D facial feature points to estimate the head pose. The face is then classified into one of three categories, namely left profile, frontal, and right profile, according to its pose. In stage two, accurate facial feature points are detected using the regression model corresponding to the pose category of the input face. Compared with existing face alignment methods, our proposed method can better deal with arbitrary-view facial images whose yaw angles range from −90° to 90°. Moreover, to enhance robustness to facial bounding box variations, we randomly generate multiple bounding boxes according to the statistical distribution of bounding boxes and use them for initialization during training. Extensive experiments on public databases prove the superiority of our proposed method over state-of-the-art methods, especially in aligning large off-angle faces.

Fuxuan Chen, Feng Liu, Qijun Zhao

Extended Robust Cascaded Pose Regression for Face Alignment

We present a highly accurate and very efficient approach for face alignment, called Extended Robust Cascaded Pose Regression (ERCPR), which is robust to large variations due to differences in expression and pose. Unlike previous shape regression-based approaches, we propose to reference features weighted by three different face landmarks, which are much more robust to shape variations. Then, a correlation-based feature selection method and a two-level boosted regression are applied to establish an accurate relation between features and shapes. Experiments on two challenging face datasets (LFPW, COFW) show that our proposed approach significantly outperforms the state-of-the-art in terms of both efficiency and accuracy.

Yongxin Ge, Xinyu Ren, Cheng Peng, Xuchu Wang

Pose Aided Deep Convolutional Neural Networks for Face Alignment

Recently, deep convolutional neural networks have been widely used and have achieved state-of-the-art performance in face recognition tasks such as face verification, face detection and face alignment. However, face alignment remains a challenging problem due to large pose variation and the lack of data. Although researchers have designed various network architectures to handle this problem, pose information has rarely been used explicitly. In this paper, we propose Pose Aided Convolutional Neural Networks (PACN), which use different networks for faces with different poses. We first train a CNN for pose classification together with a base CNN, and then finetune different networks from the base CNN for faces of each pose. Since there would not be many images for each pose, we propose a data augmentation strategy that augments the data without affecting the pose. Experimental results show that the proposed PACN achieves results better than or comparable to the state-of-the-art methods.

Shuying Liu, Jiani Hu, Weihong Deng

Face Landmark Localization Using a Single Deep Network

Existing Deep Convolutional Neural Network (DCNN) methods for face landmark localization are based on Cascaded Networks or the Tasks-Constrained Deep Convolutional Network (TCDCN), which are complicated and difficult to train. To solve this problem, this paper proposes a new Single Deep CNN (SDN). Unlike cascaded CNNs, SDN stacks three layer groups: each group consists of two convolutional layers and a max-pooling layer. This network structure can extract more global high-level features, which express the face landmarks more precisely. Extensive experiments show that SDN outperforms existing DCNN methods and is robust to large pose variations, lighting changes and even severe occlusion, while its network complexity is also markedly reduced compared to other methods.

Zongping Deng, Ke Li, Qijun Zhao, Hu Chen

Cascaded Regression for 3D Face Alignment

Although 2D facial landmark detection methods built on the cascaded regression framework have been widely researched, their performance is still limited by face shape deformations and poor lighting conditions. With the assistance of the extra shape information provided by a 3D facial model, these difficulties can be eased to some degree. In this paper, we propose 3D Cascaded Regression for detecting facial landmarks on 3D faces. Our algorithm makes full use of both texture and depth information to overcome the difficulties caused by expression variations, and generates shape increments based on a weighted mixture of two separate shape updates regressed from texture and depth, respectively. Finally, the shape estimate is mapped into the original 3D facial data to obtain three-dimensional landmark coordinates. Experimental results on the BU-4DFE database demonstrate that our proposed approach achieves satisfactory performance in terms of detection accuracy and robustness, significantly superior to state-of-the-art methods.

Jinwen Xu, Qijun Zhao

Deep CNNs for Face Verification

This paper proposes a method based on two deep convolutional neural networks for face verification. In the face normalization process, we propose to use different facial landmarks to address the problems caused by pose. To increase verification ability, a semi-verification signal is used for training one network. The final face representation is formed by concatenating the features of the two deep CNNs after PCA reduction. Moreover, each feature is a combination of multi-scale representations obtained through auxiliary classifiers. For the final verification, we adopt only the face representation from one region and one resolution of a face, together with a Joint Bayesian classifier. Experiments show that our method can extract effective face representations and that our algorithm achieves 99.71 % verification accuracy on the LFW dataset.

Xiaojun Lu, Yang Wang, Weilin Zhang, Song Ding, Wuming Jiang

Robust Face Recognition Under Varying Illumination and Occlusion via Single Layer Networks

Feature extraction plays a significant role in face recognition; it is desirable to extract robust features that eliminate the effect of variations caused by illumination and occlusion. Motivated by the convolutional architecture of deep learning and the advantages of the KMeans algorithm for filter learning, a simple yet effective face recognition approach is proposed in this paper, which consists of three components: convolutional filter learning, nonlinear transformation and feature pooling. Concretely, firstly, KMeans is employed to construct the convolutional filters quickly on preprocessed image patches. Secondly, the hyperbolic tangent is applied as a nonlinear transformation on the convolved images. Thirdly, multiple levels of spatial pyramid pooling are utilized to incorporate the spatial geometry of the learned features. The recognition phase only requires an efficient linear regression classifier. Experimental results on two representative databases, AR and Extended Yale B, demonstrate the strong robustness of our method against real disguise, illumination, block occlusion, and pixel corruption.

Shu Feng
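The filter-learning step in the abstract above relies on plain KMeans over image patches, with the centroids then serving as convolutional filters. As a minimal sketch (the data here are toy 2-D points standing in for whitened image patches; the paper's exact preprocessing is not specified), Lloyd's algorithm looks like this:

```python
def kmeans(points, k, iters=20):
    """Lloyd's k-means; the final centroids act as learned filters."""
    centers = [list(p) for p in points[:k]]  # simple deterministic init
    for _ in range(iters):
        # Assign each point (patch) to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Recompute each centroid as the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centers[i] = [sum(col) / len(c) for col in zip(*c)]
    return centers

# Two well-separated blobs of 2-D "patches".
pts = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
centers = sorted(kmeans(pts, 2))
```

In the paper's setting each point would be a vectorized image patch and each centroid would be reshaped back into a 2-D convolution kernel.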

Sample Diversity, Discriminative and Comprehensive Dictionary Learning for Face Recognition

For face recognition, conventional dictionary learning (DL) methods have disadvantages. In this paper, we propose a novel robust, discriminative and comprehensive DL (RDCDL) model. The proposed model uses sample diversity of the same face image to make the dictionary robust. The model includes class-specific dictionary atoms and disturbance dictionary atoms, which can well represent data from different classes. Both the dictionary and the representation coefficients of data on the dictionary introduce discriminative information, which effectively improves the discrimination capability of the dictionary. The proposed RDCDL is extensively evaluated on benchmark face image databases and shows performance superior to many state-of-the-art sparse representation and dictionary learning methods for face recognition.

Guojun Lin, Meng Yang, Linlin Shen, Weicheng Xie, Zhonglong Zheng

Compact Face Representation via Forward Model Selection

This paper proposes a compact face representation for face recognition. The face and its landmark points are detected in the image and then used to generate transformed face regions. Different types of regions form the transformed face region datasets, on which face networks are trained. A novel forward model selection algorithm is designed to simultaneously select complementary face models and generate the compact representation. Employing a public dataset as the training set and fusing only six selected face networks, the recognition system with this compact face representation achieves 99.05 % accuracy on the LFW benchmark.

Weiyuan Shao, Hong Wang, Yingbin Zheng, Hao Ye
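The forward model selection described above greedily grows an ensemble until no candidate network improves the fused score. A sketch under invented names and scores (the paper's actual criterion would be verification accuracy of the fused face networks):

```python
def forward_select(models, score_fn, max_models):
    """Greedily add the model that most improves the ensemble score;
    stop when no remaining candidate improves it."""
    selected, best = [], float("-inf")
    while len(selected) < max_models:
        gains = [(score_fn(selected + [m]), m)
                 for m in models if m not in selected]
        if not gains:
            break
        score, model = max(gains)
        if score <= best:  # no candidate helps any more
            break
        selected.append(model)
        best = score
    return selected

# Toy complementarity table: "a" and "b" fuse well; adding "c" hurts.
scores = {frozenset(["a"]): 0.50, frozenset(["b"]): 0.60,
          frozenset(["c"]): 0.40, frozenset(["a", "b"]): 0.70,
          frozenset(["b", "c"]): 0.65, frozenset(["a", "b", "c"]): 0.69}
chosen = forward_select(["a", "b", "c"], lambda s: scores[frozenset(s)], 3)
# chosen == ["b", "a"]: "b" is best alone, "a" complements it, "c" is rejected
```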

A Semi-supervised Learning Algorithm Based on Low Rank and Weighted Sparse Graph for Face Recognition

Traditional graph-based semi-supervised learning cannot capture both the global and local structures of the data exactly. In this paper, we propose a novel low rank and weighted sparse graph. First, we utilize exact low rank representation via the nuclear norm and Frobenius norm to capture the global subspace structure. Meanwhile, we build a weighted sparse regularization term with shape interaction information to capture the local linear structure. Then, we employ the linearized alternating direction method with adaptive penalty to solve the objective function. Finally, the graph is constructed by an effective post-processing method. We evaluate the proposed method by performing semi-supervised classification experiments on the ORL, Extended Yale B and AR face databases. The experimental results show that our approach improves the accuracy of semi-supervised learning and achieves state-of-the-art performance.

Tao Zhang, Zhenmin Tang, Bin Qian

Multilinear Local Fisher Discriminant Analysis for Face Recognition

In this paper, a multilinear local Fisher discriminant analysis (MLFDA) framework is introduced for tensor object dimensionality reduction and recognition. MLFDA achieves feature extraction by finding a multilinear projection that maps the original tensor space into a tensor subspace which maximizes the local between-class scatter while minimizing the local within-class scatter. The experimental results show that MLFDA outperforms competing methods.

Yucong Peng, Peng Zhou, Hao Zheng, Baochang Zhang, Wankou Yang

Combining Multiple Features for Cross-Domain Face Sketch Recognition

Cross-domain face sketch recognition plays an important role in biometrics research and industry. In this paper, we propose a novel algorithm combining an intra-modality method, the Eigentransformation, with two inter-modality methods based on modality-invariant features, namely the Multiscale Local Binary Pattern (MLBP) and the Histogram of Averaged Orientation Gradients (HAOG). A sum-score fusion of min-max normalized scores is applied to fuse these recognition outputs. Experimental results on the CUFS (Chinese University of Hong Kong (CUHK) Face Sketch Database) and CUFSF (CUHK Face Sketch FERET Database) datasets reveal that the intra-modality and inter-modality methods provide complementary information, and that fusing them yields better performance.

Yang Liu, Jing Li, ZhaoYang Lu, Tao Yang, ZiJian Liu
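The fusion rule named above, a sum of min-max normalized scores, is a standard score-level combination and can be sketched directly (the score values below are invented for illustration):

```python
def min_max(scores):
    """Rescale one matcher's scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def sum_fusion(score_lists):
    """Normalize each matcher's scores, then add them candidate-wise."""
    normed = [min_max(s) for s in score_lists]
    return [sum(col) for col in zip(*normed)]

# Two matchers scoring the same three gallery candidates.
fused = sum_fusion([[1.0, 2.0, 3.0], [10.0, 30.0, 20.0]])  # → [0.0, 1.5, 1.5]
```

The min-max step matters because the intra-modality and inter-modality matchers produce scores on different scales; without it, the matcher with the larger range would dominate the sum.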

Recent Advances on Cross-Domain Face Recognition

Face recognition is a significant and pervasively applied computer vision task. As more specific application scenarios are explored, general face recognition methods dealing only with visible light images become insufficient. Cross-domain face recognition refers to a family of methods for face recognition problems whose inputs may come from multiple modalities, such as visible light images, sketches, near infrared images, 3D data, low-resolution images, and thermal infrared images, or across different ages, expressions, and ethnicities. Compared with general face recognition, cross-domain face recognition has not been widely explored, and only a few works systematically discuss this topic. Face recognition aimed at matching face photographs against other image modalities, usually called heterogeneous face recognition, exhibits a larger cross-domain gap and is a harder problem within this topic. This paper mainly investigates heterogeneous face databases, provides an up-to-date review of research efforts, and addresses common problems and related issues in cross-domain face recognition techniques.

Xiaoxiang Liu, Xiaobo Sun, Ran He, Tieniu Tan

Exploring Deep Features with Different Distance Measures for Still to Video Face Matching

Still to video (S2V) face recognition attracts much interest from researchers in computer vision and biometrics. In S2V scenarios, the still images are often captured with high quality under cooperative user conditions. On the contrary, video clips usually show more variation and are of lower quality. In this paper, we primarily focus on S2V face recognition where the gallery is formed by a few still face images and the query is a video clip. We utilize a deep convolutional neural network for S2V face recognition. We also study the choice of similarity measures for face matching and suggest the more appropriate measure for deep representations. Our results for both S2V face identification and verification yield a significant improvement over previous results on two databases, i.e., COX-S2V and PaSC.

Yu Zhu, Guodong Guo
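The choice of similarity measure studied above typically comes down to cosine similarity versus Euclidean distance; cosine ignores the magnitude of the feature vectors, which often suits deep representations. A minimal comparison:

```python
import math

def cosine_sim(u, v):
    """Cosine of the angle between u and v (scale-invariant)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def euclidean(u, v):
    """Straight-line distance (sensitive to vector magnitude)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# v is u scaled by 2: identical direction, different magnitude.
u, v = [1.0, 2.0, 2.0], [2.0, 4.0, 4.0]
cs = cosine_sim(u, v)   # 1.0: cosine treats them as identical
ed = euclidean(u, v)    # 3.0: Euclidean penalizes the scale gap
```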

Face Hallucination Using Convolutional Neural Network with Iterative Back Projection

Face hallucination aims to generate a high-resolution (HR) face image from an input low-resolution (LR) face image, a specific application of image super resolution to face images. Due to the complex and sensitive structure of face images, obtaining a super-resolved face image is more difficult than generic image super resolution. Recently, deep learning based methods have been introduced to face hallucination. In this work, we develop a novel network architecture that integrates an image super-resolution convolutional neural network with a network-style iterative back projection (IBP) method. Extensive experiments demonstrate that the proposed model obtains better performance.

Dongdong Huang, Heng Liu
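Iterative back projection, the classical component the network above emulates, repeatedly down-samples the current HR estimate, compares it with the LR input, and back-projects the residual. A 1-D sketch with block-average down-sampling (in the paper the initial estimate would come from the CNN; here it is simply zeros):

```python
def downsample(x, s):
    """Block-average by a factor of s."""
    return [sum(x[i:i + s]) / s for i in range(0, len(x), s)]

def upsample(x, s):
    """Nearest-neighbour (constant) upsampling by a factor of s."""
    return [v for v in x for _ in range(s)]

def ibp(lr, init, s, iters=10):
    """Refine `init` so that its down-sampled version matches `lr`."""
    hr = list(init)
    for _ in range(iters):
        # Residual between the LR input and the re-degraded estimate.
        err = [l - d for l, d in zip(lr, downsample(hr, s))]
        # Back-project the residual into the HR estimate.
        hr = [h + e for h, e in zip(hr, upsample(err, s))]
    return hr

hr = ibp([2.0, 4.0], [0.0, 0.0, 0.0, 0.0], s=2)
# downsample(hr, 2) now reproduces the LR input [2.0, 4.0]
```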

Facial Ethnicity Classification with Deep Convolutional Neural Networks

As an important attribute of human beings, ethnicity plays a very basic and crucial role in biometric recognition. In this paper, we propose a novel approach to the problem of ethnicity classification. Existing methods of ethnicity classification normally consist of two stages: extracting features from face images and training a classifier on the extracted features. Instead, we tackle the problem by using Deep Convolutional Neural Networks to extract features and classify them simultaneously. The proposed method is evaluated in three scenarios: (i) the classification of black and white people, (ii) the classification of Chinese and Non-Chinese people, and (iii) the classification of Han, Uyghurs and Non-Chinese. Experimental results on both public and self-collected databases demonstrate the effectiveness of the proposed method.

Wei Wang, Feixiang He, Qijun Zhao

Age Estimation Based on Multi-Region Convolutional Neural Network

As one of the most important biological features, age has tremendous application potential in various areas such as surveillance, human-computer interfaces and video analysis. In this paper, a new convolutional neural network, namely MRCNN (Multi-Region Convolutional Neural Network), is proposed based on multiple face subregions. It joins multiple face subregions together to estimate age, and each targeted region is analyzed to explore its contribution to age estimation. According to the geometric properties of the face, we select 8 subregions, construct 8 corresponding sub-network structures, and then fuse them at the feature level. The proposed MRCNN has two principal advantages: each of the 8 sub-networks learns the unique age characteristics of its corresponding subregion, and the eight networks are packaged together to provide complementary age-related information. Further, we analyze the estimation accuracy across all age groups. Experiments on MORPH illustrate the superior performance of the proposed MRCNN.

Ting Liu, Jun Wan, Tingzhao Yu, Zhen Lei, Stan Z. Li

Interval Type-2 Fuzzy Linear Discriminant Analysis for Gender Recognition

In this paper, we propose the interval type-2 fuzzy linear discriminant analysis (IT2FLDA) algorithm for gender recognition. In this algorithm, we first propose the supervised interval type-2 fuzzy C-Means (IT2FCM), which introduces class information into the IT2FCM; the supervised IT2FCM is then incorporated into traditional linear discriminant analysis (LDA). In this way, the class means estimated by the supervised IT2FCM converge to more desirable locations than the means obtained by class sample averaging or by the type-1 fuzzy k-nearest neighbor (FKNN) method in the presence of noise. Furthermore, the IT2FLDA is able to minimize the effects of uncertainties and find the optimal projection directions, making the feature subspace discriminating and robust, inheriting the benefits of both the supervised IT2FCM and traditional LDA. The experimental results show that the IT2FLDA improves the gender recognition rate compared to previous techniques.

Yijun Du, Xiaobo Lu, Weili Zeng, Changhui Hu

Fingerprint, Palm-print and Vascular Biometrics


Latent Fingerprint Enhancement Based on Orientation Guided Sparse Representation

Latent fingerprints are the finger skin impressions left unintentionally at crime scenes. They are usually of poor quality, with unclear ridge structure and various overlapping patterns. This paper proposes a latent fingerprint enhancement algorithm that combines the TV image decomposition model with image reconstruction by orientation guided sparse representation. Firstly, the TV model is applied to decompose a latent fingerprint image into texture and cartoon components. Secondly, we calculate the orientation field and the reliability of the texture image. Finally, for the low-reliability regions, sparse representation over a redundant dictionary, constructed with Gabor functions and the specific local ridge orientation, is iteratively used to reconstruct the image. Experimental results on the NIST SD27 latent fingerprint database indicate that the proposed algorithm can not only remove various kinds of noise, but also restore the corrupted ridge structure well.

Kaifeng Wei, Manhua Liu

A Hybrid Quality Estimation Algorithm for Fingerprint Images

Estimating the quality of a fingerprint image is very important in an automatic fingerprint identification system. It helps to reject poor-quality samples during enrollment and to adjust the enhancement, feature extraction and matching strategies according to the quality of the fingerprints, thus upgrading the performance of the overall system. In this paper, we propose a locality sensitive algorithm for fingerprint image quality assessment. For low curvature parts, we estimate quality based on sparse coefficients computed against a redundant Gabor dictionary. For high curvature parts, quality is measured by their responses to a set of symmetric descriptors. Besides, ridge and valley clarity is evaluated over the whole foreground. By integrating this information, the quality assessment of a fingerprint image is obtained. We test the proposed method on the FVC2002 Db1 and FVC2004 Db1 databases. Experimental results demonstrate that the proposed method is an effective predictor of biometric performance.

Xin Li, Ruxin Wang, Mingqiang Li, Chaochao Bai, Tong Zhao

A Preprocessing Algorithm for Touchless Fingerprint Images

Touchless fingerprint recognition, with its high acceptance, high security and hygiene advantages, is currently a hot research field in biometrics. The background areas of touchless fingerprint images are more complex than those of contact-based ones: touchless fingerprint images exhibit rotation and translation, and the contrast between ridge and valley lines is much lower. These factors seriously affect the performance of touchless fingerprint recognition, so general methods designed for contact fingerprint images can hardly achieve a good effect. A novel method is proposed to preprocess the images, targeting these characteristics of touchless fingerprint images. Firstly, Otsu thresholding on the Cb component of the YCbCr model is adopted to extract the finger area. Secondly, we combine a high-frequency enhancement filter with an iterative adaptive histogram equalization technique to enhance the fingerprint images. Thirdly, we propose a new method to extract the fingerprint ROI. Lastly, the AR–LBP algorithm is adopted for feature extraction and the nearest neighbor classifier is used for feature matching. Experimental results show that the proposed method achieves excellent identification results.

Kejun Wang, Huitao Cui, Yi Cao, Xianglei Xing, Rongyi Zhang
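The segmentation step above uses Otsu's method, which picks the threshold maximizing the between-class variance of the histogram. An exhaustive-search sketch on a flat list of channel values (in the paper these would be the Cb channel values of the YCbCr image):

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold t maximizing between-class variance;
    values below t form one class, values at or above t the other."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])          # weight of the lower class
        w1 = total - w0             # weight of the upper class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy data: dark background (10) vs bright finger region (200).
t = otsu_threshold([10] * 50 + [200] * 50)
```

A production version would precompute cumulative sums instead of re-summing the histogram at every candidate threshold.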

Palmprint Recognition via Sparse Coding Spatial Pyramid Matching Representation of SIFT Feature

The spatial pyramid matching using sparse coding (ScSPM) algorithm can construct palmprint image descriptors that effectively express both local and global features of a palmprint image. In this paper, we adopt sparse coding and max pooling instead of vector quantization coding and sum pooling to extract descriptors, which turns the nonlinear coding into linear coding. Then, a linear SVM classifier is applied to replace the nonlinear classifier in pyramid matching. We apply this algorithm to palmprint recognition and analyze in detail the effects of its parameters, including the dictionary size and the sparse coding parameter. The experimental results demonstrate the effectiveness of the ScSPM algorithm for palmprint recognition.

Ligang Liu, Jianxin Zhang, Aoqi Yang

A Finger Vein Identification System Based on Image Quality Assessment

Generally, the quality of the acquired finger vein images has a significant impact on the performance of a finger vein identification system. Therefore, considering the characteristics of vein images, we propose a novel finger vein identification system that takes image quality assessment into account. The embedded image quality assessment method improves the performance of the identification system by filtering out low-quality images. To better represent finger vein images, a score-level fusion strategy is proposed to combine texture information and structural information, obtained from the Local Binary Pattern (LBP) and the Histogram of Oriented Gradients (HOG), respectively. Comprehensive experiments on two finger vein image datasets demonstrate that the proposed image quality assessment and score-level fusion methods achieve superior performance for finger vein identification.

Zhixing Huang, Wenxiong Kang, Qiuxia Wu, Junhong Zhao, Wei Jia
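The LBP texture descriptor used above thresholds each 3x3 neighbourhood against its centre pixel to form an 8-bit code; histograms of these codes over the image are then compared. A minimal sketch of the per-pixel code:

```python
def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch, neighbours read clockwise
    from the top-left corner."""
    c = patch[1][1]
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, n in enumerate(nbrs):
        if n >= c:          # neighbour >= centre sets bit i
            code |= 1 << i
    return code

flat = lbp_code([[7, 7, 7], [7, 7, 7], [7, 7, 7]])   # all bits set → 255
dark = lbp_code([[1, 1, 1], [1, 9, 1], [1, 1, 1]])   # no bit set → 0
```

The score-level fusion in the paper would then combine the distance between LBP histograms with the distance between HOG descriptors, e.g. by a weighted sum after normalization.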

An Edge Detection Algorithm for Nonuniformly Illuminated Images in Finger-vein Authentication

Recently, finger-vein authentication has become a rising biometric technique owing to its outstanding security, accuracy and speed. To deal with rotation in finger-vein images, the edge of the finger in the image is detected and the inclination angle is calculated. However, in the devices commonly used for finger-vein authentication, illumination is deliberately uneven so that finger veins can be imaged more clearly and more features captured; conventional edge detection methods are therefore affected by the varying illumination of the finger background. A new, simple but effective edge detection algorithm specially designed for finger-vein authentication is thus proposed and evaluated in this paper. Experiments on 5,000 finger-vein images show that the proposed algorithm provides higher accuracy than conventional methods.

Hongyu Ren, Da Xu, Wenxin Li

Finger-Vein Recognition Based on an Enhanced HMAX Model

To overcome the shortcomings of traditional methods, in this paper we investigate the role of a biologically inspired network in finger-vein recognition. Firstly, robust feature representations of finger-vein images are obtained from an enhanced Hierarchical Model and X (HMAX) model and subsequently classified by an extreme learning machine (ELM). The enhanced HMAX model computes complex feature representations by simulating the hierarchical processing mechanism of the primate visual cortex, while ELM performs well in classification with a fast learning speed. Our proposed method is tested on the MMCBNU-6000 dataset and achieves good performance compared with state-of-the-art methods. The results further the case for biologically motivated approaches to finger-vein recognition.

Wenhui Sun, Jucheng Yang, Ying Xie, Shanshan Fang, Na Liu

Finger Vein Recognition via Local Multilayer Ternary Pattern

We propose a novel method for finger vein recognition in this paper. We use even-symmetric Gabor filters to smooth images and remove noise; then Contrast Limited Adaptive Histogram Equalization (CLAHE) is utilized for image enhancement. Finger veins are extracted via Maximum Curvature (MC), and after thinning with a morphological filter, we use the Local multilayer Ternary Pattern (LmTP) descriptor proposed in this paper to extract finger vein features. We also propose an algorithm for calculating the similarity of LmTP features. Experimental results show that the performance of the proposed method is better than other well-known methods and that LmTP is more robust than other local feature descriptors such as LBP, LTP and LmBP.

Hu Zhang, Xianliang Wang, Zhixiang He

A Performance Evaluation of Local Descriptors, Direction Coding and Correlation Filters for Palm Vein Recognition

As one of the newly emerging biometric techniques, palm vein recognition has received wide attention recently. In recent years, methods based on local descriptors, direction coding and correlation filters have been popular for palmprint, palm vein and finger vein recognition. In this paper, we conduct a performance evaluation of palm vein recognition using these methods. The experimental results show that the methods based on direction information achieve better recognition performance.

Jingting Lu, Hui Ye, Wei Jia, Yang Zhao, Hai Min, Wenxiong Kang, Bob Zhang

Enlargement of the Hand-Dorsa Vein Database Based on PCA Reconstruction

This paper introduces a novel method to enlarge the hand-dorsa vein database using principal component analysis (PCA), increasing the number of samples per class. The ten samples of each hand are divided into two sets, a feature set B and a projection set M. Set B provides the feature space computed by PCA; set M supplies the projection coefficients for new images. A new sample is then constructed from the feature space and projection coefficients by PCA reconstruction. In this work, the database is enlarged from 2040 to 10200 images, with the number of samples per hand increasing from 10 to 50. The experimental results show that the enlarged database yields a satisfactory recognition rate of 98.66 % using Partition Local Binary Patterns (PLBP), which indicates that the proposed method performs well and is applicable in simulation tests.

Kefeng Li, Guangyuan Zhang, Yiding Wang, Peng Wang, Cui Ni
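The projection-and-reconstruction step described above can be sketched with plain PCA machinery. This is an illustrative approximation of the pipeline, not the authors' code; function names and data layout (rows = flattened images) are assumptions:

```python
import numpy as np

def pca_basis(B, n_components):
    """Feature space from set B (rows = flattened vein images)."""
    mean = B.mean(axis=0)
    # SVD of the centered data yields the principal axes in Vt
    _, _, Vt = np.linalg.svd(B - mean, full_matrices=False)
    return mean, Vt[:n_components]          # (d,), (k, d)

def reconstruct(x, mean, basis):
    """Project a set-M sample onto the basis, then map back; the
    reconstruction serves as a synthesized new sample."""
    coeff = basis @ (x - mean)              # projection coefficients
    return mean + basis.T @ coeff
```

In the paper, reconstructions of set-M samples in the set-B feature space provide the additional 40 images per hand.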

Comparative Study of Deep Learning Methods on Dorsal Hand Vein Recognition

In recent years, deep learning techniques have improved the results of many image classification and retrieval tasks. This paper investigates deep learning based methods for dorsal hand vein recognition and makes a comparative study of popular Convolutional Neural Network (CNN) architectures (i.e., AlexNet, VGG Net and GoogLeNet) for this task. To the best of our knowledge, it is the first attempt to apply deep models to dorsal hand vein recognition. The evaluation is conducted on the NCUT database, and state-of-the-art accuracies are reached. Meanwhile, the experimental results demonstrate the advantage of deep features over shallow ones in discriminating the dorsal hand venous network, and confirm the necessity of the fine-tuning phase.

Xiaoxia Li, Di Huang, Yunhong Wang

Dorsal Hand Vein Recognition Across Different Devices

With the rapid development of information technology, distributed recognition systems are becoming widespread. However, differences in the acquisition hardware of the terminals and in the acquisition environments cause the captured biometric images to differ in contrast, lightness, translation, rotation angle, size and so on. These differences inevitably reduce recognition accuracy and fail to meet current needs. This paper analyses the main factors behind the heterogeneity of dorsal hand vein images captured by different devices. After grayscale normalization, a segmentation method based on gradient difference is used to segment the vein texture, and SIFT is used to extract and match features. The recognition rate reaches 90.17 %, higher than that of other algorithms. The method effectively addresses dorsal hand vein recognition across different devices.

YiDing Wang, Xuan Zheng, CongCong Wang

A New Finger Feature Fusion Method Based on Local Gabor Binary Pattern

This paper proposes a novel multimodal feature fusion method based on the local Gabor binary pattern (LGBP). First, feature maps of three finger modalities, fingerprint (FP), finger vein (FV) and finger knuckle print (FKP), are extracted using LGBP. The obtained LGBP-coded maps are further processed with a local-invariant gray description to generate Local Gabor based Invariant Gray Features (LGIGFs). To reduce the effect of finger pose variations during imaging, the LGIGFs are then weighted by a Gaussian model. The experimental results show that the proposed method fuses multimodal features effectively and greatly improves the correct recognition rate.

Yihua Shi, Zheng Zhong, Jinfeng Yang

Palmprint and Palm Vein Multimodal Fusion Biometrics Based on MMNBP

This paper presents a multi-biometric recognition method based on the fusion of palmprint and palm vein. First, the traditional LBP method is improved into a novel algorithm called the neighbor based binary pattern (NBP), which encodes the image using the gray-value relationships between adjacent pixels in a local area. Second, the palm vein and palmprint images are subdivided into several uniformly sized blocks and the gray mean value of each block is calculated. The multi-block mean image is then encoded by the NBP method, yielding the multi-block mean neighbor based binary pattern (MMNBP), and feature fusion is performed. Finally, the Hamming distance is used for matching. Comparison experiments against current typical and popular approaches are carried out on the PolyU contact public database and a self-built non-contact database. The experimental results indicate the superiority and effectiveness of the approach, which has good application prospects.

Sen Lin, Ying Wang, Tianyang Xu, Yonghua Tang
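The final matching step compares binary feature codes with the Hamming distance. A minimal sketch (the helper name is ours; codes are assumed to have equal length):

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Normalized Hamming distance between two binary feature codes:
    fraction of positions where the codes disagree (0 = identical)."""
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    return np.count_nonzero(code_a ^ code_b) / code_a.size
```

A pair is accepted as a match when the distance falls below a decision threshold chosen on a validation set.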

Iris and Ocular Biometrics


Design of a Wide Working Range Lens for Iris Recognition

This paper presents a methodology to solve the problem of the narrow working range of lenses for iris recognition, which is limited by the depth of field of the imaging lens. We propose a lens design with a liquid lens as the key component. The designed lens has a wide working range and can acquire high-resolution images. The design parameters include a focal length of 19.2–20.3 mm, a working distance from 250 mm to 450 mm, less than 0.5 % distortion, a working F-number of 2.8, a 4 mm image diameter and an operating spectrum of 840 nm–870 nm. At 166 lp/mm, the lens acquires iris images with MTF > 0.3 over the full field of view. The lens is composed of four spherical lenses and one liquid lens. With no moving parts inside, its structure is simple and convenient to control.

Wenzhe Liao, Kaijun Yi, Junxiong Gao, Xiaoyu Lv, Jinping Wang

Iris Image Quality Assessment Based on Saliency Detection

There are few restrictions on image capture in mobile iris recognition, so the iris texture is easily disturbed and the images may fail to meet the requirements of identification. If the quality of captured iris images can be pre-evaluated, unrecognizable iris images can be removed, reducing the operational burden and improving efficiency. Therefore, an approach to iris image quality assessment based on saliency detection is proposed in this paper. First, the Frequency-tuned (FT) method is used to detect salient image regions; then a binary image is obtained by thresholding the saliency map; finally, image quality is evaluated according to the shape characteristics of the connected regions in the binary image. The results show that the proposed method can evaluate image quality under both ideal and disturbed conditions and remove iris images rendered unrecognizable by interference.

Xiaonan Liu, Yuwen Luo, Silu Yin, Shan Gao
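The FT saliency-and-threshold steps can be approximated in grayscale as the distance between the blurred image and the global mean, followed by an adaptive threshold. This is a simplified sketch of the frequency-tuned idea (a box blur stands in for the Gaussian blur, and all names and the 2× threshold are our assumptions):

```python
import numpy as np

def box_blur(img, r=2):
    """Separable box blur of radius r (a stand-in for the Gaussian
    blur used by the original FT method)."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    pad = np.pad(img.astype(float), r, mode='edge')
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 1, tmp)

def ft_saliency(img):
    """Frequency-tuned saliency (grayscale version): distance of the
    blurred image from the global mean intensity."""
    return np.abs(img.mean() - box_blur(img))

def saliency_mask(img):
    """Binary saliency map: pixels at least twice the mean saliency."""
    s = ft_saliency(img)
    return s >= 2 * s.mean()
```

The connected regions of the binary mask would then be analyzed for shape, as the abstract describes.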

An Accurate Iris Segmentation Method Based on Union-Find-Set

Iris segmentation is one of the most important steps in an iris recognition system. Many existing localization methods model the iris outer boundary as a circle; however, the outer boundary is not a circle when the eye is only partially open. In this paper, we propose a method based on a union-find set to extract the accurate iris boundary. The proposed method has been tested on a visible-light iris database captured in our own laboratory. The experimental results show that the proposed method outperforms the state-of-the-art in both localization accuracy and localization speed.

Lijun Zhu, Weiqi Yuan

Combining Multiple Color Components for Efficient Visible Spectral Iris Localization

Iris localization is a prerequisite for precise iris recognition. Compared with near-infrared iris images, visible spectral iris images may have fuzzier boundaries, which impairs iris detection. We use multiple color components from different color spaces to perform visible spectral iris localization. First, the sclera is segmented and the eyelids are located on the α component image through contrast adjustment and polynomial fitting. Second, morphological processing and the Circular Hough Transform (CHT) are applied to localize the limbic boundary on the R component image. Similarly, the pupillary boundary is localized on the R and α component images. Experimental results on a visible spectral iris image dataset indicate that the proposed method performs well on iris localization.

Xue Wang, Yuqing He, Kuo Pei, Mengmeng Liang, Jingxi He

Extraction of the Iris Collarette Based on Constraint Interruption CV Model

This paper proposes a method to extract the collarette based on a CV model with constraints. The method handles iris images whose multiple borders differ in strength, and extracts the complex collarette even when its border is weak. It analyzes the variation of the model parameters during optimization, controls the model parameters by detecting errors in different applications, and establishes constraints on the CV model to ensure that the iteration is interrupted at the local optimum, namely the collarette boundary. On samples from our database, the experimental results show that the method extracts the iris collarette effectively and rapidly.

Jing Huang, Weiqi Yuan

A Method of Vessel Segmentation Based on BP Neural Network for Color Fundus Images

The morphological and structural changes of retinal vessels are very important for the early diagnosis of many diseases. In view of the characteristics of retinal vessels, we present a new method for vessel segmentation based on a BP neural network. The method consists of four feature extraction steps: histogram equalization of the green channel, morphological processing, Gaussian matched filtering and the Hessian matrix. The fundus vessels are then segmented by the BP neural network. We conduct experiments on the DRIVE and STARE databases. The results show that our method segments fundus retinal vessels effectively.

Haiying Xia, Shuaifei Deng
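The first step, histogram equalization of the green channel, can be sketched for an 8-bit channel as follows (the helper name is ours; this is the textbook CDF-remapping form, not the authors' code):

```python
import numpy as np

def equalize_hist(channel):
    """Histogram equalization of one 8-bit channel (applied to the
    green channel of the fundus image in the pipeline above)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    # remap gray levels so the cumulative distribution becomes uniform
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (channel.size - cdf_min) * 255)
    return lut.clip(0, 255).astype(np.uint8)[channel]
```

The equalized channel then feeds the morphological, matched-filter and Hessian features.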

Corneal Arcus Segmentation Method in Eyes Opened Naturally

Detecting the corneal arcus by image analysis is of great significance for diagnosing abnormal lipid metabolism. Traditional methods suffer from robustness problems when the images are collected non-invasively. In this paper, an improved corneal arcus segmentation method is proposed. First, the candidate area is located by detecting the eyelid and eyelashes. Second, based on a similarity definition and the projection of color components, the union-find algorithm is used to cluster the target. Finally, a color metric is defined to complete the segmentation of the corneal arcus. On 1968 images from our database, the segmentation accuracy reaches 95.4 %.

Le Chang, Weiqi Yuan

Image Super-Resolution for Mobile Iris Recognition

Iris recognition is a reliable method to protect the security of mobile devices. However, low-resolution (LR) iris images are inevitably acquired by mobile devices, which makes mobile iris recognition very challenging. This paper adopts two pixel-level super-resolution (SR) methods: Super-Resolution Convolutional Neural Networks (SRCNN) and Super-Resolution Forests (SRF). The SR methods are applied to the normalized iris images to recover more iris texture. Ordinal measures (OMs) are applied to extract robust iris features, and the Hamming distance is used to calculate the matching score. Experiments are performed on two mobile iris databases. The results show that pixel-level SR technology has limited effectiveness in improving iris recognition accuracy. The SRCNN and SRF methods achieve comparable recognition results, with SRF much faster at both the training and testing stages.

Qi Zhang, Haiqing Li, Zhaofeng He, Zhenan Sun

Behavioral Biometrics


Online Finger-Writing Signature Verification on Mobile Device for Local Authentication

Most existing work on online signature verification has focused on algorithmic improvements at each stage, such as feature extraction, matching and classifier design. Far less attention has been paid to the design of a real system and the issues arising in practical applications. In this paper, we design a novel system for online finger-writing signature verification on mobile devices. Using our proposed data collection protocol, a small Chinese signature database has been established. Finally, we develop a signature verification app, embedding simpler and more efficient algorithms, to evaluate the performance and time consumption of the real system.

Lei Tang, Yuxun Fang, Qiuxia Wu, Wenxiong Kang, Junhong Zhao

Uyghur Off-line Signature Recognition Based on Modified Corner Curve Features

In this paper, an off-line signature recognition method based on modified corner curve features is proposed for Uyghur handwritten signatures. The signature images are preprocessed according to the nature of Uyghur signatures. Then corner curve features (CCF) and modified corner curve features (MCCF) are extracted with three different dimensionalities. Experiments were performed with a Euclidean distance classifier and a non-linear SVM classifier on 1000 signature samples from 50 different people, with varying numbers of training and testing samples, and a recognition rate as high as 98.9 % was achieved with MCCF-16. The experimental results indicate that modified corner curve features efficiently capture the writing style of Uyghur signatures.

Kurban Ubul, Ruxianguli Abudurexiti, Hornisa Mamat, Nurbiya Yadikar, Tuergen Yibulayin

Improved i-vector Speaker Verification Based on WCCN and ZT-norm

To improve system performance under high channel variability, an improved i-vector speaker verification algorithm is proposed in this paper. First, i-vectors are obtained from the GMM-UBM of the registered speakers. Then, weighted linear discriminant analysis is used for channel compensation and dimensionality reduction of the i-vectors, from which more discriminant vectors can be extracted. Next, WCCN and ZT-norm are combined to normalize the scores of the cosine distance score classifier in order to remove channel disturbance. Finally, a highly robust cosine distance score classifier is obtained to identify the target speaker. Experimental results demonstrate that the proposed i-vector system performs better.

Yujuan Xing, Ping Tan, Chengwen Zhang
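The cosine distance scoring and the z-norm half of score normalization can be sketched as below. This is a simplified illustration (ZT-norm additionally applies a t-norm using cohort scores of the test utterance), and the function names are assumptions:

```python
import numpy as np

def cosine_score(w_target, w_test):
    """Cosine distance score between a target-speaker i-vector and a
    test i-vector; higher means more likely the same speaker."""
    w_target = w_target / np.linalg.norm(w_target)
    w_test = w_test / np.linalg.norm(w_test)
    return float(w_target @ w_test)

def z_norm(score, impostor_scores):
    """Z-norm: standardize a raw score with impostor-score statistics
    of the target model (one half of ZT-norm)."""
    mu, sigma = np.mean(impostor_scores), np.std(impostor_scores)
    return (score - mu) / sigma
```

A verification decision compares the normalized score with a threshold tuned for the desired error trade-off.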

Gesture Recognition Benchmark Based on Mobile Phone

The mobile phone plays an important role in our daily life. This paper develops a gesture recognition benchmark based on mobile phone sensors. The phone's built-in micro gyroscope and accelerometer efficiently measure accelerations and angular velocities along the x-, y- and z-axes, which are used as the input data. We compute the energy of the input data to reduce the effect of variations in the phone's posture. A large database is collected, containing more than 1,000 samples of 8 gestures. Hidden Markov Models (HMM), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM) are tested on the benchmark. The experimental results indicate that these methods can effectively recognize the gestures. To promote research on this topic, the source code and database are made available to the public.

Chunyu Xie, Shangzhen Luan, Hainan Wang, Baochang Zhang
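The energy-based posture normalization followed by a nearest-neighbor decision (the simplest of the three tested classifiers) might look like the sketch below, with hypothetical names and an assumed (T, 3) per-axis sensor layout:

```python
import numpy as np

def energy_features(signal, n_bins=8):
    """Energy of a tri-axial sensor stream per time bin; using energy
    rather than raw samples reduces sensitivity to phone posture."""
    sig = np.asarray(signal, dtype=float)          # shape (T, 3)
    bins = np.array_split(sig ** 2, n_bins)        # squared samples per bin
    return np.array([b.sum() for b in bins])

def nearest_neighbor(query, templates, labels):
    """1-NN gesture classification over energy feature vectors."""
    dists = [np.linalg.norm(query - t) for t in templates]
    return labels[int(np.argmin(dists))]
```

The HMM and SVM baselines in the paper would consume the same energy features.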

Improved GLOH Approach for One-Shot Learning Human Gesture Recognition

A method is presented for one-shot learning human gesture recognition. The Shi-Tomasi corner detector and sparse optical flow are used to quickly detect and track robust key-points around motion patterns in scale space. An improved Gradient Location and Orientation Histogram (GLOH) feature descriptor is then applied to describe the robust key interest points. All features extracted from the training samples are clustered with the k-means algorithm to learn a visual codebook. Subsequently, simultaneous orthogonal matching pursuit is applied for descriptor coding, mapping each feature to a visual codeword. A k-NN classifier is used to recognize the gesture. The proposed approach has been evaluated on the ChaLearn gesture database.

Nabin Kumar Karn, Feng Jiang

A Sign Language Recognition System in Complex Background

In view of background complexity, hand shape similarity and the limitations of existing algorithms, we propose a new system for sign language recognition. To separate the gesture from complex backgrounds, we use an initial segmentation based on improved color clustering followed by re-segmentation with the graph cut method. Then the outline of the hand shape is detected by a CV model, the convexity defects are found, and the Hu moments and geometric features are calculated. Finally, an SVM is used for classification, which consists of a first classification on the number of defects and a second classification through multi-feature fusion. The average recognition rate over 26 kinds of signs is 91.18 % on our collection of images, which shows the effectiveness of the proposed algorithms.

Haifeng Sang, Hongjiao Wu

Enhanced Active Color Image for Gait Recognition

The Active Energy Image (AEI) is an efficient template for gait recognition, but it lacks temporal information. In this paper, we present a novel gait template named the Enhanced Active Color Image (EACI). The EACI extracts the difference between interval frames in each gait sequence, calculates the width of each difference image, maps the differences into RGB space with a ratio describing their relative position, and composes them into a single EACI. To prove the validity of the EACI, we conduct experiments on the USF HumanID database. The results show that the EACI better describes dynamic, static and temporal information. Compared with other published gait recognition approaches, we achieve competitive performance.

Yufei Shang, Yonghong Song, Yuanlin Zhang
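The underlying AEI that EACI enhances is built from differences of consecutive silhouette frames. A minimal sketch (the name and the averaging convention are assumptions; EACI's color mapping is not reproduced):

```python
import numpy as np

def active_energy_image(frames):
    """Active Energy Image: average absolute difference between
    consecutive silhouette frames, highlighting the dynamic regions."""
    frames = np.asarray(frames, dtype=float)       # (T, H, W), values in [0, 1]
    diffs = np.abs(np.diff(frames, axis=0))        # (T-1, H, W)
    return diffs.mean(axis=0)
```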

Gait Recognition with Adaptively Fused GEI Parts

Though the general gait energy image (GEI) preserves static and dynamic information, most GEI-based gait recognition approaches do not fully exploit it, which leads to inferior performance under appearance change, dynamic variation and viewpoint variation. Therefore, this paper proposes a novel silhouette-based method, called GEI parts (GEIs), to identify individuals. GEIs divides the GEI into parts, as the gray values of the GEI indicate different motions of the body parts. Furthermore, this paper uses a k-nearest neighbor classifier and develops a feature fusion method that combines the scores of the recognition results of each GEI part. The proposed method is tested on the publicly available CASIA-B dataset under different conditions, using (1) different GEI parts individually and (2) adaptively fused GEI parts. The experimental results show that with the proposed adaptive fusion of the dynamic-static information of walking, the fused GEIs outperform the state-of-the-art GEI.

Bei Sun, Wusheng Luo, Qin Lu, Liebo Du, Xing Zeng

Affective Computing


A Computational Other-Race-Effect Analysis for 3D Facial Expression Recognition

This paper investigates other-race-effects in automatic 3D facial expression recognition, giving a computational analysis of the recognition performance obtained for two races, namely white and East Asian. The 3D face information is represented by a local depth feature, and a feature learning process is then used to obtain race-sensitive features to simulate the other-race-effect. The features learned from the own race and the other race are then used for facial expression recognition. The proposed analysis is conducted on the BU-3DFE database, and the results show that the features learned from one race achieve better recognition performance on own-race faces. This reveals that the other-race-effect is significant in facial expression recognition, confirming the findings of psychological experiments.

Mingliang Xue, Xiaodong Duan, Juxiang Zhou, Cunrui Wang, Yuangang Wang, Zedong Li, Wanquan Liu

Discriminative Low-Rank Linear Regression (DLLR) for Facial Expression Recognition

In this paper we seek a robust low-rank linear regression algorithm for facial expression recognition. Motivated by low-rank matrix recovery, we assume that a matrix whose columns are data from the same pattern is approximately low-rank. The proposed algorithm first decomposes the training images of each class into the sum of a sparse error matrix and a low-rank matrix of the original images, subject to a class discrimination criterion. The accelerated proximal gradient algorithm is then used to minimize the sum of the ℓ1-norm and the nuclear norm, yielding a set of tight linear regression bases as the dictionary. Finally, we reconstruct the samples with the tight dictionary and classify the face images by linear regression according to the residual. Experimental results on facial expression databases show that the proposed method works well.

Jie Zhu, Hao Zheng, Hong Zhao, Wenming Zheng

Facial Expression Recognition Based on Multi-scale CNNs

This paper proposes a new method for facial expression recognition, called multi-scale CNNs, which consists of several sub-CNNs with input images at different scales. The sub-CNNs benefit from the variously scaled input images when learning optimized parameters. After training all sub-CNNs separately, we predict the facial expression of an image by extracting features from the last fully connected layer of the sub-CNNs at different scales and mapping the averaged features to the final classification probability. Multi-scale CNNs classify facial expressions more accurately than any single-scale sub-CNN. On the Facial Expression Recognition 2013 database, multi-scale CNNs achieve an accuracy of 71.80 % on the testing set, which is comparable to other state-of-the-art methods.

Shuai Zhou, Yanyan Liang, Jun Wan, Stan Z. Li

Facial Expression Recognition Based on Ensemble of Multiple CNNs

Automatic recognition of facial expression is an important task in many applications such as face recognition and animation, human-computer interfaces and online/remote education. It is still challenging due to variations of expression, background and position. In this paper, we propose a method for facial expression recognition based on an ensemble of multiple Convolutional Neural Networks (CNNs). First, the face region is extracted by a face detector from the pre-processed image. Second, five key points are detected for each image and the face images are aligned by the two eye center points. Third, the face image is cropped into local eye and mouth regions, and three CNNs are trained individually for the whole face, eye and mouth regions. Finally, classification is performed by an ensemble of the outputs of the three CNNs. Experiments were carried out on recognition of six facial expressions on the Extended Cohn-Kanade database (CK+). The results and comparison show that the proposed algorithm yields performance improvements for facial expression recognition.

Ruoxuan Cui, Minyi Liu, Manhua Liu
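The final ensemble step can be sketched as score-level averaging of the three region CNNs' softmax outputs (an illustrative fragment; the region logits are assumed to be precomputed by the face, eye and mouth networks):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_face, logits_eye, logits_mouth):
    """Score-level fusion: average the softmax outputs of the three
    region CNNs and take the most probable expression index."""
    probs = (softmax(logits_face) + softmax(logits_eye)
             + softmax(logits_mouth)) / 3
    return int(np.argmax(probs))
```

Averaging probabilities rather than hard labels lets a confident region network outweigh two uncertain ones.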

Real-World Facial Expression Recognition Using Metric Learning Method

Recognition of real-world human facial expressions has great value in human-computer interaction. Current facial expression recognition methods perform quite poorly in the real world compared with traditional laboratory conditions; a key factor is the lack of a reliable, large real-world facial expression database. In this paper, we introduce a large and reliable real-world facial expression database, together with a modified metric learning method based on the NCM classifier (PR-NCMML) that regresses the probability distribution of emotion labels. According to the experiments, the six-dimensional emotion probability vector derived by PR-NCMML is closer to human perception, which leads to better accuracy than state-of-the-art methods such as SVM-based algorithms, in both dominant emotion prediction and multi-label emotion recognition.

Zhiwen Liu, Shan Li, Weihong Deng
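The NCM classifier that PR-NCMML builds on assigns a sample to the class with the nearest mean. A plain sketch, without the learned metric (function names are ours):

```python
import numpy as np

def fit_class_means(X, y):
    """Per-class mean vectors for the Nearest Class Mean classifier."""
    classes = sorted(set(y))
    means = np.array([X[np.array(y) == c].mean(axis=0) for c in classes])
    return classes, means

def ncm_predict(x, classes, means):
    """Nearest Class Mean: label of the closest class mean."""
    d = np.linalg.norm(means - x, axis=1)
    return classes[int(np.argmin(d))]
```

PR-NCMML replaces the Euclidean distance here with a learned Mahalanobis-style metric and outputs label probabilities rather than a single class.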

Recognizing Compound Emotional Expression in Real-World Using Metric Learning Method

Understanding human facial expressions plays an important role in Human-Computer Interaction (HCI). Recent achievements in automatic facial expression recognition are mostly based on lab-controlled databases, whose facial images are a far cry from those in the real world. The contribution of this paper is threefold. First, we introduce a large real-world facial expression database (RAF-DB), with nearly 30,000 images collected from Flickr and labeled by 300 volunteers. Second, because human emotions are much more complex than the six basic emotions defined by Ekman et al., we re-categorize real-world facial expressions as compound emotional expressions, which explain human emotions better. Finally, a metric learning method, along with several state-of-the-art facial expression classification methods including SVM, is used to recognize our compound expression dataset; we found that the metric learning method performed better than the other classifiers.

Zhiwen Liu, Shan Li, Weihong Deng

Feature Extraction and Classification Theory


Category Guided Sparse Preserving Projection for Biometric Data Dimensionality Reduction

In biometric recognition tasks, dimensionality reduction is an important pre-processing step that influences the effectiveness and efficiency of the subsequent procedure. Many manifold learning algorithms preserve the optimal data structure by learning a projective map, and have achieved great success in biometric tasks such as face recognition. In this paper, we propose a new supervised manifold learning dimensionality reduction algorithm named Category Guided Sparse Preserving Projection (CG-SPP), which combines global category information with the merits of sparse representation and Locality Preserving Projection (LPP). Besides the sparse graph Laplacian, which preserves the intrinsic data structure of the samples, a category guided graph is introduced to help preserve the intrinsic data structure of the subjects. We apply the method to face recognition and gait recognition on several datasets, namely Yale, FERET, ORL, AR and OU-ISIR-A. The experimental results show its power in dimensionality reduction in comparison with state-of-the-art algorithms.

Qianying Huang, Yunsong Wu, Chenqiu Zhao, Xiaohong Zhang, Dan Yang

Sparse Nuclear Norm Two Dimensional Principal Component Analysis

Feature extraction is an important way to improve image recognition performance. Compared with most feature extraction methods, 2-D principal component analysis (2-DPCA) better preserves the structural information of images, since the small image matrices need not be transformed into high-dimensional vectors during calculation. To improve the robustness of 2-DPCA, nuclear norm-based 2-DPCA (N-2-DPCA) was proposed, using the nuclear norm as the matrix distance measure. However, 2-DPCA and N-2-DPCA lack the ability to extract and select sparse features. In this paper, we therefore extend N-2-DPCA to the sparse case, called SN-2-DPCA, for sparse subspace learning. To solve the model efficiently, an alternating iterative algorithm is also presented. The proposed SN-2-DPCA is compared with advanced 1-D and 2-D feature extraction methods on four well-known datasets. Experimental results indicate the competitive advantage of SN-2-DPCA.

Yudong Chen, Zhihui Lai, Ye Zhang
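The 2-DPCA core that SN-2-DPCA extends computes eigenvectors of an image covariance matrix built directly from image matrices, with no vectorization. A minimal sketch of plain 2-DPCA (not the sparse nuclear-norm variant; names are ours):

```python
import numpy as np

def two_d_pca(images, n_components):
    """2-DPCA: top eigenvectors of the image covariance matrix
    G = mean((A_i - mean)^T (A_i - mean)), computed on matrices."""
    A = np.asarray(images, dtype=float)            # (n, h, w)
    mean = A.mean(axis=0)
    G = sum((a - mean).T @ (a - mean) for a in A) / len(A)   # (w, w)
    vals, vecs = np.linalg.eigh(G)                 # ascending eigenvalues
    X = vecs[:, ::-1][:, :n_components]            # top n_components axes
    return mean, X

def project(image, X):
    """Feature matrix Y = A X keeps the 2-D structure of the image."""
    return image @ X
```

SN-2-DPCA replaces the Frobenius objective behind this eigen-decomposition with a nuclear-norm criterion plus a sparsity penalty.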

Unsupervised Subspace Learning via Analysis Dictionary Learning

Ubiquitous digital devices, sensors and social networks bring tremendous amounts of high-dimensional data. High dimensionality leads to high time complexity, a large storage burden and degraded generalization ability. Subspace learning is one of the most effective ways to eliminate the curse of dimensionality, projecting the data to a low-dimensional feature subspace. In this paper, we propose a novel unsupervised dimension reduction method via analysis dictionary learning. By learning an analysis dictionary, we project a sample to a low-dimensional space in which the feature dimension equals the number of atoms in the dictionary. The coding coefficient vector is used as the low-dimensional representation of the data because it reflects the distribution over the synthesis dictionary atoms. Manifold regularization is imposed on the low-dimensional representation to keep the locality of the original feature space. Experiments on four datasets show that the proposed unsupervised dimension reduction model outperforms state-of-the-art methods.

Ke Gao, Pengfei Zhu, Qinghua Hu, Changqing Zhang

Hybrid Manifold Regularized Non-negative Matrix Factorization for Data Representation

Non-negative Matrix Factorization (NMF) has received considerable attention due to its parts-based representation and the interpretability it affords. On the other hand, data usually reside on a submanifold of the ambient space, and one hopes to find a compact representation that captures the hidden semantic relationships between data items while revealing the intrinsic geometric structure. However, it is difficult to estimate the intrinsic manifold of the data space in a principled way. In this paper, we propose a novel algorithm for this purpose, called Hybrid Manifold Regularized Non-negative Matrix Factorization (HMNMF). In HMNMF, we develop a hybrid manifold regularization framework that approximates the intrinsic manifold by combining different initial guesses. Experiments on two real-world datasets validate the effectiveness of the new method.

Peng Luo, Jinye Peng, Ziyu Guan, Jianping Fan
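HMNMF builds on the basic NMF objective. For orientation, the standard Lee-Seung multiplicative updates, without the hybrid manifold regularizer, can be sketched as follows (an illustrative baseline, not the paper's algorithm):

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Basic NMF, V ≈ W H with W, H >= 0, via Lee-Seung multiplicative
    updates for the Frobenius objective. HMNMF adds a hybrid manifold
    regularizer on top of this core factorization."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # each update keeps the factors nonnegative and does not
        # increase the reconstruction error
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

A manifold regularizer modifies the H update with graph Laplacian terms; the hybrid framework in the paper combines several candidate graphs.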

A Novel Nonnegative Matrix Factorization Algorithm for Multi-manifold Learning

Nonnegative matrix factorization (NMF) is a promising approach to extracting sparse features of facial images. Facial images usually reside on multiple manifolds owing to variations in illumination, pose and facial expression, yet NMF lacks the ability to model this manifold structure. To improve the performance of NMF for multi-manifold learning, we propose a novel manifold-based NMF (Mani-NMF) algorithm that incorporates the multi-manifold structure. The proposed algorithm simultaneously minimizes the local scatter within the same manifold and maximizes the non-local scatter between different manifolds. We theoretically prove the convergence of the algorithm. Finally, experiments on face databases demonstrate the superiority of our method over several state-of-the-art algorithms.

Qian Wang, Wen-Sheng Chen, Binbin Pan, Yugao Li

Deep Convex NMF for Image Clustering

Conventional matrix factorization methods fail to exploit useful information in rather complex data due to their single-layer structure. In this paper, we propose a novel deep convex non-negative matrix factorization method (DCNMF) to improve the ability of feature representation. In addition, manifold and sparsity regularizers are imposed on each layer to discover the inherent structure of the data. For the resulting multi-layer objective, we develop an efficient iterative optimization algorithm that enhances stability via layer-by-layer factorization and fine-tuning. We evaluate the proposed method through clustering experiments on face and handwritten character benchmark datasets; the results show that it clearly outperforms conventional single-layer methods and achieves state-of-the-art performance.

Bin Qian, Xiaobo Shen, Zhenmin Tang, Tao Zhang

Unsupervised Feature Selection with Graph Regularized Nonnegative Self-representation

In this paper, we propose a novel algorithm called Graph Regularized Nonnegative Self Representation (GRNSR) for unsupervised feature selection. In our proposed GRNSR, each feature is first represented as a linear combination of its relevant features. Then, an affinity graph is constructed based on nonnegative least squares to capture the inherent local structure information of data. Finally, the l2,1-norm and nonnegative constraint are imposed on the representation coefficient matrix to achieve feature selection in batch mode. Moreover, we develop a simple yet efficient iterative update algorithm to solve GRNSR. Extensive experiments are conducted on three publicly available databases (Extended YaleB, CMU PIE and AR) to demonstrate the efficiency of the proposed algorithm. Experimental results show that GRNSR obtains better recognition performance than some other state-of-the-art approaches.

Yugen Yi, Wei Zhou, Yuanlong Cao, Qinghua Liu, Jianzhong Wang

Local Dual-Cross Ternary Pattern for Feature Representation

Extracting effective features is a fundamental issue in image representation and recognition. In this paper, we present a new feature representation method for image recognition based on Local Ternary Pattern and Dual-Cross Pattern, named Local Dual-Cross Ternary Pattern (LDCTP). LDCTP is a feature representation inspired by the sole textural structure of human faces. It is efficient and only quadruples the cost of computing Local Binary Pattern. Experiments show that LDCTP outperforms other descriptors.

Peng Zhou, Yucong Peng, Jifeng Shen, Baochang Zhang, Wankou Yang
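Since LDCTP builds on Local Binary Pattern, a short sketch of the basic 3×3 LBP encoding it extends may help (this is the generic textbook operator, not the paper's descriptor):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours of each
    interior pixel against the centre pixel and pack the bits into one byte."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]  # centre pixels (interior only)
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.int32) << bit)
    return codes
```

Histogramming these codes over image cells gives the standard LBP feature; LDCTP adds ternary thresholding and dual-cross sampling on top of this pattern.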

Anti-Spoofing and Privacy


Cross-Database Face Antispoofing with Robust Feature Representation

With the wide application of user authentication based on face recognition, face spoof attacks against face recognition systems are drawing increasing attention. While emerging approaches to face antispoofing have been reported in recent years, most of them are limited to non-realistic intra-database testing scenarios rather than cross-database testing scenarios. We propose a robust representation integrating deep texture features and a face movement cue, eye-blink, as a countermeasure against presentation attacks such as photos and replays. We learn deep texture features from both aligned facial images and whole frames, and use a frame-difference based approach for eye-blink detection. A face video clip is classified as live if it is categorized as live by both cues. Cross-database testing on public-domain face databases shows that the proposed approach significantly outperforms the state-of-the-art.

Keyurkumar Patel, Hu Han, Anil K. Jain

Deep Representations Based on Sparse Auto-Encoder Networks for Face Spoofing Detection

Automatic face recognition plays a significant role in biometric systems, but face spoofing has raised concerns at the same time, since a photo or video of an authorized user’s face can be used to deceive the system. In this paper, we propose a new hierarchical visual feature based on deep learning to discriminate spoof faces. First, the LBP descriptor is used to extract low-level features from face images; these low-level features are then encoded into high-level features via a deep learning architecture consisting of sparse auto-encoders (SAE). Finally, an SVM classifier is applied to detect face spoofing. We perform an experimental evaluation on two face liveness detection databases, the CASIA database and the NUAA database. The results indicate the robustness of the proposed approach for face spoofing detection.

Dakun Yang, Jianhuang Lai, Ling Mei

A Face Liveness Detection Scheme to Combining Static and Dynamic Features

Face liveness detection is an interesting research topic in face-based online authentication. Current face liveness detection algorithms utilize either static or dynamic features, but not both. In fact, dynamic and static features have different advantages in face liveness detection. In this paper, we discuss a scheme that combines dynamic and static features, exploiting the strengths of each. First, dynamic maps are obtained from the inter-frame motion in the video. Then, using a Convolutional Neural Network (CNN), dynamic and static features are extracted from the dynamic maps and the images, respectively. Next, the fully connected layers of the CNN that encode the dynamic and static features are concatenated to form the fused features. Finally, the fused features are used to train a two-class Support Vector Machine (SVM) classifier, which classifies images into two groups: images with real faces and images with fake faces. We conduct experiments to assess our algorithm, including classifying images from two public databases. Experimental results demonstrate that our algorithm outperforms current state-of-the-art face liveness detection algorithms.

Lifang Wu, Yaowen Xu, Xiao Xu, Wei Qi, Meng Jian
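The "dynamic map" idea, summarizing inter-frame motion into a single image before CNN feature extraction, can be sketched with plain frame differencing (our simplification; the paper's exact construction may differ):

```python
import numpy as np

def dynamic_map(frames):
    """Crude dynamic map: accumulate absolute inter-frame differences over a
    grayscale clip (T x H x W array), then normalise to [0, 1]."""
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0)).sum(axis=0)  # per-pixel motion energy
    rng = diffs.max() - diffs.min()
    return (diffs - diffs.min()) / rng if rng > 0 else diffs
```

A real face produces distributed, non-rigid motion in such a map, while a replayed photo tends to move rigidly or not at all.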

Liveness Detection Using Texture and 3D Structure Analysis

We propose a novel face liveness detection method that analyzes sparse structure information in 3D space, based on binocular vision, together with texture information based on LBP (Local Binary Patterns). Real faces have regular 3D structure, while fake faces are usually presented as flat or curved surfaces, unlike real faces. Besides, fake faces contain quality defects caused by printing technology, which can be detected using the LBP texture feature. Three liveness detectors, utilizing the 3D structure acquired from the binocular vision system, LBP, and the combination of the two, are evaluated on a database. Experimental results show that the proposed methods can efficiently distinguish photos from real faces.

Qin Lin, Weijun Li, Xin Ning, Xiaoli Dong, Peng Chen
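The 3D-structure cue can be illustrated with a plane-fitting test: landmarks reconstructed from a printed photo lie close to a single plane, while those of a real face do not. A hedged sketch (not the authors' detector; the threshold and landmark source are up to the system):

```python
import numpy as np

def planarity_residual(pts):
    """Fit z = a*x + b*y + c to 3D landmarks (N x 3 array) by least squares
    and return the RMS residual; a near-zero residual suggests a flat target
    such as a printed photo."""
    pts = np.asarray(pts, dtype=np.float64)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    res = pts[:, 2] - A @ coef
    return np.sqrt(np.mean(res ** 2))
```

Curved spoofs (a bent photo) defeat a pure plane test, which is one reason the paper fuses the structural cue with LBP texture.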

A Liveness Detection Method Based on Blood Volume Pulse Probing

In this paper, we propose a novel method for detecting live body samples in biometrics, based on the detection of a blood volume pulse. We use an auto-encoder to extract a signal from video captured of the skin to determine whether the sample is alive or not. The experimental results confirm that our method can accurately distinguish between live body samples and spoofed samples.

Jianzheng Liu, Jucheng Yang, Chao Wu, Yarui Chen
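A common way to probe a blood volume pulse from skin video is to track the mean green-channel intensity of a skin region and locate the dominant frequency in the plausible heart-rate band. The paper extracts the signal with an auto-encoder instead; this spectral version is only a baseline sketch:

```python
import numpy as np

def pulse_rate_bpm(green_means, fps):
    """Estimate heart rate (beats per minute) from a 1-D series of mean
    green-channel intensities: remove the DC component, take the FFT, and
    pick the strongest frequency in the 0.7-4 Hz physiological band."""
    x = np.asarray(green_means, dtype=np.float64)
    x = x - x.mean()                                 # detrend (DC removal)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)           # ~42-240 bpm
    return 60.0 * freqs[band][np.argmax(power[band])]
```

For liveness, the decision is typically whether any credible periodic component exists in that band at all, rather than the exact rate.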

2D Fake Fingerprint Detection Based on Improved CNN and Local Descriptors for Smart Phone

With the growing use of fingerprint authentication systems on smart phones, fake fingerprint detection has become increasingly important because fingerprints can easily be spoofed with a variety of readily available materials. The performance of existing fake fingerprint detection methods is significantly influenced by the fabrication materials used to generate spoofs during the training stage. In order to enhance robustness against spoof materials, this paper proposes a novel 2D fake fingerprint detection method, mainly for smart phones, that combines Convolutional Neural Networks (CNN) and two local descriptors (Local Binary Pattern and Local Phase Quantization). To optimize the CNN, global average pooling and batch normalization are integrated. Besides, a 2D printed fingerprint dataset collected with a capacitive fingerprint scanner is used for the first time to evaluate a fake fingerprint detection algorithm. Experimental results show that the proposed algorithm has high accuracy and strong robustness, and can meet the requirements of smart phones.

Yongliang Zhang, Bing Zhou, Hongtao Wu, Conglin Wen

Anonymized Distance Filter in Hamming Space

Search algorithms typically involve intensive distance computations and comparisons. In privacy-aware applications such as biometric identification, exposing the distance information may lead to compromise of sensitive data that have privacy and security implications. In this paper, we design an anonymized distance filter that can test and rank instances in a Hamming-ball search without knowing explicit distance values. We demonstrate the effectiveness of our method on both simulated and real data sets in the context of biometric identification.

Yi Wang, Jianwu Wan, Yiu-Ming Cheung, Pong C. Yuen
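A Hamming-ball membership test itself is cheap; the paper's contribution is performing it without exposing explicit distance values. For reference, a plain (non-anonymized) baseline over bit strings packed into Python integers looks like this:

```python
def in_hamming_ball(a, b, radius):
    """Test whether two equal-length bit strings (encoded as ints) are
    within a given Hamming distance of each other."""
    return bin(a ^ b).count("1") <= radius

def rank_by_distance(query, gallery):
    """Rank gallery templates by increasing Hamming distance to the query.
    This is exactly the distance leakage the anonymized filter avoids."""
    return sorted(range(len(gallery)),
                  key=lambda i: bin(query ^ gallery[i]).count("1"))
```

The anonymized filter replaces these explicit popcounts with tests on transformed values, so the ranking and ball test succeed while the raw distances stay hidden.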

Surveillance

Dictionary Co-Learning for Multiple-Shot Person Re-Identification

Person re-identification concerns identifying people across cameras using full-body appearance in a non-obtrusive way, for video surveillance and other commercial applications in which it is usually hard or even impossible to obtain other, more reliable biometric data. In this paper, we present a novel approach to multiple-shot person re-identification, where multiple images or video frames are available for each person, as is usually the case in real applications. Our approach collaboratively learns camera-specific dictionaries and utilizes the efficient l2-norm based collaborative representation for coding, which has shown great superiority in both effectiveness and efficiency over all related existing models.

Yang Wu, Dong Yang, Ru Zhou, Dong Wang
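The l2-norm collaborative representation used for coding has a closed-form solution, which is where its efficiency over l1-based sparse coding comes from. A minimal sketch (dictionary `D`, probe feature `y`; the names are ours):

```python
import numpy as np

def collaborative_code(D, y, lam=0.01):
    """l2-regularised collaborative representation:
    solve  min_a ||y - D a||_2^2 + lam ||a||_2^2  in closed form,
    a = (D^T D + lam I)^{-1} D^T y."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ y)
```

Because the factor `(D^T D + lam I)^{-1} D^T` depends only on the dictionary, it can be precomputed once per camera and reused for every probe.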

Weighted Local Metric Learning for Person Re-identification

Person re-identification aims to match individuals across non-overlapping camera networks. In this paper, we propose a weighted local metric learning (WLML) method for person re-identification. Motivated by the fact that local metric learning can handle data that varies locally, we break the pedestrian images down into several local sub-regions, for which different metric functions are learned. We then use a structured method to learn a weight for each metric function, and the final distance is calculated as a weighted sum of these metric functions. Our approach can also combine the local metric functions with global metric functions to exploit their complementary strengths. Moreover, it is possible to integrate multiple visual features to further improve the recognition rate. Experiments on two challenging datasets validate the effectiveness of our proposed method.

Xinqian Gu, Yongxin Ge

Robust Color Invariant Model for Person Re-Identification

Person re-identification in surveillance video is a challenging task because of wide variations in illumination, viewpoint, pose, and occlusion. In this paper, from the perspectives of feature representation and metric learning, we design a robust color invariant model for person re-identification. Firstly, we propose a novel feature representation called Color Invariant Feature (CIF), which is robust to illumination and viewpoint changes. Secondly, to learn a more discriminative metric for matching persons, the XQDA metric learning algorithm is improved by adding a clustering step before computing the metric; the new metric learning method is called Multiple Cross-view Quadratic Discriminant Analysis (MXQDA). Experiments on two challenging person re-identification datasets, VIPeR and CUHK1, show that our proposed approach outperforms the state of the art.

Yipeng Chen, Cairong Zhao, Xuekuan Wang, Can Gao

Fast Head Detection Algorithm via Regions of Interest

Traditional pedestrian detection systems usually scan the whole image with a sliding window to find pedestrians, which incurs high computational cost. To solve this problem, this paper proposes a fast head detection algorithm based on regions of interest. Motivated by the fact that the human head region usually has large gradient values and is not easily occluded, we build an initial location model for the regions of interest (ROIs) by analyzing the distribution of head gradients. After this, the K-means clustering algorithm is used to filter out false ROIs and obtain refined candidates. Finally, the HOG-SVM framework is adopted to classify the twice-filtered ROIs and locate the human heads. Experimental results on real video sequences show that the proposed method effectively improves the detection rate while maintaining detection accuracy.

Ling Li, Jiangtao Wang

Glasses Detection Using Convolutional Neural Networks

Glasses detection plays an important role in face recognition and in soft biometrics for person identification. However, automatic glasses detection is still a challenging problem in real application scenarios, because face variations, lighting conditions, and self-occlusion have a significant influence on its performance. Inspired by the success of Deep Convolutional Neural Networks (DCNN) in face recognition, object detection and image classification, we propose a glasses detection method based on DCNN. Specifically, we devise a Glasses Network (GNet) and pre-train it as a face identification network with a large number of face images. The pre-trained GNet is then fine-tuned as a glasses detection network using another set of facial images with and without glasses. Evaluation experiments have been conducted on two public databases, Multi-PIE and LFW. The results demonstrate the superior performance of the proposed method over competing methods.

Li Shao, Ronghang Zhu, Qijun Zhao

Face Occlusion Detection Using Cascaded Convolutional Neural Network

With the rise of crimes associated with ATMs, face occlusion detection has gained more and more attention because it enables ATM surveillance systems to enhance safety by pinpointing disguised customers and raising alarms when a suspicious customer is found. Inspired by the strong ability of deep learning to learn from data and to represent features efficiently, this paper proposes a cascaded Convolutional Neural Network (CNN) based face occlusion detection method. In the proposed method, three cascaded CNNs are used to detect the head, eye occlusion, and mouth occlusion. Experimental results show that the proposed method is very effective on two test datasets.

Yongliang Zhang, Yang Lu, Hongtao Wu, Conglin Wen, Congcong Ge

Multiple Pedestrian Tracking Based on Multi-layer Graph with Tracklet Segmentation and Merging

Multiple pedestrian tracking is regarded as challenging due to the difficulties of occlusion, abrupt motion and changes in appearance. In this paper, we propose a multi-layer graph based data association framework to address the occlusion problem. Our framework is hierarchical, with three association layers, each with its corresponding association method. In the first layer, we generate short tracklets and segment some of them into small pieces according to a segmentation condition. In the second layer, the segmented tracklets are merged into long tracklets using spatial-temporal information. In the last layer, tracklets in neighboring frame windows are merged to form object tracks, mainly by searching for the global maximum overlap ratio of the tracklets. Since appearance information is not available in various scenarios, we do not use any appearance features in our work. We evaluate our algorithm on extensive sequences covering two categories and demonstrate superior experimental results.

Wencheng Duan, Tao Yang, Jing Li, Yanning Zhang
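Merging tracklets by their maximum overlap ratio presumably relies on a box-overlap measure such as intersection-over-union; whether the paper uses exactly IoU is our assumption, but the standard helper looks like this:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2); usable as the overlap ratio when deciding whether
    two tracklets in neighbouring frame windows should merge."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

Applied to the last box of one tracklet and the first box of the next, a high IoU supports merging them into one track without any appearance features.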

DNA and Emerging Biometrics


An Adaptive Weighted Degree Kernel to Predict the Splice Site

The weighted degree kernel is a good means of predicting splice sites. Its prediction performance is affected by the positions of nucleotide bases in the DNA sequence. Based on this fact, we propose confusing positions in this article. Using the confusing positions and the key positions proposed in our previous work, we construct a weight array to obtain an adaptive weighted degree kernel, a kind of string kernel for splice site prediction. To demonstrate the efficiency and advantages of the method, we use a publicly available dataset to train support vector machines and compare the performance of the adaptive weighted degree kernel against the conventional weighted degree kernel. The results show that the adaptive weighted degree kernel performs better than the conventional one.

Tianqi Wang, Ke Yan, Yong Xu, Jinxing Liu
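For reference, the conventional weighted degree kernel counts position-aligned k-mer matches between two sequences, weighted per degree k; the paper's adaptive variant additionally weights positions via the learned weight array. A plain-Python sketch of the conventional kernel:

```python
def wd_kernel(x, y, d=3, weights=None):
    """Conventional weighted degree string kernel of degree d:
    sum over k = 1..d of beta_k times the number of positions where the
    k-mers of x and y agree, with beta_k = 2(d - k + 1) / (d(d + 1))."""
    L = min(len(x), len(y))
    if weights is None:
        weights = [2.0 * (d - k + 1) / (d * (d + 1)) for k in range(1, d + 1)]
    total = 0.0
    for k in range(1, d + 1):
        w = weights[k - 1]
        for i in range(L - k + 1):
            if x[i:i + k] == y[i:i + k]:  # position-aligned k-mer match
                total += w
    return total
```

The adaptive kernel would multiply each position's contribution by a per-position weight derived from the key and confusing positions before summing.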

The Prediction of Human Genes in DNA Based on a Generalized Hidden Markov Model

The Generalized Hidden Markov Model (GHMM) has proved to be an excellent general probabilistic model of the gene structure of human genomic sequences. It can simultaneously incorporate signal descriptions such as splice sites and content descriptions, for instance compositional features of exons and introns. Exploiting its flexibility and convincing probabilistic underpinnings, we integrate some further modifications of the submodels and implement a program for predicting human genes in DNA. The program has the capacity to predict multiple genes in a sequence, to deal with partial as well as complete genes, and to predict consistent sets of genes occurring on either or both DNA strands. More importantly, it also performs well on longer sequences containing an unknown number of genes. In the experiments, the results show that the proposed method achieves better prediction accuracy than some existing methods, and over 70% of exons are identified exactly.

Rui Guo, Ke Yan, Wei He, Jian Zhang

User Authentication Using Motion Sensor Data from Both Wearables and Smartphones

With the increasing popularity of wearable devices, it is common to use several smart devices, including smartphones, simultaneously. With embedded accelerometers and gyroscopes, the smart devices naturally constitute a multiple-sensor system that measures the activities of the user more comprehensively and accurately. This paper proposes a new approach to authentication using motion data collected from both wearables and smartphones. We propose a set of simple time-domain features to characterize motion data collected from daily activities such as walking, and train a one-class classifier to differentiate legitimate and illegitimate users. Experiments on data collected from 20 subjects demonstrate that the proposed multiple-sensor approach leads to clear performance improvements compared with traditional single-sensor approaches.

Jianmin Dong, Zhongmin Cai
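Simple time-domain features of the kind described are computed per window and per sensor axis; the exact feature set below is our guess, not the paper's list:

```python
import numpy as np

def time_domain_features(signal):
    """A few common time-domain descriptors for one windowed axis of
    accelerometer or gyroscope data (illustrative feature set)."""
    x = np.asarray(signal, dtype=np.float64)
    centered = x - x.mean()
    # sign changes of the mean-centred signal approximate oscillation rate
    zero_crossings = int(np.sum(np.signbit(centered[1:]) !=
                                np.signbit(centered[:-1])))
    return {
        "mean": x.mean(),
        "std": x.std(),
        "min": x.min(),
        "max": x.max(),
        "rms": np.sqrt(np.mean(x ** 2)),
        "zero_crossings": zero_crossings,
    }
```

Concatenating such per-axis dictionaries across all wearable and phone sensors yields the fused feature vector fed to the one-class classifier.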

Person Authentication Using Finger Snapping — A New Biometric Trait

This paper presents a new biometric trait, finger snapping, which can be applied to person authentication. We extract a set of features from finger snapping traces based on time and frequency domain analysis. A prototype is developed on Android smartphones to realize user authentication. We collect 6160 snapping traces from 22 subjects over 7 consecutive days and 324 traces from 54 volunteers across three weeks. Extensive experiments confirm the measurability, permanence, uniqueness, circumvention resistance, universality and acceptability of finger snapping for biometrics-based authentication. The system achieves a 6.1% average False Rejection Rate (FRR) and a 5.9% average False Acceptance Rate (FAR).

Yanni Yang, Feng Hong, Yongtuo Zhang, Zhongwen Guo
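The reported FRR and FAR come from thresholding match scores over genuine and impostor attempts; a minimal helper (assuming higher scores indicate genuine attempts, which is a convention of ours, not stated in the abstract):

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """False acceptance rate and false rejection rate at a score threshold.
    FRR: fraction of genuine attempts scoring below the threshold.
    FAR: fraction of impostor attempts scoring at or above it."""
    g = np.asarray(genuine_scores, dtype=np.float64)
    i = np.asarray(impostor_scores, dtype=np.float64)
    frr = float(np.mean(g < threshold))
    far = float(np.mean(i >= threshold))
    return far, frr
```

Sweeping the threshold and plotting FAR against FRR gives the usual operating curve from which figures like 6.1%/5.9% are read off.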

