Human face profile recognition using attributed string
Introduction
Face profile matching is an important aspect of face recognition. A face profile provides complementary structural information about the face that is not visible in the frontal view. Combining the matching results of frontal and profile faces can reduce the false acceptance rate. In addition, such a system is harder to defeat, because it is difficult to fool profile identification with a mask. Profile analysis has also been used to assess profile changes due to surgical correction [1] and to create three-dimensional (3D) facial models [2].
Previous works on facial profiles [3], [4], [5] extracted fiducial marks from the profile by heuristic rules, and a set of features was computed from the positions of these fiducials. The distance between the feature vector of a test profile and that of each model was calculated, and the model in the database with the minimum distance was taken as the match. Kaufman et al. [6] reported a face profile recognition system using profile silhouettes. A set of normalized autocorrelations expressed in polar coordinates was used as the feature vector, and classification was based on a distance-weighted k-nearest neighbor rule; the best recognition performance reached 90%. Harmon et al. [3] manually drew the outlines from profile photos of 256 males. Nine fiducial points, i.e. the forehead, bridge, nose tip, nose bottom, upper lip, mouth, lower lip, chin and throat, were selected, and a set of 11 features was derived from them. After aligning the two profiles to be matched by two selected fiducial marks, matching was performed by measuring the Euclidean distance between the feature vectors derived from the outlines. They extended this work in Ref. [7] by reducing the 11 features to 10, because the nose protrusion was found to be highly correlated with two other features. A set-partitioning technique was used to reduce the number of candidates entering the Euclidean distance measure, and decreased computation time was reported. In a subsequent study [4], they defined 17 fiducial points and used thresholding windows to prune the search space; a 96% recognition rate was reported with or without the pruning technique. Wu et al. [5] developed a face profile recognition procedure based on 24 fiducial points. The differences from the work of Harmon et al. are as follows. First, the outline curves were obtained automatically instead of by an artist's drawing.
Then, a B-spline was used to extract turning points on the outline curve, from which six points of interest and 24 features were derived. A database of 18 oriental faces was used to test the performance of their approach. The stored features were obtained in a training process that used three profiles per person; of the 18 test images, 17 were reported correctly recognized. Yu et al. [8], [9] proposed a rule-based fiducial mark extraction technique in which location and area constraints on the nose, lips and chin were applied according to prior knowledge of human profile shape. Aibara et al. [10] proposed a method to recognize human face profiles based on the P-Fourier descriptor (PFD), which is invariant to parallel translation and scale. They preprocessed the profile images by smoothing, edge detection, binarization, thinning and outline extraction to obtain outline curves consisting of 55 pixels upward and 90 pixels downward from the nose tip. A characteristic vector composed of 31 Fourier coefficients from the low-frequency range was used, and a 93.1% recognition rate was reported for the 130 subjects in their experiment.
Most profile recognition methods depend on the correct detection of fiducial points. Unfortunately, features such as a concave nose, protruding lips or a flat chin make the detection of such points difficult and unreliable. The human face profile is a highly structured geometric curve. From the viewpoint of representation, the set of fiducial points is a “sparse” representation of the underlying structure, while the outline curve is a “dense” but faithful representation of the shape. A high-level curve matching approach is therefore more appropriate and robust than point matching methods. A novel syntactic technique using attributed strings is proposed here to recognize a chain of profile line segments rather than a set of inconsistent fiducial points. It favors curve matching by suppressing the edit operations of “insert” and “delete”; the major operations are the “merge” and “change” of string primitives. A quadratic penalty function is proposed to prohibit large angle changes and overmerging. This gives the string matching method strong discriminative power for similar-shape classification and is found to distinguish one face from another more accurately.
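As a rough illustration of the merge-dominant idea, the two operation costs can be sketched as below. The function names and the exact form of the quadratic penalty are illustrative assumptions, not the paper's precise formulation: a “change” compares two attributed segments (angle, length) with a quadratic penalty on the angle difference, and a “merge” over a run of consecutive segments is penalized quadratically in the total turning angle, which discourages overmerging.

```python
import math

# Illustrative sketch (not the authors' exact formulation): cost of
# changing one attributed line segment (angle_deg, length) into another,
# with a quadratic penalty on the angle difference so that large
# rotations are strongly discouraged.
def change_cost(seg_a, seg_b, w=1.0):
    da = abs(seg_a[0] - seg_b[0]) % 360.0
    da = min(da, 360.0 - da)          # smallest absolute angle difference
    return w * da ** 2 + abs(seg_a[1] - seg_b[1])

# Merging consecutive segments replaces them with the single segment
# joining the first start point to the last end point; a quadratic
# penalty on the accumulated turning angle discourages overmerging.
def merge_cost(segments, w=1.0):
    turning = sum(abs(s1[0] - s0[0])
                  for s0, s1 in zip(segments, segments[1:]))
    return w * turning ** 2
```

With such costs, two nearly identical profiles incur almost no penalty, while a merge that straightens out a genuinely curved stretch of the outline becomes expensive.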
In the following, a brief overview of string matching, together with its weakness on similar-shape matching, is given in Section 2. In Section 3, the rationale and principle of the merge-dominant string matching approach are described in detail, and an improved line attribute representation is adopted. Very encouraging experimental results are reported in Section 4. Finally, the paper is concluded in Section 5.
Section snippets
String matching
The structural and syntactic method is a high-level approach that finds a symbolic, non-numeric description (e.g. a string) of patterns [11], [12]. It has been applied to shape recognition, character recognition and speech processing. To the best of our knowledge, there has been no report on face recognition using string matching; the most closely related work applies strings to the processing of two-dimensional (2D) shapes, trademarks and logos. With this approach, the symbolic representation of an input sample
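For background, classical string matching computes an edit distance by dynamic programming: the dissimilarity of two symbol strings is the minimum total cost of the insert, delete and change operations that turn one string into the other. A minimal sketch with unit costs:

```python
# Classical string edit (Levenshtein) distance, the basis of syntactic
# shape matching.  d[i][j] holds the cheapest way to turn the first i
# symbols of `a` into the first j symbols of `b`.
def edit_distance(a, b, ins=1, dele=1, change=1):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * dele                     # delete all of a[:i]
    for j in range(1, n + 1):
        d[0][j] = j * ins                      # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else change
            d[i][j] = min(d[i - 1][j] + dele,      # delete a[i-1]
                          d[i][j - 1] + ins,       # insert b[j-1]
                          d[i - 1][j - 1] + cost)  # change / match
    return d[m][n]
```

Attributed string matching follows the same recurrence but replaces the symbol-equality test with attribute-dependent costs, and, in the approach proposed here, extends the operation set with segment merging.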
Merge dominant string match
The proposed string representation is based on the line segments generated by polygonal line fitting [21] on face profile outlines. Line segments are 2D entities with attributes of orientation and length, together with structural information on their relative locations. The shape of an object can then be described as an ordered sequence of line segments in an appropriate string representation.
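Concretely, once a polygonal approximation of the outline is available as an ordered list of vertices, each pair of consecutive vertices yields one string primitive carrying the attributes named above. A minimal sketch (the function name is illustrative, not from the paper):

```python
import math

# Build an attributed string from the ordered vertices of a polygonal
# approximation of a profile outline.  Each primitive is a pair
# (orientation in degrees, length) of one line segment.
def to_attributed_string(vertices):
    primitives = []
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:]):
        length = math.hypot(x1 - x0, y1 - y0)
        angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
        primitives.append((angle, length))
    return primitives
```

The relative-location structure is implicit in the ordering: primitive i ends where primitive i+1 begins, so the string preserves the geometry of the outline as a whole.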
Experimental results
A face database of 30 persons, with two profile images per person, from the University of Bern [25] was used to test the capability of the proposed approach. Each image is 512×342 pixels with very high contrast; some examples are shown in Fig. 4. The two profile images of each person served as model and input, respectively, giving 60 matching experiments in total when the roles of model and input are interchanged. The nose tip and chin point of each profile face were detected
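The evaluation protocol described above amounts to nearest-neighbor classification over matching scores. A minimal sketch of that loop, where `distance` stands in for the attributed string matching score (the function and dictionary layout are illustrative assumptions):

```python
# Hypothetical experimental loop: each person contributes one model
# profile and one input profile; an input counts as correctly recognized
# when its nearest model (smallest matching distance) belongs to the
# same person.
def recognition_rate(models, inputs, distance):
    correct = 0
    for person, probe in inputs.items():
        best = min(models, key=lambda m: distance(models[m], probe))
        correct += (best == person)
    return correct / len(inputs)
```

Swapping the roles of the two images per person and re-running the loop yields the second half of the 60 experiments.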
Conclusion
We have proposed, in this paper, a new attributed string matching method for human face profile recognition. The technique relies mainly on the merge and change operations on strings to tackle the inconsistency problem of feature point detection; hence, the proposed approach suppresses the string edit operations of insert and delete. A quadratic penalty function is used to prohibit large angle changes and overmerging. This makes the proposed technique robust to errors from previous low-level
References (25)
- et al., The analysis of facial profiles using scale space techniques, Pattern Recognition (1993)
- et al., Machine identification of human faces, Pattern Recognition (1981)
- et al., Identification of human face profiles by computer, Pattern Recognition (1978)
- Polygonal shape recognition using string matching techniques, Pattern Recognition (1991)
- et al., Applications of approximate string matching to 2D shape recognition, Pattern Recognition (1993)
- et al., Parallel (PRAM EREW) algorithms for contour-based 2D shape recognition, Pattern Recognition (1991)
- et al., Trademark shapes description by string-matching techniques, Pattern Recognition (1994)
- et al., Dynamic two-strip algorithm in curve fitting, Pattern Recognition (1990)
- Comparing face images using the modified Hausdorff distance, Pattern Recognition (1998)
- T.A. Akimoto, R. Wallace, Y. Suenaga, Feature extraction from front and side views of faces for 3D model creation, ...
- Automatic recognition of human face profiles, Comput. Graphics Image Process.
- Human face profile recognition by computer, Pattern Recognition
About the Author—YONGSHENG GAO received the B.Sc and M.Sc degrees in Electronic Engineering from Zhejiang University, China, in 1985 and 1988, respectively, and his Ph.D. degree in Computer Engineering from Nanyang Technological University, Singapore. Currently, he is an assistant professor with Nanyang Technological University. His research interests include computer vision, pattern recognition and face recognition.
About the Author—MAYLOR K.H. LEUNG received the B.Sc degree in Physics from the National Taiwan University in 1979, and the B.Sc, M.Sc and Ph.D. degrees in Computer Science from the University of Saskatchewan, Canada, in 1983, 1985 and 1992, respectively. Currently, Dr. Leung is an associate professor with Nanyang Technological University, Singapore. His research interests include structural pattern processing, face recognition, motion analysis and string processing.