Elsevier

Pattern Recognition

Volume 41, Issue 4, April 2008, Pages 1316-1328

Palmprint verification based on principal lines

https://doi.org/10.1016/j.patcog.2007.08.016

Abstract

In this paper, we propose a novel palmprint verification approach based on principal lines. In the feature extraction stage, the modified finite Radon transform (MFRAT) is proposed, which can extract principal lines effectively and efficiently even when the palmprint images contain many long and strong wrinkles. In the matching stage, a matching algorithm based on pixel-to-area comparison is devised to calculate the similarity between two palmprints; it shows good robustness to slight rotations and translations of palmprints. Verification results on the Hong Kong Polytechnic University Palmprint Database show that the discriminability of principal lines is strong.

Introduction

In a networked society, automatic personal verification is a crucial problem that needs to be solved properly, and biometrics is one of the most important and effective solutions in this field. Recently, palmprint based verification systems (PVS) have been receiving more attention from researchers [1]. Compared with the widely used fingerprint or iris based personal verification systems [2], [3], a PVS can also achieve satisfactory performance; for example, it can provide a reliable recognition rate with fast processing speed [1]. In particular, a PVS has several special advantages, such as rich texture features, stable line features, low-resolution imaging, low-cost capture devices, and easy self-positioning.

So far, many approaches have been proposed for palmprint verification/identification, which can be mainly divided into five categories: (1) texture based approaches [1], [4]; (2) appearance based approaches [5], [6], [7], [8]; (3) multiple features based approaches [9]; (4) orientation based approaches [10], [11]; and (5) line based approaches [12], [13], [14], [15], [16], [17], [18]. The main texture based approaches extract texture features using 2-D Gabor filters, and have shown satisfactory performance in terms of recognition rate and processing speed [1], [4]. Appearance based approaches have also been reported to achieve exciting results in the literature, but they may be sensitive to illumination, contrast, and position changes in real applications. In addition, it was reported in Ref. [9] that multiple features based approaches using information fusion technology could provide more reliable results. Recently, orientation codes have been deemed the most promising methods, since the orientation feature contains more discriminative power than other features and is more robust to changes of illumination.

Obviously, lines are the basic feature of a palmprint; thus, line based approaches also play an important role in the palmprint verification/identification field. Zhang et al. used overcomplete wavelet expansion and a directional context modeling technique to extract principal-line-like features [12]. Han et al. proposed using Sobel and morphological operations to extract line-like features from palmprint images obtained using a scanner [13]. Lin et al. applied a hierarchical decomposition mechanism, which includes directional and multi-resolution decompositions, to extract principal palmprint features from a region of interest (ROI) [14]. However, these methods cannot extract palm lines explicitly. Additionally, Wu et al. and Liu et al. proposed two different approaches based on palm lines, which will be discussed in a later section [15], [16], [17], [18].

It is well known that palm lines consist of wrinkles and principal lines, and principal lines can be treated as a separate feature to characterize a palm. There are therefore several reasons to study principal lines based approaches carefully. First, principal lines based approaches accord with human habit: when human beings compare two palmprints, they instinctively compare the principal lines. Second, principal lines are generally more stable than wrinkles, which are easily masked by bad illumination conditions, compression, and noise. Third, principal lines can act as an important component in multiple features based approaches. Fourth, in some special cases, for example when the police are searching for palmprints with similar principal lines, other features cannot replace principal lines. Finally, principal lines can be used in palmprint classification or fast retrieval schemes. However, principal lines based approaches have not been studied adequately so far. The main reason is that it is very difficult to extract principal lines from complex palmprint images, which contain many strong and long wrinkles. At the same time, many researchers have claimed that it is difficult to obtain a high recognition rate using only principal lines because of their similarity among different people [1]; in other words, they thought the discriminability of principal lines was limited. Nevertheless, they did not conduct experiments to verify this viewpoint.

In this paper, we propose a novel palmprint verification approach based on principal lines, and further discuss the discriminability of principal lines. Before presenting the proposed approach, we first give the definition of principal lines used throughout the paper. To illustrate this definition, three typical palmprint images are shown in Fig. 1. Generally speaking, most palmprints have three principal lines: the heart line, head line, and life line, which are the longest, strongest, and widest lines in the palmprint image and have stable starting points and positions (see Fig. 1(a)) [15]. In addition, some palmprints may have more or fewer principal lines due to their diversity and complexity (see Fig. 1(b)). In this paper, for a few palmprints, one or two of the longest and strongest wrinkles whose directions are similar to those of the three principal lines are also regarded as part of the principal lines (see Fig. 1(c)).

In the principal lines extraction stage, an important issue is which criteria distinguish principal lines from wrinkles. Through careful observation and analysis, we adopt two main differences between principal lines and wrinkles as the criteria. The first is line energy: principal lines are stronger than wrinkles. The second is direction: most wrinkles differ obviously in direction from principal lines. Since the Radon transform and its variations are powerful tools for detecting the directions and energies of lines in an image, they are used in our method. In the matching stage, we devise a matching algorithm based on pixel-to-area comparison to calculate the similarity between two palmprints, which shows good robustness to slight rotations and translations.
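To make the two criteria concrete, the fragment below sketches how line energy and direction can be read off by summing grey levels along a small set of directions through a pixel's neighbourhood. This is only a simplified illustration in the spirit of the Radon-type tools mentioned above, not the exact transform used in the paper; the function name and its parameters are our own assumptions. Since palm lines are dark valleys, the direction with the smallest sum indicates the line orientation, and a low minimum indicates high line energy.

```python
import numpy as np

def directional_energies(patch, n_dirs=6):
    """Sum grey levels along lines through the centre of a square patch
    for a set of equally spaced directions. Palm lines are dark valleys,
    so the direction with the SMALLEST sum indicates the line orientation."""
    p = patch.shape[0]            # assume a square (p x p) patch, p odd
    c = p // 2
    t = np.arange(p) - c          # signed offsets along the line
    energies = np.empty(n_dirs)
    for k in range(n_dirs):
        theta = k * np.pi / n_dirs
        # Nearest-neighbour samples on the line through the centre.
        xs = np.clip(np.rint(c + t * np.cos(theta)).astype(int), 0, p - 1)
        ys = np.clip(np.rint(c + t * np.sin(theta)).astype(int), 0, p - 1)
        energies[k] = patch[ys, xs].sum()
    return energies
```

For a bright patch crossed by a dark horizontal line, `energies.argmin()` returns the index of the horizontal direction, which is how both the direction criterion and the energy criterion can be evaluated at once.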

We stress that all palmprint images in this paper are obtained from the Hong Kong Polytechnic University Palmprint Database [19], which were captured by a CCD-based device described in Ref. [1]. This paper is organized as follows. Section 2 presents the method of principal lines extraction. Section 3 gives the palmprint matching method based on pixel-to-area comparison. Section 4 reports the experimental results, including principal lines extraction, verification, and computational time. Section 5 discusses the discriminability of principal lines. Section 6 concludes the paper.


The Radon transform and the finite Radon transform

The Radon transform in Euclidean space was first established by Johann Radon in 1917 [20]. The Radon transform of a 2-D function f(x,y) is defined as

R(r,\theta)[f(x,y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta(r - x\cos\theta - y\sin\theta)\,dx\,dy,

where r is the perpendicular distance of a line from the origin and \theta is the angle between the line and the y-axis. The Radon transform accentuates linear features by integrating image intensity along all possible lines in an image, thus it can be used to detect linear trends in the image.
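As an illustration, the continuous transform can be discretized by sampling image intensities along lines at a set of angles. The sketch below uses nearest-neighbour sampling over lines through the image centre; the function name and the interpolation choice are our own assumptions, not a description of the paper's MFRAT.

```python
import numpy as np

def discrete_radon(image, angles_deg):
    """Approximate the Radon transform of a 2-D array: for each angle,
    sum pixel values along parallel lines r = x*cos(theta) + y*sin(theta)."""
    h, w = image.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0       # image centre
    n = max(h, w)
    offsets = np.arange(n) - (n - 1) / 2.0      # signed distances from centre
    sinogram = np.zeros((n, len(angles_deg)))
    for k, ang in enumerate(angles_deg):
        theta = np.deg2rad(ang)
        cos_t, sin_t = np.cos(theta), np.sin(theta)
        for i, r in enumerate(offsets):
            # Nearest-neighbour samples along the line at distance r.
            xs = np.rint(cx + r * cos_t - offsets * sin_t).astype(int)
            ys = np.rint(cy + r * sin_t + offsets * cos_t).astype(int)
            ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
            sinogram[i, k] = image[ys[ok], xs[ok]].sum()
    return sinogram
```

A single bright vertical line produces one strong peak in the projection at 0 degrees and only weak, spread-out responses at 90 degrees, which is exactly the property that makes Radon-type transforms useful for detecting the direction and energy of lines.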

However,

Palmprint matching

The task of palmprint matching is to calculate the degree of similarity between a test image and a training image. In our method, the similarity is measured by a line matching technique. In the Palm-Code [1] and Fusion-Code [4] schemes, the normalized Hamming distance was used to calculate the degree of similarity between a test image and a training image, while the angular distance was adopted in the Competitive-Code scheme [11]. However, Hamming distance and angular distance based on
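The idea of pixel-to-area comparison can be sketched as follows: a line pixel of the test image counts as matched if any line pixel of the training image falls within a small neighbourhood around it, which tolerates slight translation and rotation. The function name, the square neighbourhood, and the radius parameter below are our own illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def pixel_to_area_similarity(test_lines, train_lines, radius=2):
    """Pixel-to-area matching sketch: fraction of test line pixels that
    fall within a (2*radius+1)-square neighbourhood of some training
    line pixel. Inputs are boolean line maps of equal shape."""
    h, w = train_lines.shape
    # Dilate the training line map so each line pixel covers an area.
    dilated = np.zeros((h, w), dtype=bool)
    ys, xs = np.nonzero(train_lines)
    for y, x in zip(ys, xs):
        dilated[max(0, y - radius):min(h, y + radius + 1),
                max(0, x - radius):min(w, x + radius + 1)] = True
    n_test = np.count_nonzero(test_lines)
    if n_test == 0:
        return 0.0
    matched = np.count_nonzero(np.logical_and(test_lines.astype(bool), dilated))
    return matched / n_test
```

Under this scheme, two identical line maps score 1.0, and a map translated by a pixel or two still scores close to 1.0, whereas Hamming-style pixel-to-pixel comparison would penalize every shifted pixel.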

Experimental results

The proposed approach was tested on the Hong Kong Polytechnic University (PolyU) Palmprint Database, which is available online [19]. The PolyU Palmprint Database contains 7752 grayscale images in BMP format, corresponding to 386 different palms. Around 20 samples were collected from each palm in two sessions, with around 10 samples captured in each session. The average interval between the

Discussions

In the verification experiments, the EER of our approach is even better than that of Palm-Code. We can therefore conclude that the discriminability of principal lines is strong. In the past, many researchers claimed that the discriminability of principal lines was limited due to their similarity among different people; our results show that this conclusion is not right.

In fact, the similarities of principal lines among different people include structural similarity and positional similarity. For example, the

Conclusions

In this paper, we have proposed a novel palmprint verification approach based on principal lines, and analyzed the discriminability of principal lines. The theoretical analyses and experimental results show that the proposed MFRAT can extract principal lines from complex palmprint images effectively and reliably, and that pixel-to-area comparison is robust to slight rotations and translations. From the verification results, it can be concluded that the discriminability of principal lines is strong.

Acknowledgments

The authors would like to express their sincere thanks to the Biometric Research Center at the Hong Kong Polytechnic University for providing the PolyU Palmprint Database. They would also like to thank Dr. Zhenan Sun from the Institute of Automation, CAS, China, and Dr. Li Liu from the Hong Kong Polytechnic University for their kind help. The authors are also most grateful for the constructive advice and comments from the anonymous reviewers.

This work was supported by the grants of the National Science


References (25)

  • L. Ma et al.

    Personal identification based on iris texture analysis

    IEEE Trans. Pattern Anal. Mach. Intell.

    (2003)
  • S. Ribaric et al.

    A biometric identification system based on eigenpalm and eigenfinger features

    IEEE Trans. Pattern Anal. Mach. Intell.

    (2005)

About the Author—DE-SHUANG HUANG received the B.Sc. degree in electronic engineering from the Institute of Electronic Engineering, Hefei, China, in 1986, the M.Sc. degree in electronic engineering from the National Defense University of Science and Technology, Changsha, China, in 1989, and the Ph.D. degree in electronic engineering from Xidian University, Xi'an, China, in 1993. From 1993 to 1997, he was a Postdoctoral Student at the Beijing Institute of Technology, Beijing, China, and the National Key Laboratory of Pattern Recognition, Chinese Academy of Sciences (CAS), Beijing. In 2000, he became a professor and joined the Institute of Intelligent Machines, CAS, as a member of the Hundred Talents Program of CAS. He has published over 190 papers and, in 1996, published a book entitled Systematic Theory of Neural Networks for Pattern Recognition. His research interests include pattern recognition, machine learning, bioinformatics, and image processing.

About the Author—WEI JIA received the B.Sc. degree in informatics from Central China Normal University, Wuhan, China, in 1998, and the M.Sc. degree in computer science from Hefei University of Technology, Hefei, China, in 2004. He is currently a Ph.D. student in the Department of Automation at the University of Science and Technology of China. His research interests include palmprint recognition, pattern recognition, and image processing.

About the Author—DAVID ZHANG graduated in computer science from Peking University. He received his M.Sc. in computer science in 1982 and his Ph.D. in 1985 from the Harbin Institute of Technology (HIT). From 1986 to 1988 he was a Postdoctoral Fellow at Tsinghua University and then an Associate Professor at the Academia Sinica, Beijing. In 1994 he received his second Ph.D., in electrical and computer engineering, from the University of Waterloo, Ontario, Canada. Currently, he is a Chair Professor at the Hong Kong Polytechnic University, where he is the Founding Director of the Biometrics Technology Centre (UGC/CRC) supported by the Hong Kong SAR Government. He also serves as Adjunct Professor at Tsinghua University, Shanghai Jiao Tong University, Beihang University, Harbin Institute of Technology, and the University of Waterloo. He is the Founder and Editor-in-Chief of the International Journal of Image and Graphics (IJIG); Book Editor of the Springer International Series on Biometrics (KISB); Organizer of the International Conference on Biometrics Authentication (ICBA); Associate Editor of more than 10 international journals, including IEEE Trans. on SMC-A/SMC-C and Pattern Recognition; Technical Committee Chair of IEEE CIS; and the author of more than 10 books and 160 journal papers. Professor Zhang is a Croucher Senior Research Fellow, Distinguished Speaker of the IEEE Computer Society, and a Fellow of the International Association for Pattern Recognition (IAPR).
