Matching of dental X-ray images for human identification☆
Introduction
The objective of the research reported here is to automate the process of forensic dentistry. The main purpose of forensic dentistry is to identify deceased individuals, for whom other cues of biometric identification (e.g., fingerprint, face [1]) may not be available. In forensic dentistry, the postmortem (PM) dental record is compared against antemortem (AM) records pertaining to some presumed identity. A manual comparison between the AM and PM records is based on a systematic dental chart prepared by forensic experts [2], [3]. In this chart, a number of distinctive features are noted for each individual tooth. These features include properties of the teeth (e.g., tooth present/not present, crown and root morphology, pathology and dental restorations), periodontal tissue features, and anatomical features. Depending on the number of matches, the forensic expert rejects or confirms the tentative identity. There are several advantages to automating this procedure. First, an automatic process will be able to compare the PM records against AM records pertaining to multiple identities in order to determine the closest match. Second, while a manual (non-automated) system is useful for verification on a small data set, an automatic (or semi-automatic) system can perform identification on a large database.
For the automated identification, the dental records are usually available as radiographs (Fig. 1). An automated dental identification system consists of two main stages: feature extraction and feature matching [4]. During feature extraction, salient information about the teeth, such as contours, artificial prostheses, and the number of cuspids, is extracted from the radiographs. In this paper, the extracted features are the tooth contours, because they remain more stable over time than other features of the teeth. A logical diagram of the proposed dental identification system is shown in Fig. 2. The feature extraction stage consists of the radiograph segmentation and the contour extraction. In an earlier paper [4], the authors presented a contour extraction method based on edge detection. However, due to the substantial noise that is usually present in radiograph images, the edge-detection-based method does not perform consistently across all the images in our database. Also, the manual selection of the region of interest (ROI) in Ref. [4] is time consuming and is counter to our goal of automated identification. In this paper, we have developed a segmentation algorithm for the detection of the ROI. Further, a probabilistic method is introduced to automatically find the contours of teeth. However, a fully automatic feature extraction method is still not capable of handling the large variance in image quality and the appearance of teeth (see Fig. 3). Thus, human intervention is needed to initialize certain algorithmic parameters and correct errors in some problematic images. In the feature matching stage, the extracted contours from the PM radiograph (query image) are compared against those extracted from AM records that are stored in a database. A matching score is computed to measure the similarity between the two given radiographs. A candidate list of potential matches is then generated for human experts to make further decisions.
A diagram of the processing flow is shown in Fig. 4. In the following sections, we provide the details of the radiograph segmentation (Section 2), contour extraction (Section 3), and matching (Section 4) stages of the proposed dental identification system. Experimental results on a small dental radiograph image database are also presented.
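The two-stage flow just described can be summarized in a hypothetical skeleton. Here `segment`, `extract`, and `match` are placeholders for the stages detailed in Sections 2–4, not the authors' code; the candidate list is ordered by increasing matching distance.

```python
def identify(pm_radiograph, am_database, segment, extract, match, top_k=5):
    """Sketch of the two-stage identification pipeline: feature extraction
    (segmentation + contour extraction) on the query, then matching against
    every antemortem (AM) record. Returns a ranked candidate list."""
    rois = segment(pm_radiograph)                  # one region of interest per tooth
    query_contours = [extract(roi) for roi in rois]
    scores = {identity: match(query_contours, contours)
              for identity, contours in am_database.items()}
    return sorted(scores, key=scores.get)[:top_k]  # smallest distance first

# Toy run with stand-in stages: the "radiograph" is a number and the
# matching distance is a simple absolute difference.
db = {"record_A": 3.0, "record_B": 10.0}
result = identify(4.0, db,
                  segment=lambda img: [img],
                  extract=lambda roi: roi,
                  match=lambda q, am: abs(q[0] - am))
print(result)   # -> ['record_A', 'record_B']
```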
Section snippets
Radiograph segmentation
The goal of radiograph segmentation is to segment the radiograph into blocks such that each block has a tooth in it. This helps us define the ROI associated with every tooth. For simplicity, we assume there is one row each of maxillary (upper jaw) and mandibular (lower jaw) teeth in the image—this assumption is generally true except in the case of small children who are at the age of teeth formation (see Fig. 5). After the rows of upper and lower teeth are separated, each tooth needs to be
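One plausible way to separate the rows of upper and lower teeth (an illustrative sketch, not necessarily the authors' algorithm) is a horizontal integral projection: the gap between the jaws images darker than the teeth, so the row with the minimum summed intensity is a reasonable dividing line.

```python
import numpy as np

def find_jaw_gap_row(radiograph: np.ndarray) -> int:
    """Locate the dark horizontal valley separating upper and lower teeth.

    radiograph: 2-D array of grayscale intensities (teeth appear bright).
    Returns the row with the lowest summed intensity, searched only in the
    central band of the image to avoid dark borders.
    """
    h = radiograph.shape[0]
    row_sums = radiograph.sum(axis=1)
    band = slice(h // 4, 3 * h // 4)      # restrict search to the central band
    return int(np.argmin(row_sums[band])) + h // 4

# Toy example: bright upper/lower teeth with a dark gap at row 5.
img = np.full((10, 8), 200.0)
img[5, :] = 10.0                          # the gap between the jaws
print(find_jaw_gap_row(img))              # -> 5
```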
Contour extraction
A tooth has two main parts: the crown, which is above the gumline, and the root, which sits in the bone below the gum (Fig. 11). Because the image of the root overlaps that of the jaw bone, the differential in tissue density is lower there, and the root is not as visible as the crown in the radiographs [6], [7]. Thus the crown is identified first, followed by the root contour extraction.
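The probabilistic model for tooth versus background pixels is not detailed in this snippet; as an illustrative stand-in, the sketch below classifies a pixel from its intensity with Bayes' rule, assuming Gaussian intensity models for the two classes (all parameter values are made up for illustration).

```python
import numpy as np

def tooth_pixel_posterior(intensity, mu_tooth=190.0, sigma_tooth=25.0,
                          mu_bg=80.0, sigma_bg=30.0, prior_tooth=0.4):
    """Posterior probability that a pixel belongs to a tooth, assuming
    Gaussian intensity distributions for tooth and background pixels."""
    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    p_tooth = gauss(intensity, mu_tooth, sigma_tooth) * prior_tooth
    p_bg = gauss(intensity, mu_bg, sigma_bg) * (1.0 - prior_tooth)
    return p_tooth / (p_tooth + p_bg)

print(tooth_pixel_posterior(200.0) > 0.5)   # bright pixel -> likely tooth: True
print(tooth_pixel_posterior(60.0) < 0.5)    # dark pixel -> likely background: True
```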
Shape matching
The contours extracted from the query image must be matched to the contours extracted from the database images. Because the PM images are usually captured several years after the AM images are acquired, the shapes of the teeth could have changed due to teeth extraction or the growth of teeth. If we assume there are no such changes, the AM and PM radiographs differ only in terms of scaling, rotation, translation and the change of the imaging angle. Because of the criteria of the intraoral
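Under the assumption that AM and PM contours differ only by scaling, rotation, and translation, a matching score can be sketched by normalizing both contours and recovering the rotation with an orthogonal Procrustes solution. This is an illustrative alignment over corresponding points, not necessarily the transformation estimation used in the paper.

```python
import numpy as np

def contour_distance(query: np.ndarray, db: np.ndarray) -> float:
    """Matching score between two contours given as N x 2 arrays of
    corresponding points: remove translation and scale, solve for the
    optimal rotation via SVD, and return the mean residual distance."""
    q = query - query.mean(axis=0)          # remove translation
    d = db - db.mean(axis=0)
    q = q / np.linalg.norm(q)               # remove scale
    d = d / np.linalg.norm(d)
    u, _, vt = np.linalg.svd(q.T @ d)       # orthogonal Procrustes: q -> d
    r = u @ vt
    return float(np.mean(np.linalg.norm(q @ r - d, axis=1)))

# A contour matched against a scaled, rotated, translated copy of itself
# should yield a near-zero distance.
c = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
theta = 0.4
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
moved = 1.7 * c @ rot.T + np.array([5.0, -3.0])
print(contour_distance(c, moved) < 1e-9)    # -> True
```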
Experiments
The proposed dental X-ray-based identification method has been applied to 38 query images for retrieval from a database, which contains 130 AM images. Fig. 15 shows some of the AM images in the database. In each query image, we divide the teeth into two groups: the teeth in the upper jaw and the teeth in the lower jaw. The teeth in the same group will not change their relative positions, while teeth from different groups will probably change their relative positions because of the opening and
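One way the per-jaw grouping could feed into a single query-to-subject score (an assumption for illustration, not necessarily the paper's exact scheme) is to match the two tooth groups independently and then average the per-tooth contour distances.

```python
import numpy as np

def query_to_subject_distance(upper_dists, lower_dists):
    """Score a query against one AM subject: the upper- and lower-jaw tooth
    groups are matched independently (their relative position changes when
    the mouth opens or closes), then the per-tooth distances are averaged."""
    return float(np.mean(upper_dists + lower_dists))  # list concatenation

upper = [1.0, 3.0]   # per-tooth contour distances, upper jaw (illustrative)
lower = [2.0, 2.0]   # per-tooth contour distances, lower jaw (illustrative)
print(query_to_subject_distance(upper, lower))        # -> 2.0
```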
Conclusions and future work
A new semi-automatic method of human identification based on dental radiographs is proposed. This method involves three stages: radiograph segmentation, tooth feature extraction, and tooth feature matching. The feature utilized here is the contours of the teeth. A probabilistic model is used to describe the distribution of tooth pixels and background pixels in the image. After the tooth contours are extracted, a transformation is used to align the contours to correct the imaging geometric
References (14)
- et al., Oral and maxillofacial radiology, Oral Surg. Oral Med. Oral Pathol. (1999)
- et al., Estimation of skeletal bone mineral density by means of the trabecular pattern of the alveolar bone, its interdental thickness, and the bone mass of the mandible, Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endod. (2001)
- et al., Biometrics—Personal Identification in Networked Society (1999)
- American Board of Forensic Odontology, Body identification guidelines, J. Am. Dent. Assoc. 125 (1994)...
- et al., A look at forensic dentistry—Part 1: The role of teeth in the determination of human identity, Br. Dent. J. (2001)
- A.K. Jain, H. Chen, S. Minut, Dental biometrics: human identification using dental radiographs, in: Proceedings of the...
- B(asic)-spline basics
About the Author—ANIL JAIN is a University Distinguished Professor in the Departments of Computer Science and Engineering and Electrical and Computer Engineering at Michigan State University. He was the Department Chair from 1995 to 1999. His research interests include statistical pattern recognition, exploratory pattern analysis, texture analysis, document image analysis and biometric authentication. Several of his papers have been reprinted in edited volumes on image processing and pattern recognition. He received the best paper awards in 1987 and 1991, and received certificates for outstanding contributions in 1976, 1979, 1992, 1997 and 1998 from the Pattern Recognition Society. He also received the 1996 IEEE Transactions on Neural Networks Outstanding Paper Award. He is a fellow of the IEEE, ACM, and International Association of Pattern Recognition (IAPR). He has received a Fulbright Research Award, a Guggenheim fellowship and the Alexander von Humboldt Research Award. He delivered the 2002 Pierre Devijver lecture sponsored by the IAPR. He holds six patents in the area of fingerprint matching. He is the author of the following books: Handbook of Fingerprint Recognition, Springer 2003, BIOMETRICS: Personal Identification in Networked Society, Kluwer 1999, 3D Object Recognition Systems, Elsevier 1993, Markov Random Fields: Theory and Applications, Academic Press 1993, Neural Networks and Statistical Pattern Recognition, North-Holland 1991, Analysis and Interpretation of Range Images, Springer-Verlag 1990, Algorithms For Clustering Data, Prentice-Hall 1988, and Real-Time Object Measurement and Classification, Springer-Verlag 1988.
About the Author—HONG CHEN received his B.Sc. and M.Sc. degrees in Computer Science from Fudan University, Shanghai, P.R. China. He is currently working towards his Ph.D. degree in the Department of Computer Science and Engineering, Michigan State University, East Lansing, MI, USA. His research interests are pattern recognition, computer vision and medical signal processing.
☆ This research was supported by the National Science Foundation Grant EIA-0131079.