
01.01.2020 | APPLIED PROBLEMS

Numbering and Classification of Panoramic Dental Images Using 6-Layer Convolutional Neural Network

Authors: Prerna Singh, Priti Sehgal

Published in: Pattern Recognition and Image Analysis | Issue 1/2020

Abstract

Deep convolutional neural networks are among the most powerful tools for complex problems such as image classification, image recognition, financial analysis, and medical diagnosis. A dental panoramic image shows the teeth of both the upper and lower jaws. Automatically classifying such images into tooth types such as canines, incisors, premolars, and molars is a challenging task that normally requires an experienced dentist. In this paper, we propose a technique for numbering and classifying panoramic dental images. The proposed algorithm consists of four stages: pre-processing, segmentation, numbering, and classification. The pre-processed panoramic images are segmented using fuzzy c-means clustering, and vertical integral projection is then applied to extract individual teeth. The dataset consists of 400 dental panoramic images collected from various dental clinics, divided into 240 training samples and 160 testing samples, and augmented by applying various transformations. The teeth are numbered according to the universal dental numbering system. Finally, classification is performed with a 6-layer deep convolutional neural network (DCNN) consisting of 3 convolutional layers and 3 fully connected layers, which labels each tooth as a canine, incisor, molar, or premolar. The proposed algorithm achieves an accuracy of 95% on the augmented dataset and 92% on the original dataset. The proposed numbering and classification of dental panoramic images is useful in biomedical applications and in post-mortem dental record keeping; in the event of a large-scale disaster, the system can assist the dentist in compiling post-mortem dental records, which is otherwise a lengthy and arduous task.
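Two short sketches make the segmentation and classification stages more concrete. The first illustrates the vertical integral projection step, assuming a grayscale jaw strip already isolated by the fuzzy c-means segmentation; the function name split_teeth_by_projection, the SciPy-based valley detection, and the min_gap_distance parameter are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def split_teeth_by_projection(jaw_region, min_gap_distance=40):
    """Find cut points between adjacent teeth in a segmented jaw strip.

    jaw_region: 2D grayscale array with teeth brighter than the background.
    Returns column indices at which single teeth can be cropped out.
    """
    # Vertical integral projection: sum the pixel intensities of each column.
    projection = jaw_region.sum(axis=0).astype(np.float64)

    # Gaps between neighbouring teeth appear as valleys in the projection
    # curve, so detect peaks in the negated profile.
    valleys, _ = find_peaks(-projection, distance=min_gap_distance)
    return valleys.tolist()
```

The second sketch shows one plausible Keras realisation of the 6-layer DCNN described above: 3 convolutional layers followed by 3 fully connected layers, ending in a 4-way softmax over incisor, canine, premolar, and molar. The input size, filter counts, pooling, optimizer, and loss are assumptions for illustration; the abstract fixes only the 3 + 3 layer structure and the four output classes.

```python
import tensorflow as tf

def build_tooth_classifier(input_shape=(64, 64, 1), num_classes=4):
    """6-layer DCNN: 3 convolutional layers + 3 fully connected layers (assumed hyperparameters)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        # Convolutional feature extractor (3 conv layers, each followed by pooling).
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        # Classifier head (3 fully connected layers).
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```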


Metadata
Title
Numbering and Classification of Panoramic Dental Images Using 6-Layer Convolutional Neural Network
Authors
Prerna Singh
Priti Sehgal
Publication date
01.01.2020
Publisher
Pleiades Publishing
Published in
Pattern Recognition and Image Analysis / Issue 1/2020
Print ISSN: 1054-6618
Electronic ISSN: 1555-6212
DOI
https://doi.org/10.1134/S1054661820010149