
2021 | OriginalPaper | Chapter

Leveraging Deep Learning and IoT for Monitoring COVID19 Safety Guidelines Within College Campus

Authors: Sahai Vedant, D’Costa Jason, Srivastava Mayank, Mehra Mahendra, Kalbande Dhananjay

Published in: Advanced Computing

Publisher: Springer Singapore


Abstract

The widespread coronavirus disease 2019 (COVID-19) pandemic has created a global emergency, with its deadly spread to around 215 countries and about 4,448,082 active cases along with 535,098 deaths globally as of July 5, 2020 [1]. The non-availability of any vaccine and low immunity against COVID-19 increase human exposure to this virus. In the absence of a vaccine, WHO guidelines such as social distancing, wearing masks, washing hands, and using sanitizers are the only defense against this pandemic. Although no one knows when the pandemic the world is going through will come to an end, we can take comfort in the expectation that we will someday return to our colleges. However, having students wait in line to be screened for COVID-19 symptoms may prove logistically challenging. Motivated by this belief, this paper proposes an IoT and deep learning-based framework for automating the tasks of verifying mask protection and measuring the body temperature of all students entering the campus. The paper provides a human-less screening solution that uses a deep learning model to flag missing face masks on students entering the campus and the non-contact temperature sensor MLX90614 to detect elevated body temperatures, reducing the risk of exposure to COVID-19.
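The entry-screening decision the abstract describes (mask on, no elevated temperature) can be sketched in a few lines. The sketch below is illustrative and not taken from the paper: it assumes the MLX90614's documented register encoding (16-bit temperature words in units of 0.02 K, per the sensor's datasheet), a hypothetical 38.0 °C fever cutoff, and a `mask_detected` flag standing in for the output of the deep learning mask detector.

```python
# Illustrative screening logic; the fever threshold is an assumption,
# not a value from the paper.
FEVER_THRESHOLD_C = 38.0


def mlx90614_raw_to_celsius(raw: int) -> float:
    """Convert a raw 16-bit MLX90614 temperature word to degrees Celsius.

    The sensor reports temperature in units of 0.02 K, so the
    conversion is raw * 0.02 (Kelvin) minus 273.15.
    """
    return raw * 0.02 - 273.15


def screen_student(mask_detected: bool, raw_temp: int) -> bool:
    """Return True if the student may enter: mask worn and no fever."""
    temp_c = mlx90614_raw_to_celsius(raw_temp)
    return mask_detected and temp_c < FEVER_THRESHOLD_C
```

In a deployment, `raw_temp` would come from the MLX90614 over I2C and `mask_detected` from the trained model's prediction on a camera frame; here both are plain parameters so the decision logic can be tested in isolation.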


Literature
7. Ting, D.S.W., Carin, L., Dzau, V., Wong, T.Y.: Digital technology and COVID-19. Nat. Med. 26(4), 459–461 (2020)
9. Nguyen-Meidine, L.T., Granger, E., Kiran, M., Blais-Morin, L.: A comparison of CNN-based face and head detectors for real-time video surveillance applications. In: 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Montreal, QC, pp. 1–7 (2017). https://doi.org/10.1109/ipta.2017.8310113
10. Alabort-i-Medina, J., Antonakos, E., Booth, J., Snape, P.: Menpo: a comprehensive platform for parametric image alignment and visual deformable models, pp. 3–6 (2014)
11. Zhu, X., Ramanan, D.: Face detection, pose estimation, and landmark localization in the wild. In: CVPR (2012)
12. Morency, L.-P., Whitehill, J., Movellan, J.R.: Generalized adaptive view-based appearance model: integrated framework for monocular head pose estimation. In: FG (2008)
13. Fanelli, G., Gall, J., Gool, L.V.: Real time head pose estimation with random regression forests. In: CVPR, pp. 617–624 (2011)
14. Asthana, A., Zafeiriou, S., Cheng, S., Pantic, M.: Robust discriminative response map fitting with constrained local models. In: CVPR (2013)
15. Asthana, A., Zafeiriou, S., Cheng, S., Pantic, M.: Incremental face alignment in the wild. In: CVPR (2014)
16. Hansen, D.W., Ji, Q.: In the eye of the beholder: a survey of models for eyes and gaze. IEEE Trans. Pattern Anal. Mach. Intell. 32, 478–500 (2010)
17. Lidegaard, M., Hansen, D.W., Krüger, N.: Head mounted device for point-of-gaze estimation in three dimensions. In: Proceedings of the Symposium on Eye Tracking Research and Applications - ETRA 2014 (2014)
18. Świrski, L., Bulling, A., Dodgson, N.A.: Robust real-time pupil tracking in highly off-axis images. In: Proceedings of ETRA (2012)
19. Ferhat, O., Vilarino, F.: A cheap portable eye-tracker solution for common setups. In: 3rd International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction (2013)
20. Wood, E., Bulling, A.: EyeTab: model-based gaze estimation on unmodified tablet computers. In: Proceedings of ETRA, March 2014
21. Zielinski, P.: Opengazer: open-source gaze tracker for ordinary webcams (2007)
23. Szegedy, C., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
24. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
25.
28.
29. Amos, B., Ludwiczuk, B., Satyanarayanan, M.: OpenFace: a general-purpose face recognition library with mobile applications. Technical report CMU-CS-16-118, CMU School of Computer Science (2016)
34. Bromley, J., et al.: Signature verification using a siamese time delay neural network. Int. J. Pattern Recogn. Artif. Intell. 7(4), 669–688 (1993)
35. Koch, G.: Siamese neural networks for one-shot image recognition. In: ICML Deep Learning Workshop (2015)
43. Viola, P., Jones, M.J.: Robust real-time face detection. Int. J. Comput. Vis. 57(2), 137–154 (2004)
44. Yan, J., Zhang, X., Lei, Z., Li, S.Z.: Real-time high-performance deformable model for face detection in the wild
45. Liu, W., et al.: SSD: single shot multibox detector. CoRR abs/1512.02325 (2015)
46. Ren, S., et al.: Faster R-CNN: towards real-time object detection with region proposal networks. CoRR abs/1506.01497 (2015)
47. Dai, J., Li, Y., He, K., Sun, J.: R-FCN: object detection via region-based fully convolutional networks. CoRR abs/1605.06409 (2016)
48. Kim, K., Cheon, Y., Hong, S., Roh, B., Park, M.: PVANET: deep but lightweight neural networks for real-time object detection. CoRR abs/1608.08021 (2016)
49. Vu, T., Osokin, A., Laptev, I.: Context-aware CNNs for person head detection. In: ICCV (2015)
50. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. CoRR abs/1612.08242 (2016)
51. Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007
Metadata
DOI
https://doi.org/10.1007/978-981-16-0401-0_3