Published in: Peer-to-Peer Networking and Applications 2/2022

15.11.2021

100+ FPS detector of personal protective equipment for worker safety: A deep learning approach for green edge computing

Authors: Xiao Ke, Wenyao Chen, Wenzhong Guo


Abstract

In industrial production, personal protective equipment (PPE) protects workers from accidental injuries. However, for various reasons, wearing PPE is not strictly enforced among workers. To strengthen the monitoring of workers and thus prevent safety accidents, an automatic PPE detection method is essential. In this paper, we construct a dataset called FZU-PPE for our study, which contains four types of PPE (helmet, safety vest, mask, and gloves). To reduce model size and resource consumption, we propose a lightweight, deep-learning-based object detection method for superfast detection of whether workers are wearing PPE. We apply two lightweight techniques to optimize the network structure of the object detection algorithm, reducing the computation and parameters of the detection model by 32% and 25%, respectively, with minimal accuracy loss. We further propose a channel pruning algorithm based on the BN layer scaling factor γ to shrink the detection model. Experiments show that our lightweight object detection method takes only 9.5 ms to process a single video frame, achieving a detection speed of 105 FPS. The detection model has a minimum size of 1.82 MB and a model size compression rate of 86.7%, meeting the strict memory and computational constraints of embedded and mobile devices. Our approach is thus a superfast detection method for green edge computing.
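
The channel-pruning step described above selects channels by the magnitude of each batch-normalization layer's scaling factor γ. The sketch below illustrates the general idea in PyTorch; it is a minimal illustration under stated assumptions (a PyTorch model, a global-percentile threshold, and the function and variable names are all illustrative), not the paper's exact procedure. In this family of methods the γ values are typically first driven toward zero with an L1 sparsity penalty during training, and the pruned network is fine-tuned afterward.

```python
# Minimal sketch of channel selection driven by BatchNorm scaling factors (gamma).
# Assumes a PyTorch model; the pruning ratio, global threshold, and toy backbone
# are illustrative assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn


def select_channels_by_bn_gamma(model: nn.Module, prune_ratio: float = 0.5):
    """Return, per BatchNorm2d layer, a boolean mask of channels to keep.

    Channels whose |gamma| falls below a global percentile threshold are
    marked for removal; the corresponding convolution filters would then be
    pruned and the network fine-tuned.
    """
    # Collect all gamma magnitudes to compute one global threshold.
    gammas = torch.cat([
        m.weight.detach().abs().flatten()
        for m in model.modules()
        if isinstance(m, nn.BatchNorm2d)
    ])
    threshold = torch.quantile(gammas, prune_ratio)

    keep_masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            keep_masks[name] = m.weight.detach().abs() > threshold
    return keep_masks


if __name__ == "__main__":
    # Toy backbone standing in for the detector's convolutional layers.
    model = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    )
    masks = select_channels_by_bn_gamma(model, prune_ratio=0.5)
    for name, mask in masks.items():
        print(name, f"keep {int(mask.sum())}/{mask.numel()} channels")
```

In practice, the masks would be used to rebuild smaller convolution and BN layers (copying only the kept filters and their corresponding input channels), which is what yields the reduction in model size reported in the abstract.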


Metadata
Title
100+ FPS detector of personal protective equipment for worker safety: A deep learning approach for green edge computing
Authors
Xiao Ke
Wenyao Chen
Wenzhong Guo
Publication date
15.11.2021
Publisher
Springer US
Published in
Peer-to-Peer Networking and Applications / Issue 2/2022
Print ISSN: 1936-6442
Electronic ISSN: 1936-6450
DOI
https://doi.org/10.1007/s12083-021-01258-4
