Published in: Wireless Networks 8/2023

13.06.2023 | Original Paper

Knowledge discovery of suspicious objects using hybrid approach with video clips and UAV images in distributed environments: a novel approach

Authors: Rayees Ahamad, Kamta Nath Mishra


Abstract

Current video surveillance systems that employ manual face detection and automatic face recognition in unmanned aerial vehicles (UAVs) achieve limited accuracy, typically below 90%, because only a small number of Eigenfaces is used for the principal component analysis (PCA) transformation. Detecting faces in cloud-based Internet of Things (IoT) video frames involves separating video/image windows into two classes: one containing faces (used to train against the surroundings) and the other containing matches in the foreground. Face detection is further complicated by varying pose geometry, inconsistent image/video quality, and lighting conditions, as well as possible partial occlusion and disguises. A fully automated iris-image-based face recognition and detection system could prove useful in surveillance applications such as securing automated teller machine users, whereas an automated face recognition system using UAV video frames in a cloud-integrated IoT-based distributed computing environment is better suited to mug-shot matching and the surveillance of suspicious objects, because mug shots are captured under controlled conditions. The proposed hybrid approach was rigorously tested, and the experimental results suggest that its real-world performance will be considerably more accurate than that of existing systems. Intelligent surveillance knowledge databases contain vast amounts of information on landmarks, terrain, events, activities, and entities that must be processed and disseminated efficiently and accurately; discovering the appropriate knowledge to detect suspicious objects therefore plays a crucial role in subsequent analysis. The experimental findings indicate that the proposed hybrid approach achieves high accuracy, low overall and average error rates, and very high average recall rates on both benchmark and self-generated datasets, demonstrating its robustness, efficiency, and reliability. Although further improvements are possible, the proposed approach is sufficient for detecting suspicious objects.
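As a rough illustration of the Eigenfaces/PCA step the abstract refers to, the sketch below projects flattened grayscale face images onto a small number of principal components and matches a probe face by nearest neighbour in that subspace. This is a minimal NumPy sketch under assumed inputs (the array layout and the choice of k = 50 retained eigenfaces are illustrative assumptions); it is not the authors' implementation.

```python
# Minimal Eigenfaces/PCA sketch (illustrative, not the paper's code).
import numpy as np

def fit_eigenfaces(faces: np.ndarray, k: int = 50):
    """faces: (n_samples, h*w) flattened grayscale training faces."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # Economy-size SVD; the rows of vt are the principal axes (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]          # keep the k strongest components

def project(face: np.ndarray, mean_face: np.ndarray, eigenfaces: np.ndarray):
    """Map one flattened face to its k-dimensional PCA coefficients."""
    return eigenfaces @ (face - mean_face)

def recognize(probe, gallery_coeffs, mean_face, eigenfaces):
    """Nearest neighbour in eigenface space; returns the gallery index."""
    w = project(probe, mean_face, eigenfaces)
    dists = np.linalg.norm(gallery_coeffs - w, axis=1)
    return int(np.argmin(dists))
```

The abstract's point about accuracy follows directly from this sketch: with too few eigenfaces, the projection discards discriminative variation (pose, illumination, partial occlusion), so nearest-neighbour matching degrades. Increasing k, or replacing the PCA stage within a hybrid pipeline as the paper proposes, is the corresponding remedy.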


Metadata
Title
Knowledge discovery of suspicious objects using hybrid approach with video clips and UAV images in distributed environments: a novel approach
Authors
Rayees Ahamad
Kamta Nath Mishra
Publication date
13.06.2023
Publisher
Springer US
Published in
Wireless Networks / Issue 8/2023
Print ISSN: 1022-0038
Electronic ISSN: 1572-8196
DOI
https://doi.org/10.1007/s11276-023-03394-6
