Published in: Wireless Personal Communications 1/2019

30.03.2019

Real Time Multi Object Detection for Blind Using Single Shot Multibox Detector

Authors: Adwitiya Arora, Atul Grover, Raksha Chugh, S. Sofana Reka



Abstract

According to world health statistics, 285 million of the world's 7.6 billion people live with a visual impairment, roughly 4 in every 100 people. The absence of vision severely restricts a person's mobility, so there is a need for a dedicated device that serves as a guiding aid. This paper proposes a prototype that performs real-time object detection using image segmentation and a deep neural network. The detected object, its position with respect to the person, and the detection accuracy are conveyed to the blind user, the device holder, through speech output. This work combines the Single Shot MultiBox Detector (SSD) framework with the MobileNet architecture to achieve rapid real-time multi-object detection in a compact, portable device with minimal response time.
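The abstract names the SSD-with-MobileNet pipeline and the speech prompt but gives no implementation detail, so the sketch below is only a minimal illustration of that kind of pipeline, not the authors' prototype. It assumes the publicly available Caffe MobileNet-SSD model files (the file names `MobileNetSSD_deploy.prototxt` and `MobileNetSSD_deploy.caffemodel`, the 0.5 confidence threshold, and the left/ahead/right cue are all assumptions) together with OpenCV's DNN module and the `pyttsx3` text-to-speech library.

```python
# Minimal sketch of SSD + MobileNet detection with speech feedback.
# Assumes the public Caffe MobileNet-SSD weights; not the paper's exact prototype.
import cv2
import pyttsx3

# Class labels of the standard MobileNet-SSD Caffe model (PASCAL VOC classes).
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse",
           "motorbike", "person", "pottedplant", "sheep", "sofa", "train",
           "tvmonitor"]

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",    # assumed file names
                               "MobileNetSSD_deploy.caffemodel")
tts = pyttsx3.init()
cap = cv2.VideoCapture(0)  # camera worn by the user

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # SSD expects a 300x300, mean-subtracted, scaled input blob.
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape: (1, 1, N, 7)

    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence < 0.5:        # confidence threshold (assumption)
            continue
        class_id = int(detections[0, 0, i, 1])
        x1 = detections[0, 0, i, 3] * w
        x2 = detections[0, 0, i, 5] * w

        # Crude left/ahead/right position cue from the box centre.
        cx = (x1 + x2) / 2.0
        position = "left" if cx < w / 3 else "right" if cx > 2 * w / 3 else "ahead"

        # Speak object, position, and confidence to the user.
        tts.say(f"{CLASSES[class_id]} {position}, "
                f"{int(confidence * 100)} percent sure")
        tts.runAndWait()
```

In a wearable setting one would typically throttle the speech output so that repeated detections of the same object across frames do not flood the user.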


Metadata
Title
Real Time Multi Object Detection for Blind Using Single Shot Multibox Detector
Authors
Adwitiya Arora
Atul Grover
Raksha Chugh
S. Sofana Reka
Publication date
30.03.2019
Publisher
Springer US
Published in
Wireless Personal Communications / Issue 1/2019
Print ISSN: 0929-6212
Electronic ISSN: 1572-834X
DOI
https://doi.org/10.1007/s11277-019-06294-1
