2020 | OriginalPaper | Chapter

10. Digital Enhancement of Cultural Experience and Accessibility for the Visually Impaired

Authors: Dimitris K. Iakovidis, Dimitrios Diamantis, George Dimas, Charis Ntakolia, Evaggelos Spyrou

Published in: Technological Trends in Improved Mobility of the Visually Impaired

Publisher: Springer International Publishing

Abstract

Visual impairment restricts everyday mobility and limits access to places that sighted people take for granted. A short walk to a nearby destination, such as a market or a school, becomes a daily challenge. In this chapter, we present a novel solution to this problem that can evolve into an everyday visual aid for people with limited sight or total blindness. The proposed solution is a digital system, worn like smart glasses and equipped with cameras. An intelligent system module, incorporating efficient deep learning and uncertainty-aware decision-making algorithms, interprets the video scenes, translates them into speech, and describes them to the user through audio. The user can interact with the system almost naturally via a speech-based user interface, which is also capable of recognizing the user's emotions. The capabilities of this system are investigated in the context of accessibility and guidance in outdoor environments of cultural interest, such as the historic triangle of Athens. A survey of relevant state-of-the-art systems, technologies, and services is performed, identifying the critical system components that best fit the system's goals and the users' needs and requirements, toward a user-centered architecture design.
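To make the described pipeline concrete, the following is a minimal sketch (not from the chapter) of the scene-to-speech loop the abstract outlines. It assumes OpenCV for frame capture and pyttsx3 for offline text-to-speech; detect_objects is a hypothetical stub standing in for the chapter's deep-learning scene-interpretation module, and the confidence threshold is only a simple stand-in for its uncertainty-aware decision making.

```python
# Minimal sketch of a wearable scene-to-speech loop (assumptions noted above).
import cv2      # pip install opencv-python
import pyttsx3  # pip install pyttsx3

CONFIDENCE_THRESHOLD = 0.6  # suppress uncertain detections before speaking


def detect_objects(frame):
    """Hypothetical placeholder: return [(label, confidence), ...].

    In the chapter's system, this role is played by an efficient deep
    CNN; here it returns no detections so the sketch runs as-is.
    """
    return []


def describe(detections):
    """Turn sufficiently confident detections into a short spoken sentence."""
    confident = [label for label, score in detections
                 if score >= CONFIDENCE_THRESHOLD]
    if not confident:
        return None
    return "Ahead of you: " + ", ".join(sorted(set(confident)))


def main():
    camera = cv2.VideoCapture(0)  # wearable camera stream
    tts = pyttsx3.init()          # offline speech synthesis
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            sentence = describe(detect_objects(frame))
            if sentence:
                tts.say(sentence)
                tts.runAndWait()
    finally:
        camera.release()


if __name__ == "__main__":
    main()
```

In the actual system, these components would run on the wearable device itself, and the spoken descriptions would be richer than a list of object labels.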

DOI: https://doi.org/10.1007/978-3-030-16450-8_10