
2020 | OriginalPaper | Chapter

A Scene Classification Approach for Augmented Reality Devices

Authors : Aasim Khurshid, Sergio Cleger, Ricardo Grunitzki

Published in: HCI International 2020 – Late Breaking Papers: Virtual and Augmented Reality

Publisher: Springer International Publishing


Abstract

Augmented Reality (AR) technology overlays digital content on the physical world to enrich the user’s interaction with it. The growing number of devices built for this purpose, such as the Microsoft HoloLens, Magic Leap, and Google Glass, opens AR to a vast range of applications. A critical task in making AR devices more useful is scene/environment understanding, since it spares the device from re-mapping elements that the user has already mapped and customized. In this direction, we propose a scene classification approach for AR devices with two components: i) an AR device that captures images, and ii) a remote server that performs scene classification. Four scene classification methods, which employ convolutional neural networks, support vector machines, and transfer learning, are proposed and evaluated. Experiments conducted on real data from an indoor office environment with a Microsoft HoloLens AR device show that the proposed approach reaches up to \(99\%\) accuracy, even when scenes share similar texture information.
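The two-component pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the AR device would capture an image and send a feature vector to the remote server, which returns a scene label. The nearest-centroid classifier below is a simple stand-in for the CNN/SVM/transfer-learning models evaluated in the chapter, and all scene names and feature values are hypothetical.

```python
# Sketch of the device/server split for AR scene classification.
# In the real system, "features" would come from a CNN and the
# classifier would be an SVM trained on HoloLens imagery.
from math import dist

# Hypothetical per-scene feature centroids, learned offline on the server.
SCENE_CENTROIDS = {
    "meeting_room": (0.9, 0.1, 0.2),
    "open_office":  (0.2, 0.8, 0.3),
    "corridor":     (0.1, 0.2, 0.9),
}

def server_classify(features):
    """Server side: return the scene whose centroid is closest."""
    return min(SCENE_CENTROIDS, key=lambda s: dist(features, SCENE_CENTROIDS[s]))

def device_capture_and_query(features):
    """Device side: in the real system this would POST the captured
    image (or its features) to the server; here we call it directly."""
    return server_classify(features)

print(device_capture_and_query((0.85, 0.15, 0.25)))  # prints "meeting_room"
```

Keeping the classifier on a remote server, as the chapter proposes, avoids running a heavy model on the headset itself, at the cost of a network round trip per query.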


Metadata
Title
A Scene Classification Approach for Augmented Reality Devices
Authors
Aasim Khurshid
Sergio Cleger
Ricardo Grunitzki
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-59990-4_14