Published in: World Wide Web 3/2019

29.05.2018

Augmented reality displaying scheme in a smart glass based on relative object positions and orientation sensors

Authors: Shih-Wei Sun, Yi-Shan Lan


Abstract

Interactively browsing augmented reality (AR) content and displaying the proper 3D visual content on a smart glass are challenging research issues. In this paper, we propose using a depth camera to detect a human subject in a real 3D space, while orientation sensors on a smart glass reveal the attitude and orientation of the user's head for pose estimation in an AR application. By implementing a prototype that detects a user's head and measures its orientation, the proposed method provides three contributions: (i) a top-view depth camera is used to detect the user's head position, (ii) the orientation sensors on a smart glass reveal the attitude and orientation properties of the head, and (iii) the AR content displayed in the virtual space is properly mapped from the real 3D space. The experimental results demonstrate the spatial display accuracy in three testing spaces: a research lab, an office, and the center for art and technology. In addition, the proposed method is applied in a tech-art installation to allow the audience to reliably view the AR content on a smart glass.
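To make the described mapping concrete, the sketch below shows one minimal way to combine a head position reported by a top-view depth camera with yaw/pitch/roll angles from the glass's orientation sensors into a virtual camera pose for rendering the AR content. This is an illustrative assumption, not the paper's implementation: the function names, the room coordinate frame, and the Euler-angle convention are all hypothetical.

```python
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    """Rotation matrix from glass orientation angles (radians),
    assumed to be applied in yaw (Z) -> pitch (Y) -> roll (X) order."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def virtual_camera_pose(head_position, yaw, pitch, roll):
    """Combine the head position detected by the top-view depth camera
    (metres, in an assumed room coordinate frame) with the glass
    orientation into a 4x4 camera-to-world pose for the virtual space."""
    pose = np.eye(4)
    pose[:3, :3] = euler_to_rotation(yaw, pitch, roll)
    pose[:3, 3] = head_position
    return pose

if __name__ == "__main__":
    # Example: head detected 1.2 m from the room origin at 1.6 m height,
    # user looking 30 degrees to the left with a level head.
    head = np.array([1.2, 0.0, 1.6])
    pose = virtual_camera_pose(head, yaw=np.radians(30), pitch=0.0, roll=0.0)
    # A renderer would typically use the inverse (world-to-camera) as the
    # view matrix, so the AR content stays anchored to the real room.
    view = np.linalg.inv(pose)
    print(view)
```

Under these assumptions, updating the pose whenever the depth camera reports a new head position or the orientation sensors report new angles keeps the displayed AR content aligned with the real 3D space as the user moves and turns.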


Metadata
Title
Augmented reality displaying scheme in a smart glass based on relative object positions and orientation sensors
Authors
Shih-Wei Sun
Yi-Shan Lan
Publication date
29.05.2018
Publisher
Springer US
Published in
World Wide Web / Issue 3/2019
Print ISSN: 1386-145X
Electronic ISSN: 1573-1413
DOI
https://doi.org/10.1007/s11280-018-0592-z
