2022 | OriginalPaper | Chapter

Designing AI-Support VR by Self-supervised and Initiative Selective Supports

Authors: Ritwika Mukherjee, Jun-Li Lu, Yoichi Ochiai

Published in: Universal Access in Human-Computer Interaction. User and Context Diversity

Publisher: Springer International Publishing

Abstract

We aim to provide flexible support mechanisms and intelligent support content for users in VR, in contrast to existing approaches that rely on a single sensing function or a fixed combination of sensing functions, e.g., support based on gesture, head, or body movement. To provide support functions that adapt to the VR context and to user feedback, we propose a semi-automatic selection of interactive supports. In modeling this semi-automatic selection from user feedback and VR context, we evaluate performance by combining an AI-based evaluation, trained on data of users' performance in VR, with the user's own initiative feedback. Furthermore, to make the estimation behind the VR support customizable and personalized, we propose applying self-supervised machine learning, which lets us train or retrain estimation models at low data cost, e.g., by reducing data-labeling effort and reusing existing models. We still need to evaluate the timing of selecting or modifying support mechanisms; the balance between automatic and user-initiated control, which depends on user preference and experience, the smoothness of the VR context, and user awareness and understanding; and the scale, quantity, and quality of data and training needed for stable, accurate, and useful estimation of VR support.
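
The semi-automatic selection can be pictured as blending two signals: a score from an AI evaluator trained on logged VR performance, and a running estimate of the user's own feedback. The following Python sketch illustrates one such blend; SupportMode, VRContext, the alpha mixing weight, and the scoring heuristics are all illustrative assumptions, not the chapter's implementation.

```python
# Minimal sketch of semi-automatic support selection as described above.
# All names and heuristics here are assumptions for illustration only.
from dataclasses import dataclass, field
from enum import Enum


class SupportMode(Enum):
    GESTURE = "gesture"
    HEAD = "head"
    BODY = "body"


@dataclass
class VRContext:
    # Hypothetical context features an AI evaluator might consume.
    task_difficulty: float   # 0..1
    motion_intensity: float  # 0..1


@dataclass
class SupportSelector:
    # alpha weights the AI score against user-initiative feedback.
    alpha: float = 0.7
    feedback: dict = field(default_factory=dict)  # mode -> running rating in [0, 1]

    def ai_score(self, mode: SupportMode, ctx: VRContext) -> float:
        # Stand-in for a model trained on logged user performance in VR.
        if mode is SupportMode.GESTURE:
            return 1.0 - ctx.motion_intensity  # gestures degrade under fast motion
        if mode is SupportMode.HEAD:
            return 1.0 - ctx.task_difficulty   # head pointing suits easy tasks
        return 0.5 * (ctx.task_difficulty + ctx.motion_intensity)

    def record_feedback(self, mode: SupportMode, rating: float) -> None:
        # Exponential moving average keeps the user's voice in the loop.
        self.feedback[mode] = 0.8 * self.feedback.get(mode, 0.5) + 0.2 * rating

    def select(self, ctx: VRContext) -> SupportMode:
        def blended(mode: SupportMode) -> float:
            user = self.feedback.get(mode, 0.5)  # neutral prior before any feedback
            return self.alpha * self.ai_score(mode, ctx) + (1 - self.alpha) * user
        return max(SupportMode, key=blended)


if __name__ == "__main__":
    selector = SupportSelector(alpha=0.7)
    selector.record_feedback(SupportMode.BODY, rating=0.9)  # user prefers body support
    print(selector.select(VRContext(task_difficulty=0.8, motion_intensity=0.6)))
```

Raising alpha shifts initiative toward the automatic evaluator; lowering it gives the user's feedback more weight, which is one way to realize the automatic/user-initiative ratio the abstract leaves open for evaluation.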

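For the self-supervised proposal, one plausible reading is a two-stage pipeline: pretrain an encoder on unlabeled interaction logs with a pretext task (here, predicting the next head pose from a short window), then fine-tune a small support-estimation head on a handful of labels. The sketch below uses PyTorch with synthetic data; the architecture, pretext task, and dimensions are assumptions for illustration, not the authors' model.

```python
# Sketch of the self-supervised stage: pretrain on unlabeled VR traces via a
# pretext task, then fine-tune a small labeled head. All details assumed;
# random noise stands in for logged interaction traces.
import torch
from torch import nn

WINDOW, POSE_DIM, N_SUPPORTS = 10, 6, 3  # 6-DoF poses, 3 support modes (assumed)

encoder = nn.Sequential(nn.Linear(WINDOW * POSE_DIM, 64), nn.ReLU(), nn.Linear(64, 32))
pretext_head = nn.Linear(32, POSE_DIM)   # predicts the next pose: no labels needed

# Stage 1: self-supervised pretraining on unlabeled traces.
opt = torch.optim.Adam(list(encoder.parameters()) + list(pretext_head.parameters()), lr=1e-3)
mse = nn.MSELoss()
for _ in range(200):
    traces = torch.randn(32, WINDOW + 1, POSE_DIM)        # stand-in for real logs
    x, target = traces[:, :-1].flatten(1), traces[:, -1]  # window -> next pose
    opt.zero_grad()
    mse(pretext_head(encoder(x)), target).backward()
    opt.step()

# Stage 2: fine-tune a small head with only a handful of labeled ratings;
# the pretrained encoder is frozen, which is where the labeling cost drops.
support_head = nn.Linear(32, N_SUPPORTS)
opt2 = torch.optim.Adam(support_head.parameters(), lr=1e-3)
few_x = torch.randn(8, WINDOW * POSE_DIM)                 # just 8 labeled windows
few_y = torch.randint(0, N_SUPPORTS, (8,))
for _ in range(100):
    opt2.zero_grad()
    with torch.no_grad():
        feats = encoder(few_x)                            # frozen features
    nn.functional.cross_entropy(support_head(feats), few_y).backward()
    opt2.step()
```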

Metadata
Title
Designing AI-Support VR by Self-supervised and Initiative Selective Supports
Authors
Ritwika Mukherjee
Jun-Li Lu
Yoichi Ochiai
Copyright Year
2022
DOI
https://doi.org/10.1007/978-3-031-05039-8_17