31.10.2015 | Technical Contribution

The Role of Focus in Advanced Visual Interfaces

Authors: Jason Orlosky, Takumi Toyama, Daniel Sonntag, Kiyoshi Kiyokawa

Published in: KI - Künstliche Intelligenz | Issue 3-4/2016

Abstract

Developing more natural and intelligent interaction methods for head-mounted displays (HMDs) has been an important goal in augmented reality for many years. Recently, eye tracking interfaces and wearable displays have become small enough to be worn together for extended periods of time. In this paper, we describe the combination of monocular HMDs and an eye tracking interface and show how they can be used to automatically reduce interaction requirements for displays with both single and multiple focal planes. We then present the results of preliminary and primary experiments that test the accuracy of eye tracking for a number of different displays, such as Google Glass and Brother’s AiRScouter. Results show that our focal plane classification algorithm classifies the distance of virtual objects in our multi-focal plane display prototype with over 98 % accuracy, and distinguishes physical from virtual objects in commercial monocular displays with over 90 % accuracy. Additionally, we describe a methodology for integrating our system into augmented reality applications and attentive interfaces.
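To make the classification step concrete, the following is a minimal sketch of how gaze-based focal plane classification could be set up: a support vector machine is trained on eye-tracking features recorded while the user fixates targets at known focal distances, and is then used to predict which plane a new gaze sample belongs to. The feature set (per-eye pupil positions as a vergence cue), the file names, and the SVM parameters are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of gaze-based focal plane classification.
# Assumptions (not from the paper): features are per-eye pupil coordinates
# and pupil diameter recorded during a calibration phase, stored in .npy
# files, and an RBF-kernel SVM serves as the classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_samples, n_features) eye-tracking features, e.g. the horizontal
#    pupil position of each eye (which encodes vergence) plus pupil diameter.
# y: (n_samples,) index of the focal plane the user fixated (0..k-1).
X = np.load("eye_features.npy")        # assumed calibration recording
y = np.load("focal_plane_labels.npy")  # assumed ground-truth plane labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Standardize features, then fit an RBF-kernel SVM per user/session.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

# Held-out accuracy; at runtime, clf.predict(new_sample) would select which
# focal plane's content to emphasize or bring into focus.
print("focal plane classification accuracy:", clf.score(X_test, y_test))

In practice such a classifier would be calibrated per user, since pupil geometry and eye-tracker placement vary between wearers; the sketch above only illustrates the general train-then-predict pattern.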

Metadata
Title
The Role of Focus in Advanced Visual Interfaces
Authors
Jason Orlosky
Takumi Toyama
Daniel Sonntag
Kiyoshi Kiyokawa
Publication date
31.10.2015
Publisher
Springer Berlin Heidelberg
Published in
KI - Künstliche Intelligenz / Issue 3-4/2016
Print ISSN: 0933-1875
Electronic ISSN: 1610-1987
DOI
https://doi.org/10.1007/s13218-015-0411-y
