
An augmented reality interface to contextual information

Published in Virtual Reality (Special Issue: Augmented Reality)

Abstract

In this paper, we report on a prototype augmented reality (AR) platform for accessing abstract information in real-world pervasive computing environments. Using this platform, objects, people, and the environment serve as contextual channels to further information. The user's interest in the environment is inferred from eye movement patterns, speech, and other implicit feedback signals, and these data are used for information filtering. The results of proactive, context-sensitive information retrieval are overlaid on the view of a handheld or head-mounted display, or presented as synthetic speech. The augmented information becomes part of the user's context; if the user shows interest in the AR content, the system detects this and provides progressively more detail. We describe the first use of the platform to build a pilot application, the Virtual Laboratory Guide, and present early evaluation results for this application.
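The abstract outlines a closed loop: implicit attention signals (gaze, speech) drive information filtering and proactive retrieval, and sustained attention to the augmented content itself unlocks progressively more detail. A minimal Python sketch of such a loop, using gaze alone, is shown below. Every name in it (GazeSample, RelevanceModel, retrieval_loop, the retrieve and render callbacks, and the decay and threshold values) is a hypothetical stand-in for illustration, not the authors' platform API, which also fuses speech and other implicit signals.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class GazeSample:
    target_id: str   # object, person, or AR annotation currently fixated
    duration: float  # fixation duration in seconds


class RelevanceModel:
    """Accumulates implicit interest scores from fixation durations."""

    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.scores = defaultdict(float)

    def update(self, sample: GazeSample) -> None:
        # Decay old evidence, then credit the fixated target: long or
        # repeated fixations are read as implicit signals of interest.
        for key in self.scores:
            self.scores[key] *= self.decay
        self.scores[sample.target_id] += sample.duration

    def most_relevant(self):
        return max(self.scores, key=self.scores.get, default=None)


def retrieval_loop(gaze_stream, retrieve, render, detail_threshold=2.0):
    """Overlay retrieved information on the most attended target.

    Continued attention to an already augmented target raises its level
    of detail, mimicking the progressive disclosure in the abstract.
    """
    model = RelevanceModel()
    detail = defaultdict(int)  # current detail level per target
    for sample in gaze_stream:
        model.update(sample)
        target = model.most_relevant()
        if target is None:
            continue
        # Enough accumulated interest unlocks the next level of detail.
        if model.scores[target] > detail_threshold * (detail[target] + 1):
            detail[target] += 1
        render(target, retrieve(target, level=detail[target]))


# Toy usage: four simulated fixations on the same poster gradually
# raise its detail level from 0 to 1.
retrieval_loop(
    iter([GazeSample("poster_7", 0.8)] * 4),
    retrieve=lambda t, level: f"info about {t} (level {level})",
    render=lambda t, text: print(text),
)
```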



Acknowledgments

Antti Ajanki, Melih Kandemir, Samuel Kaski, Markus Koskela, Mikko Kurimo, Jorma Laaksonen, Kai Puolamäki, and Teemu Ruokolainen belong to the Adaptive Informatics Research Centre at Aalto University; Antti Ajanki, Melih Kandemir, Samuel Kaski, and Kai Puolamäki to the Helsinki Institute for Information Technology HIIT; and Kai Puolamäki to the Finnish Centre of Excellence in Algorithmic Data Analysis. This work was funded by the Aalto MIDE programme (project UI-ART) and in part by the Finnish Funding Agency for Technology and Innovation (TEKES) under the project DIEM/MMR and by the PASCAL2 Network of Excellence, ICT 216886.

Author information


Correspondence to Antti Ajanki.


About this article

Cite this article

Ajanki, A., Billinghurst, M., Gamper, H. et al. An augmented reality interface to contextual information. Virtual Reality 15, 161–173 (2011). https://doi.org/10.1007/s10055-010-0183-5

