DOI: 10.1145/3332165.3347883
research-article

Gaze-Assisted Typing for Smart Glasses

Published: 17 October 2019

Editorial Notes

A corrigendum was issued for this article on November 7, 2019. You can download the corrigendum from the supplemental material section of this citation page.

ABSTRACT

Text entry is expected to be a common task for smart glasses users. It is generally performed using a touchpad on the temple of the glasses or, in a promising alternative, using eye tracking; however, each approach has its own limitations. For more efficient text entry, we present gaze-assisted typing (GAT), which combines a touchpad with eye tracking. We first examined GAT with a minimal eye-input load and found that it was 51% faster than a two-step touch-input typing method (M-SwipeBoard: 5.85 words per minute (wpm); GAT: 8.87 wpm). We then compared GAT variants requiring different numbers of touch gestures. The results showed that the GAT variant requiring five distinct touch gestures was the most preferred, although all GAT variants were equally efficient. Finally, we compared GAT with touch-only typing (SwipeZone) and eye-only typing (adjustable dwell) on an eye-trackable head-worn display. The results show that GAT, the most preferred technique, was 25.4% faster than eye-only typing and 29.4% faster than touch-only typing (GAT: 11.04 wpm; eye-only typing: 8.81 wpm; touch-only typing: 8.53 wpm).
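The relative speedups reported in the abstract follow directly from the wpm figures. As a quick sanity check, they can be recomputed as shown below; the `wpm` helper uses the common text-entry convention of five characters per word, which is an assumption not stated in the abstract, and small rounding differences against the reported percentages are expected since the paper presumably computed them from unrounded per-participant data.

```python
def wpm(chars_transcribed: float, seconds: float) -> float:
    """Words per minute, assuming the common convention of 5 characters per word."""
    return (chars_transcribed / 5) / (seconds / 60)

def speedup_pct(faster_wpm: float, slower_wpm: float) -> float:
    """How much faster (in percent) the first entry rate is relative to the second."""
    return (faster_wpm / slower_wpm - 1) * 100

# Recompute the comparisons from the rounded wpm figures in the abstract:
print(round(speedup_pct(8.87, 5.85), 1))   # GAT vs. M-SwipeBoard, reported as 51%
print(round(speedup_pct(11.04, 8.81), 1))  # GAT vs. eye-only typing, reported as 25.4%
print(round(speedup_pct(11.04, 8.53), 1))  # GAT vs. touch-only typing, reported as 29.4%
```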


Supplemental Material

ufp2768pv.mp4 (mp4, 5.1 MB)
p857-ahn.mp4 (mp4, 467.9 MB)


Published in

UIST '19: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology
October 2019, 1229 pages
ISBN: 9781450368162
DOI: 10.1145/3332165

        Copyright © 2019 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall acceptance rate: 842 of 3,967 submissions, 21%
