Editorial Notes
A corrigendum was issued for this article on November 7, 2019. You can download the corrigendum from the supplemental material section of this citation page.
ABSTRACT
Text entry is expected to be a common task for smart glasses users. It is generally performed with a touchpad on the temple, and eye tracking is a promising alternative; however, each approach has its own limitations. For more efficient text entry, we present gaze-assisted typing (GAT), which combines a touchpad with eye tracking. We first examined GAT with a minimal eye-input load and found it 51% faster than a two-step touch-typing method (M-SwipeBoard: 5.85 words per minute (wpm); GAT: 8.87 wpm). We then compared GAT variants requiring different numbers of touch gestures; all were equally efficient, but the variant with five touch gestures was the most preferred. Finally, using an eye-trackable head-worn display, we compared GAT with touch-only typing (SwipeZone) and eye-only typing (adjustable dwell). GAT was again the most preferred technique, and was 25.4% faster than eye-only typing and 29.4% faster than touch-only typing (GAT: 11.04 wpm; eye-only: 8.81 wpm; touch-only: 8.53 wpm).
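The abstract's percentages follow directly from the reported rates. A minimal sketch of the arithmetic, assuming the standard five-characters-per-word convention for text-entry rate (the function names here are illustrative, not from the paper; small discrepancies against the reported percentages come from the wpm figures being rounded to two decimals):

```python
def words_per_minute(transcribed_chars: int, seconds: float) -> float:
    """Standard text-entry rate: one 'word' is defined as 5 characters."""
    return (transcribed_chars / 5) / (seconds / 60)

def speedup_percent(faster_wpm: float, slower_wpm: float) -> float:
    """Relative speedup of one technique over another, in percent."""
    return (faster_wpm / slower_wpm - 1) * 100

# Rates reported in the abstract (wpm):
gat_study1, mswipeboard = 8.87, 5.85          # study 1
gat_study3, eye_only, touch_only = 11.04, 8.81, 8.53  # study 3

print(f"GAT vs. M-SwipeBoard: {speedup_percent(gat_study1, mswipeboard):.1f}% faster")
print(f"GAT vs. eye-only:     {speedup_percent(gat_study3, eye_only):.1f}% faster")
print(f"GAT vs. touch-only:   {speedup_percent(gat_study3, touch_only):.1f}% faster")
```

For example, 300 transcribed characters in two minutes gives (300/5)/2 = 30 wpm.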
Supplemental Material
Available for Download
Corrigendum to "Gaze-Assisted Typing for Smart Glasses," by Ahn et al., Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST '19).
References
- Sunggeun Ahn, Seongkook Heo, and Geehyuk Lee. 2017. Typing on a Smartwatch for Smart Glasses. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces (ISS '17). ACM, New York, NY, USA, 201--209. DOI: http://dx.doi.org/10.1145/3132272.3134136
- Behrooz Ashtiani and I. Scott MacKenzie. 2010. BlinkWrite2: An Improved Text Entry Method Using Eye Blinks. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (ETRA '10). ACM, New York, NY, USA, 339--345. DOI: http://dx.doi.org/10.1145/1743666.1743742
- L. R. Bahl, R. Bakis, J. Bellegarda, P. F. Brown, D. Burshtein, S. K. Das, P. V. de Souza, P. S. Gopalakrishnan, F. Jelinek, D. Kanevsky, R. L. Mercer, A. J. Nadas, D. Nahamoo, and M. A. Picheny. 1989. Large vocabulary natural language continuous speech recognition. In International Conference on Acoustics, Speech, and Signal Processing, Vol. 1. 465--467. DOI: http://dx.doi.org/10.1109/ICASSP.1989.266464
- Ishan Chatterjee, Robert Xiao, and Chris Harrison. 2015. Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction (ICMI '15). ACM, New York, NY, USA, 131--138. DOI: http://dx.doi.org/10.1145/2818346.2820752
- Xiang 'Anthony' Chen, Tovi Grossman, and George Fitzmaurice. 2014. Swipeboard: A Text Entry Technique for Ultra-small Interfaces That Supports Novice to Expert Transitions. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST '14). ACM, New York, NY, USA, 615--620. DOI: http://dx.doi.org/10.1145/2642918.2647354
- Sangwon Choi, Jaehyun Han, Geehyuk Lee, Narae Lee, and Woohun Lee. 2011. RemoteTouch: Touch-screen-like Interaction in the TV Viewing Environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 393--402. DOI: http://dx.doi.org/10.1145/1978942.1978999
- Edward Clarkson, James Clawson, Kent Lyons, and Thad Starner. 2005. An Empirical Study of Typing Rates on mini-QWERTY Keyboards. In CHI '05 Extended Abstracts on Human Factors in Computing Systems (CHI EA '05). ACM, New York, NY, USA, 1288--1291. DOI: http://dx.doi.org/10.1145/1056808.1056898
- John J. Dudley, Keith Vertanen, and Per Ola Kristensson. 2018. Fast and Precise Touch-Based Text Entry for Head-Mounted Augmented Reality with Variable Occlusion. ACM Trans. Comput.-Hum. Interact. 25, 6, Article 30 (Dec. 2018), 40 pages. DOI: http://dx.doi.org/10.1145/3232163
- Augusto Esteves, Eduardo Velloso, Andreas Bulling, and Hans Gellersen. 2015. Orbits: Gaze Interaction for Smart Watches Using Smooth Pursuit Eye Movements. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (UIST '15). ACM, New York, NY, USA, 457--466. DOI: http://dx.doi.org/10.1145/2807442.2807499
- Jun Gong, Zheer Xu, Qifan Guo, Teddy Seyed, Xiang 'Anthony' Chen, Xiaojun Bi, and Xing-Dong Yang. 2018. WrisText: One-handed Text Entry on Smartwatch Using Wrist Gestures. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 181, 14 pages. DOI: http://dx.doi.org/10.1145/3173574.3173755
- Tovi Grossman, Xiang Anthony Chen, and George Fitzmaurice. 2015. Typing on Glasses: Adapting Text Entry to Smart Eyewear. In Proc. MobileHCI '15. ACM, 144--152. DOI: http://dx.doi.org/10.1145/2785830.2785867
- Aakar Gupta, Cheng Ji, Hui-Shyong Yeo, Aaron Quigley, and Daniel Vogel. 2019. RotoSwype: Word-Gesture Typing Using a Ring. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, Article 14, 12 pages. DOI: http://dx.doi.org/10.1145/3290605.3300244
- Juan David Hincapié-Ramos, Xiang Guo, Paymahn Moghadasian, and Pourang Irani. 2014. Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-air Interactions. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, NY, USA, 1063--1072. DOI: http://dx.doi.org/10.1145/2556288.2557130
- Yi-Ta Hsieh, Antti Jylhä, Valeria Orso, Luciano Gamberini, and Giulio Jacucci. 2016. Designing a Willing-to-Use-in-Public Hand Gestural Interaction Technique for Smart Glasses. In Proc. CHI '16. ACM, 4203--4215. DOI: http://dx.doi.org/10.1145/2858036.2858436
- Robert J. K. Jacob. 1991. The Use of Eye Movements in Human-computer Interaction Techniques: What You Look at is What You Get. ACM Trans. Inf. Syst. 9, 2 (April 1991), 152--169. DOI: http://dx.doi.org/10.1145/123078.128728
- Andrew Kurauchi, Wenxin Feng, Ajjen Joshi, Carlos Morimoto, and Margrit Betke. 2016. EyeSwipe: Dwell-free Text Entry Using Gaze Paths. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 1952--1956. DOI: http://dx.doi.org/10.1145/2858036.2858335
- Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A. Lee, and Mark Billinghurst. 2018. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 81, 14 pages. DOI: http://dx.doi.org/10.1145/3173574.3173655
- Shu-Yang Lin, Chao-Huai Su, Kai-Yin Cheng, Rong-Hao Liang, Tzu-Hao Kuo, and Bing-Yu Chen. 2011. PUB - Point Upon Body: Exploring Eyes-free Interaction and Methods on an Arm. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST '11). ACM, New York, NY, USA, 481--488. DOI: http://dx.doi.org/10.1145/2047196.2047259
- Mingyu Liu, Mathieu Nancel, and Daniel Vogel. 2015. Gunslinger: Subtle Arms-down Mid-air Interaction. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (UIST '15). ACM, New York, NY, USA, 63--71. DOI: http://dx.doi.org/10.1145/2807442.2807489
- K. Lyons, D. Plaisted, and T. Starner. 2004. Expert chording text entry on the Twiddler one-handed keyboard. In Eighth International Symposium on Wearable Computers, Vol. 1. 94--101. DOI: http://dx.doi.org/10.1109/ISWC.2004.19
- I. Scott MacKenzie and R. William Soukoreff. 2003. Phrase Sets for Evaluating Text Entry Techniques. In Proc. CHI EA '03. ACM, 754--755. DOI: http://dx.doi.org/10.1145/765891.765971
- Päivi Majaranta, Ulla-Kaija Ahola, and Oleg Špakov. 2009. Fast Gaze Typing with an Adjustable Dwell Time. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09). ACM, New York, NY, USA, 357--360. DOI: http://dx.doi.org/10.1145/1518701.1518758
- Päivi Majaranta and Andreas Bulling. 2014. Eye tracking and eye-based human--computer interaction. In Advances in Physiological Computing. Springer, 39--65.
- Meethu Malu and Leah Findlater. 2015. Personalized, Wearable Control of a Head-mounted Display for Users with Upper Body Motor Impairments. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 221--230. DOI: http://dx.doi.org/10.1145/2702123.2702188
- Anders Markussen, Mikkel Rønne Jakobsen, and Kasper Hornbæk. 2014. Vulture: A Mid-air Word-gesture Keyboard. In Proc. CHI '14. ACM, 1073--1082. DOI: http://dx.doi.org/10.1145/2556288.2556964
- M. Menozzi, A. v. Buol, H. Krueger, and Ch. Miége. 1994. Direction of gaze and comfort: discovering the relation for the ergonomic optimization of visual tasks. Ophthalmic and Physiological Optics 14, 4 (1994), 393--399. DOI: http://dx.doi.org/10.1111/j.1475-1313.1994.tb00131.x
- Carlos H. Morimoto and Arnon Amir. 2010. Context Switching for Fast Key Selection in Text Entry Applications. In Proceedings of the 2010 Symposium on Eye-Tracking Research and Applications (ETRA '10). ACM, New York, NY, USA, 271--274. DOI: http://dx.doi.org/10.1145/1743666.1743730
- Martez E. Mott, Shane Williams, Jacob O. Wobbrock, and Meredith Ringel Morris. 2017. Improving Dwell-Based Gaze Typing with Dynamic, Cascading Dwell Times. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 2558--2570. DOI: http://dx.doi.org/10.1145/3025453.3025517
- Shahriar Nirjon, Jeremy Gummeson, Dan Gelb, and Kyu-Han Kim. 2015. TypingRing: A Wearable Ring Platform for Text Input. In Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '15). ACM, New York, NY, USA, 227--239. DOI: http://dx.doi.org/10.1145/2742647.2742665
- Ken Pfeuffer, Jason Alexander, Ming Ki Chong, and Hans Gellersen. 2014. Gaze-touch: Combining Gaze with Multi-touch for Interaction on the Same Surface. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST '14). ACM, New York, NY, USA, 509--518. DOI: http://dx.doi.org/10.1145/2642918.2647397
- Ken Pfeuffer and Hans Gellersen. 2016. Gaze and Touch Interaction on Tablets. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 301--311. DOI: http://dx.doi.org/10.1145/2984511.2984514
- Vijay Rajanna. 2016. Gaze Typing Through Foot-Operated Wearable Device. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '16). ACM, New York, NY, USA, 345--346. DOI: http://dx.doi.org/10.1145/2982142.2982145
- Vijay Rajanna and John Paulin Hansen. 2018. Gaze Typing in Virtual Reality: Impact of Keyboard Design, Selection Method, and Motion. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications (ETRA '18). ACM, New York, NY, USA, Article 15, 10 pages. DOI: http://dx.doi.org/10.1145/3204493.3204541
- Julie Rico and Stephen Brewster. 2010. Usable Gestures for Mobile Interfaces: Evaluating Social Acceptability. In Proc. CHI '10. ACM, 887--896. DOI: http://dx.doi.org/10.1145/1753326.1753458
- Srinath Sridhar, Anna Maria Feit, Christian Theobalt, and Antti Oulasvirta. 2015. Investigating the Dexterity of Multi-Finger Input for Mid-Air Text Entry. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 3643--3652. DOI: http://dx.doi.org/10.1145/2702123.2702136
- Takumi Toyama, Thomas Kieninger, Faisal Shafait, and Andreas Dengel. 2012. Gaze Guided Object Recognition Using a Head-mounted Eye Tracker. In Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA '12). ACM, New York, NY, USA, 91--98. DOI: http://dx.doi.org/10.1145/2168556.2168570
- Takumi Toyama, Daniel Sonntag, Andreas Dengel, Takahiro Matsuda, Masakazu Iwamura, and Koichi Kise. 2014. A Mixed Reality Head-mounted Text Translation System Using Eye Gaze Input. In Proceedings of the 19th International Conference on Intelligent User Interfaces (IUI '14). ACM, New York, NY, USA, 329--334. DOI: http://dx.doi.org/10.1145/2557500.2557528
- Outi Tuisku, Veikko Surakka, Ville Rantanen, Toni Vanhala, and Jukka Lekkala. 2013. Text Entry by Gazing and Smiling. Adv. in Hum.-Comp. Int. 2013, Article 1 (Jan. 2013), 1 page. DOI: http://dx.doi.org/10.1155/2013/218084
- Cheng-Yao Wang, Wei-Chen Chu, Po-Tsung Chiu, Min-Chieh Hsiu, Yih-Harn Chiang, and Mike Y. Chen. 2015. PalmType: Using Palms As Keyboards for Smart Glasses. In Proc. MobileHCI '15. ACM, 153--160. DOI: http://dx.doi.org/10.1145/2785830.2785886
- David J. Ward, Alan F. Blackwell, and David J. C. MacKay. 2000. Dasher - A Data Entry Interface Using Continuous Gestures and Language Models. In Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology (UIST '00). ACM, New York, NY, USA, 129--137. DOI: http://dx.doi.org/10.1145/354401.354427
- Eric Whitmire, Mohit Jain, Divye Jain, Greg Nelson, Ravi Karkar, Shwetak Patel, and Mayank Goel. 2017. DigiTouch: Reconfigurable Thumb-to-Finger Input and Text Entry on Head-mounted Displays. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1, 3, Article 113 (Sept. 2017), 21 pages. DOI: http://dx.doi.org/10.1145/3130978
- Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins. 2011. The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only ANOVA Procedures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 143--146. DOI: http://dx.doi.org/10.1145/1978942.1978963
- Jacob O. Wobbrock, James Rubinstein, Michael W. Sawyer, and Andrew T. Duchowski. 2008. Longitudinal Evaluation of Discrete Consecutive Gaze Gestures for Text Entry. In Proceedings of the 2008 Symposium on Eye Tracking Research & Applications (ETRA '08). ACM, New York, NY, USA, 11--18. DOI: http://dx.doi.org/10.1145/1344471.1344475
- Hui-Shyong Yeo, Xiao-Shen Phang, Taejin Ha, Woontack Woo, and Aaron Quigley. 2017. TiTAN: Exploring Midair Text Entry Using Freehand Input. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '17). ACM, New York, NY, USA, 3041--3049. DOI: http://dx.doi.org/10.1145/3027063.3053228
- Xin Yi, Chun Yu, Mingrui Zhang, Sida Gao, Ke Sun, and Yuanchun Shi. 2015. ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data. In Proc. UIST '15. ACM, 539--548. DOI: http://dx.doi.org/10.1145/2807442.2807504
- Chun Yu, Ke Sun, Mingyuan Zhong, Xincheng Li, Peijun Zhao, and Yuanchun Shi. 2016. One Dimensional Handwriting: Inputting Letters and Words on Smart Glasses. In Proc. CHI '16. ACM, 71--82. DOI: http://dx.doi.org/10.1145/2858036.2858542
- Shumin Zhai, Carlos Morimoto, and Steven Ihde. 1999. Manual and Gaze Input Cascaded (MAGIC) Pointing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '99). ACM, New York, NY, USA, 246--253. DOI: http://dx.doi.org/10.1145/302979.303053