ABSTRACT
In this work, we investigate how remote collaboration between a local worker and a remote collaborator changes when the collaborator's eye fixations are presented to the worker. We track the collaborator's points of gaze on a monitor displaying the physical workspace and visualize them in that space with a projector or through an optical see-through head-mounted display. Through a series of user studies, we found the following: 1) Eye fixations can serve as a fast and precise pointer to objects of the collaborator's interest. 2) Eyes and other modalities, such as hand gestures and speech, are used differently for object identification and manipulation. 3) Eyes are used for explicit instructions only when combined with speech. 4) The worker can predict some of the collaborator's intentions, such as his/her current interest and next instruction.
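Visualizing eye fixations first requires grouping raw gaze samples into fixations. The abstract does not specify the detection method; a common choice is dispersion thresholding (I-DT), sketched below under assumed pixel-coordinate samples and hypothetical threshold values (`max_dispersion`, `min_samples`) — not the authors' implementation.

```python
def detect_fixations(samples, max_dispersion=30.0, min_samples=6):
    """Group raw gaze samples (x, y) into fixations via I-DT.

    A window of consecutive samples counts as a fixation when its
    dispersion (x-range + y-range, in screen pixels) stays under
    max_dispersion for at least min_samples samples.
    Returns a list of (centroid_x, centroid_y, n_samples).
    """
    fixations = []
    i = 0
    while i + min_samples <= len(samples):
        window = samples[i:i + min_samples]
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # Grow the window while dispersion stays below threshold.
            j = i + min_samples
            while j < len(samples):
                xs2, ys2 = xs + [samples[j][0]], ys + [samples[j][1]]
                if (max(xs2) - min(xs2)) + (max(ys2) - min(ys2)) > max_dispersion:
                    break
                xs, ys = xs2, ys2
                j += 1
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), len(xs)))
            i = j
        else:
            i += 1  # Slide past the noisy leading sample.
    return fixations
```

Each returned centroid could then be drawn into the workspace by the projector or head-mounted display as the fixation marker.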
Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks