DOI: 10.1145/2858036.2858438

Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks

Published: 07 May 2016

ABSTRACT

In this work, we investigate how remote collaboration between a local worker and a remote collaborator changes when the collaborator's eye fixations are presented to the worker. We track the collaborator's points of gaze on a monitor showing the physical workspace and visualize them in that workspace with a projector or through an optical see-through head-mounted display. Through a series of user studies, we found the following: 1) Eye fixations can serve as a fast and precise pointer to objects of interest to the collaborator. 2) Eyes and other modalities, such as hand gestures and speech, are used differently for object identification and manipulation. 3) Eyes are used for explicit instructions only when combined with speech. 4) The worker can predict some of the collaborator's intentions, such as his/her current interest and next instruction.
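This page does not describe how the tracked gaze points are registered to the physical workspace, so the sketch below is only one plausible illustration: it assumes a roughly planar workspace and a one-off calibration between the workspace video shown on the collaborator's monitor and the projector, and maps each gaze sample through a homography. All point values and the helper name gaze_to_projector are made up for illustration; this is not the authors' implementation.

# Hypothetical sketch: map a gaze sample measured on the collaborator's
# monitor (which displays the workspace video) into projector pixels so a
# fixation marker can be drawn on the physical workspace.
import numpy as np
import cv2

# Calibration data (illustrative values): positions of four reference
# markers as seen in the workspace video, and the projector pixels that
# illuminate the same physical spots.
monitor_pts = np.array([[100, 80], [1180, 90], [1170, 690], [110, 700]], dtype=np.float32)
projector_pts = np.array([[0, 0], [1023, 0], [1023, 767], [0, 767]], dtype=np.float32)

# Homography from monitor (video) coordinates to projector coordinates.
H, _ = cv2.findHomography(monitor_pts, projector_pts)

def gaze_to_projector(gaze_xy):
    """Map one gaze sample (x, y) on the monitor to projector pixels."""
    pt = np.array([[gaze_xy]], dtype=np.float32)   # shape (1, 1, 2) for OpenCV
    mapped = cv2.perspectiveTransform(pt, H)
    return tuple(mapped[0, 0])

# Example: a fixation reported by the eye tracker at (640, 360) on the monitor.
print(gaze_to_projector((640, 360)))

In practice a system like the one described would also need fixation detection (e.g., grouping gaze samples that stay within a small radius) before rendering a marker, and a per-display calibration for the optical see-through head-mounted display condition.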


Supplemental Material

p5180-higuch.mp4 (MP4, 201.9 MB)


• Published in

  CHI '16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems
  May 2016
  6,108 pages
  ISBN: 9781450333627
  DOI: 10.1145/2858036

      Copyright © 2016 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 7 May 2016


      Qualifiers

      • research-article

      Acceptance Rates

CHI '16 Paper Acceptance Rate: 565 of 2,435 submissions, 23%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%
