Research Article
DOI: 10.1145/3171221.3171287

Eye-Hand Behavior in Human-Robot Shared Manipulation

Published: 26 February 2018

ABSTRACT

Shared autonomy systems enhance people's abilities to perform activities of daily living using robotic manipulators. Recent systems succeed by first identifying their operators' intentions, typically by analyzing the user's joystick input. To enhance this recognition, it is useful to characterize people's behavior while performing such a task. Furthermore, eye gaze is a rich source of information for understanding operator intention. The goal of this paper is to provide novel insights into the dynamics of control behavior and eye gaze in human-robot shared manipulation tasks. To achieve this goal, we conduct a data collection study that uses an eye tracker to record eye gaze during a human-robot shared manipulation activity, both with and without shared autonomy assistance. We process the gaze signals from the study to extract gaze features like saccades, fixations, smooth pursuits, and scan paths. We analyze those features to identify novel patterns of gaze behaviors and highlight where these patterns are similar to and different from previous findings about eye gaze in human-only manipulation tasks. The work described in this paper lays a foundation for a model of natural human eye gaze in human-robot shared manipulation.
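
To make the gaze-feature terminology concrete, the sketch below shows one standard way such events can be segmented: a simple velocity-threshold classifier that labels each inter-sample interval as a fixation, smooth pursuit, or saccade based on its angular speed. This is a minimal illustrative Python sketch with placeholder thresholds and made-up names; it is not the authors' actual processing pipeline, which draws on established fixation/saccade/smooth-pursuit identification methods from the eye-tracking literature.

    import numpy as np

    def classify_gaze_samples(x_deg, y_deg, t_s,
                              saccade_thresh=100.0,  # deg/s, placeholder value
                              pursuit_thresh=5.0):   # deg/s, placeholder value
        """Label each inter-sample interval as fixation, pursuit, or saccade.

        x_deg, y_deg : gaze position in degrees of visual angle
        t_s          : sample timestamps in seconds
        """
        dt = np.maximum(np.diff(t_s), 1e-6)
        speed = np.hypot(np.diff(x_deg), np.diff(y_deg)) / dt  # angular speed, deg/s
        labels = np.full(speed.shape, "fixation", dtype=object)
        labels[speed >= pursuit_thresh] = "pursuit"
        labels[speed >= saccade_thresh] = "saccade"
        return labels

    # Tiny synthetic example: 100 Hz gaze that holds still, then jumps 10 degrees.
    t = np.arange(0.0, 1.0, 0.01)
    x = np.concatenate([np.zeros(50), np.full(50, 10.0)])
    y = np.zeros_like(t)
    print(classify_gaze_samples(x, y, t)[47:52])  # fixation, fixation, saccade, fixation, fixation

Chaining the centroids of consecutive fixations in time then yields a scan path over the objects and the robot arm in the workspace.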


Published in

HRI '18: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction
February 2018
468 pages
ISBN: 978-1-4503-4953-6
DOI: 10.1145/3171221

          Copyright © 2018 ACM


          Publisher

          Association for Computing Machinery

          New York, NY, United States


Acceptance Rates

HRI '18 Paper Acceptance Rate: 49 of 206 submissions, 24%
Overall Acceptance Rate: 242 of 1,000 submissions, 24%
