Research Article | Open Access

Social eye gaze in human-robot interaction: a review

Published: 26 May 2017

Abstract

This article reviews the state of the art in social eye gaze for human-robot interaction (HRI). It establishes three categories of gaze research in HRI, distinguished by their goals and methods: a human-centered approach, which focuses on people's responses to gaze; a design-centered approach, which addresses the features of robot gaze behavior and appearance that improve interaction; and a technology-centered approach, which concentrates on the computational tools for implementing social eye gaze in robots. The article begins with background on gaze research in HRI and concludes with a set of open questions.
