Abstract
This article reviews the state of the art in social eye gaze for human-robot interaction (HRI). It establishes three categories of gaze research in HRI, defined by differences in goals and methods: a human-centered approach, which focuses on people's responses to gaze; a design-centered approach, which addresses the features of robot gaze behavior and appearance that improve interaction; and a technology-centered approach, which concentrates on the computational tools for implementing social eye gaze in robots. The article begins with background on gaze research in HRI and ends with a set of open questions.