DOI: 10.1145/2696454.2696478

Robot Form and Motion Influences Social Attention

Published: 2 March 2015

ABSTRACT

For social robots to be successful, they need to be accepted by humans. Human-robot interaction (HRI) researchers are aware of the need to develop the right kinds of robots with appropriate, natural ways for them to interact with humans. However, much of human perception and cognition occurs outside of conscious awareness, and how robotic agents engage these processes is currently unknown. Here, we explored automatic, reflexive social attention, which operates outside of conscious control within a fraction of a second, to discover whether and how these processes generalize to agents that vary in the humanlikeness of their form and motion. Using a social variant of a well-established spatial attention paradigm, we tested whether robotic or human appearance and/or motion influenced an agent's ability to capture or direct implicit social attention. In each trial, an image or video of an agent looking to one side of space (a head turn) was presented to human observers, and we measured reaction time to a peripheral target as an index of attentional capture and attentional direction. We found that all agents, regardless of humanlike form or motion, directed spatial attention in the cued direction. However, the form of the agent affected attentional capture, i.e., how quickly observers could disengage attention from the agent and respond to the target. This effect further interacted with whether the spatial cue (the head turn) was presented as a static image or a video. Overall, whereas reflexive social attention operated in the same manner for human and robot agents in spatial attentional cueing, robotic appearance, as well as whether the agent was static or moving, significantly influenced unconscious attentional capture. These studies reveal how unconscious social attentional processes operate when the agent is a human versus a robot, add novel manipulations such as the role of visual motion to the literature, and provide a link between attention studies in HRI and decades of research on unconscious social attention in experimental psychology and vision science.
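The paradigm described above is a Posner-style cueing task: a nonpredictive head-turn cue precedes a peripheral target, and attentional direction is conventionally quantified as the reaction-time cost of targets appearing opposite the cued side. The paper provides no code, so the following is only a minimal Python sketch of that standard computation; the record fields (agent, stimulus, cue_validity, rt) and the helper cueing_effects are hypothetical, not the authors' pipeline.

```python
# Minimal sketch (assumed data layout, not the paper's actual analysis code).
# Each trial is a dict with 'agent' (e.g., human/robot), 'stimulus'
# ('image'/'video'), 'cue_validity' ('valid'/'invalid'), and 'rt' in ms.
from collections import defaultdict
from statistics import mean

def cueing_effects(trials):
    """Return {(agent, stimulus): mean invalid RT - mean valid RT} in ms."""
    rts = defaultdict(list)
    for t in trials:
        rts[(t["agent"], t["stimulus"], t["cue_validity"])].append(t["rt"])
    effects = {}
    for agent, stimulus, validity in list(rts):
        if validity != "valid":
            continue
        invalid = rts.get((agent, stimulus, "invalid"))
        if invalid:
            # Positive value: targets on the cued side were detected faster.
            effects[(agent, stimulus)] = mean(invalid) - mean(rts[(agent, stimulus, "valid")])
    return effects

demo = [
    {"agent": "human", "stimulus": "video", "cue_validity": "valid", "rt": 310},
    {"agent": "human", "stimulus": "video", "cue_validity": "invalid", "rt": 345},
    {"agent": "robot", "stimulus": "image", "cue_validity": "valid", "rt": 322},
    {"agent": "robot", "stimulus": "image", "cue_validity": "invalid", "rt": 351},
]
print(cueing_effects(demo))  # per-condition cueing effects in ms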

Published in

HRI '15: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction
March 2015, 368 pages
ISBN: 9781450328838
DOI: 10.1145/2696454

Copyright © 2015 ACM
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

HRI '15 Paper Acceptance Rate: 43 of 169 submissions, 25%. Overall Acceptance Rate: 242 of 1,000 submissions, 24%.
