ABSTRACT
For social robots to be successful, they need to be accepted by humans. Human-robot interaction (HRI) researchers are aware of the need to develop the right kinds of robots with appropriate, natural ways for them to interact with humans. However, much of human perception and cognition occurs outside of conscious awareness, and how robotic agents engage these processes is currently unknown. Here, we explored automatic, reflexive social attention, which operates outside of conscious control within a fraction of a second, to discover whether and how these processes generalize to agents that vary in the humanlikeness of their form and motion. Using a social variant of a well-established spatial attention paradigm, we tested whether robotic or human appearance and/or motion influenced an agent's ability to capture or direct implicit social attention. In each trial, an image or video of an agent looking to one side of space (a head turn) was presented to human observers, and we measured reaction time to a peripheral target as an index of attentional capture and direction. We found that all agents, regardless of humanlike form or motion, directed spatial attention in the cued direction. However, the form of the agent affected attentional capture, i.e., how quickly observers could disengage attention from the agent and respond to the target, and this effect further interacted with whether the spatial cue (head turn) was presented as a static image or a video. Thus, whereas reflexive social attention operated in the same manner for human and robot agents in spatial attentional cueing, robotic appearance, as well as whether the agent was static or moving, significantly influenced unconscious attentional capture. Overall, these studies reveal how unconscious social attentional processes operate when the agent is a human vs. a robot, add novel manipulations to the literature, such as the role of visual motion, and provide a link between attention studies in HRI and decades of research on unconscious social attention in experimental psychology and vision science.
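The cueing measure described above can be sketched in code. The following minimal illustration (hypothetical data; the function name `cueing_effect` and the trial fields are our own, not from the study) computes the classic cueing effect as the mean reaction-time difference between invalid trials (target opposite the head turn) and valid trials (target on the cued side); a positive value indicates attention was shifted toward where the agent looked:

```python
from statistics import mean

def cueing_effect(trials):
    """Mean RT (ms) on invalid minus valid trials.

    Each trial is a dict with:
      'cue'    -- side the agent looked toward ('left' or 'right')
      'target' -- side the peripheral target appeared
      'rt'     -- observer's reaction time in milliseconds
    """
    valid = [t['rt'] for t in trials if t['cue'] == t['target']]
    invalid = [t['rt'] for t in trials if t['cue'] != t['target']]
    return mean(invalid) - mean(valid)

# Hypothetical trials: responses are faster when the target
# appears on the side the agent looked toward, even though
# the cue does not predict the target location.
trials = [
    {'cue': 'left',  'target': 'left',  'rt': 310},
    {'cue': 'left',  'target': 'right', 'rt': 345},
    {'cue': 'right', 'target': 'right', 'rt': 305},
    {'cue': 'right', 'target': 'left',  'rt': 350},
]
print(cueing_effect(trials))  # 40.0
```

In the actual paradigm, overall response speed (collapsed across cue validity) additionally indexes attentional capture, i.e., how quickly observers disengage from the central agent.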
Index Terms
- Robot Form and Motion Influences Social Attention