Generation of Head Motion During Dialogue Speech, and Evaluation in Humanoid Robots
Carlos T. Ishi
Chapter 7, Springer Singapore
During human dialogue, head motion occurs naturally and in synchrony with speech, and it may carry paralinguistic information such as intentions, attitudes, and emotions. Natural-looking head motion is therefore important for smooth human–robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, we proposed a model for generating nodding and head tilting and evaluated it on several types of humanoid robot. Analysis of subjective scores showed that the proposed model, which combines head tilting with nodding, generates head motion that is perceived as more natural than nodding alone or than directly mapping people's original motions without gaze information. We also found that robots without a mouth can use an upward motion of the face to convey that an utterance is taking place. Finally, we conducted an experiment in which participants acted as visitors to an information desk attended by robots. The evaluation results indicated that, in terms of perceived naturalness, our model is as effective as directly mapping people's original motions with gaze information.
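The abstract describes the model only at a high level: rules inferred from dialogue-act analysis trigger nods and head tilts at appropriate points in the speech. As a rough, hypothetical sketch of what such a rule-based mapping could look like, the Python below maps a dialogue-act tag and a phrase-boundary flag to a head gesture. The tag names, angles, and timing values here are invented for illustration and are not the rules or parameters from the chapter.

```python
from dataclasses import dataclass


@dataclass
class HeadMotion:
    gesture: str       # "nod", "tilt", or "none"
    pitch_deg: float   # downward head rotation for a nod
    roll_deg: float    # sideways head rotation for a tilt
    onset_s: float     # gesture delay relative to the phrase end


def generate_head_motion(dialogue_act: str, phrase_final: bool) -> HeadMotion:
    """Toy rule-based mapping from a dialogue act to a head gesture.

    Illustrates the kind of mapping the abstract describes (rules inferred
    from dialogue-act analysis); all tags and values are assumptions.
    """
    if not phrase_final:
        # Gestures are timed around phrase boundaries in this sketch;
        # mid-phrase segments produce no gesture.
        return HeadMotion("none", 0.0, 0.0, 0.0)
    if dialogue_act in ("affirm", "agree", "backchannel"):
        # Affirmative acts trigger a nod (downward pitch motion).
        return HeadMotion("nod", pitch_deg=15.0, roll_deg=0.0, onset_s=0.1)
    if dialogue_act in ("think", "hesitate", "uncertain"):
        # Hesitant or thinking acts trigger a head tilt (roll motion).
        return HeadMotion("tilt", pitch_deg=0.0, roll_deg=10.0, onset_s=0.2)
    return HeadMotion("none", 0.0, 0.0, 0.0)


# Example: a backchannel at a phrase end triggers a nod.
print(generate_head_motion("backchannel", phrase_final=True))
```

A real implementation would derive the dialogue-act inventory, gesture angles, and timing from the corpus analyses reported in the chapter, and would send the resulting pitch and roll targets to the robot's head actuators.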