With the aim of automatically generating head motion during speech, analyses are conducted to clarify the relations between head motion and the linguistic and paralinguistic information carried by speech utterances. Motion-capture data are recorded during natural dialogue, and head rotation angles are estimated from the head-marker data. The analysis results show that nods occur frequently during speech, not only to express specific dialogue acts such as agreement and affirmation, but also to mark syntactic or semantic units, appearing on the last syllable of phrases at strong phrase boundaries. The dependence of other head motions, including shakes and tilts, on linguistic, prosodic, and voice-quality information is also analyzed, and the potential of these cues for automatic head-motion generation is discussed. Intra- and inter-speaker variability in the relation between head motion and dialogue acts is also analyzed. Finally, a method for controlling the head actuators of an android based on the estimated rotation angles is proposed, and the mapping from human head motion to the android is evaluated.
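The chapter's estimation and control details are not included in this preview. As a rough, hypothetical illustration of the two steps named in the abstract, the Python sketch below estimates yaw (shake), pitch (nod), and roll (tilt) angles from three assumed head markers (forehead, left ear, right ear) and linearly maps an angle onto a position-controlled actuator command. The marker layout, the ±30° range, and the 10-bit command scale are assumptions for illustration, not the configuration used in the study.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def head_rotation_angles(forehead, left_ear, right_ear):
    """Estimate head yaw, pitch, and roll (degrees) from three 3-D markers.

    Builds an orthonormal head frame (x: left, y: forward, z: up) and
    decomposes it into Euler angles: yaw (shake) about z, pitch (nod)
    about x, roll (tilt) about y. Marker layout is a placeholder.
    """
    x = left_ear - right_ear                 # lateral axis (points left)
    x /= np.linalg.norm(x)
    mid = (left_ear + right_ear) / 2.0
    z = np.cross(x, forehead - mid)          # up axis, orthogonal to x
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                       # forward axis completes the frame
    frame = np.column_stack([x, y, z])       # head-to-world rotation matrix
    yaw, pitch, roll = Rotation.from_matrix(frame).as_euler("zxy", degrees=True)
    return yaw, pitch, roll

def angle_to_command(angle, lo=-30.0, hi=30.0, cmd_min=0, cmd_max=1023):
    """Linearly map a rotation angle onto an actuator position command.

    Angles outside [lo, hi] are clipped; the +/-30 degree range and the
    10-bit command scale are placeholders for a real actuator's limits.
    """
    a = float(np.clip(angle, lo, hi))
    return int(round((a - lo) / (hi - lo) * (cmd_max - cmd_min))) + cmd_min

# Example: synthetic marker positions (metres) for a slight downward nod.
yaw, pitch, roll = head_rotation_angles(
    forehead=np.array([0.00, 0.09, -0.02]),
    left_ear=np.array([0.07, 0.00, 0.00]),
    right_ear=np.array([-0.07, 0.00, 0.00]),
)
print(f"yaw={yaw:.1f}  pitch={pitch:.1f}  roll={roll:.1f}")
print("pitch command:", angle_to_command(pitch))
```

In this sketch a downward nod yields a negative pitch (right-hand rule about the left-pointing axis); a real mapping would calibrate sign conventions and ranges against the android's actuators.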
- Analysis of Head Motions and Speech, and Head Motion Control in an Android Robot
- Carlos Toshinori Ishi
- Chapter 6, Springer Singapore