
2017 | OriginalPaper | Chapter

A Body Emotion-Based Human-Robot Interaction

Authors : Tehao Zhu, Qunfei Zhao, Jing Xiong

Published in: Computer Vision Systems

Publisher: Springer International Publishing


Abstract

To achieve reasonable and natural interaction in the face of ambiguous human actions, a body emotion-based human-robot interaction (BEHRI) algorithm was developed in this paper. Laban movement analysis and fuzzy logic inference were used to extract movement emotion and torso-pose emotion. A finite state machine model was constructed to describe the paradigm of the robot's emotion, and an interaction strategy was then designed to generate suitable interactive behaviors. The algorithm was evaluated on the UTD-MHAD dataset, and the overall system was tested via a questionnaire. The experimental results indicated that the proposed BEHRI algorithm analyzed body emotion precisely and that the resulting interactive behaviors were accessible and satisfying, demonstrating that BEHRI has strong application potential.
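The robot-emotion model described above can be pictured as a finite state machine whose transitions are driven by the body emotion recognized from the user. The abstract does not give the paper's actual state set or transition rules, so the states, inputs, and transition table below are hypothetical placeholders, a minimal sketch of the general technique rather than the authors' implementation:

```python
# Hypothetical FSM sketch: (current robot emotion, observed human body
# emotion) -> next robot emotion. These labels are illustrative only.
ROBOT_EMOTION_FSM = {
    ("neutral", "happy"):   "happy",
    ("neutral", "angry"):   "alert",
    ("happy",   "sad"):     "concerned",
    ("alert",   "neutral"): "neutral",
}

def step(state, observed_emotion):
    """Advance the robot's emotion state; undefined inputs keep the state."""
    return ROBOT_EMOTION_FSM.get((state, observed_emotion), state)

# Feed a short sequence of recognized body emotions through the machine.
state = "neutral"
for obs in ["happy", "sad", "neutral"]:
    state = step(state, obs)
# state is now "concerned": neutral -> happy -> concerned, then the
# undefined ("concerned", "neutral") input leaves the state unchanged.
```

An interaction strategy would then map each robot emotion state to a concrete behavior (gesture, utterance, posture); keeping that mapping separate from the transition table is what makes the FSM formulation convenient for HRI design.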


Literature
1. Reddy, K.K., Shah, M.: Recognizing 50 human action categories of web videos. Mach. Vis. Appl. 24(5), 971–981 (2013)
2. Alonso Martín, F., Ramey, A., Salichs, M.A.: Speaker identification using three signal voice domains during human-robot interaction. In: Proceedings of 2014 ACM/IEEE International Conference on Human-Robot Interaction, pp. 114–115. ACM (2014)
3. Chaaraoui, A.A., Padilla-López, J.R., Climent-Pérez, P., Flórez-Revuelta, F.: Evolutionary joint selection to improve human action recognition with RGB-D devices. Expert Syst. Appl. 41(3), 786–794 (2014)
4. Venkataraman, V., Turaga, P., Lehrer, N., Baran, M., Rikakis, T., Wolf, S.L.: Attractor-shape for dynamical analysis of human movement: applications in stroke rehabilitation and action recognition. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 514–520. IEEE Press (2013)
5. Siddiqi, M.H., Ali, R., Khan, A.M., Park, Y.-T., Lee, S.: Human facial expression recognition using stepwise linear discriminant analysis and hidden conditional random fields. IEEE Trans. Image Process. 24(4), 1386–1398 (2015)
6. Yildiz, I.B., von Kriegstein, K., Kiebel, S.J.: From birdsong to human speech recognition: Bayesian inference on a hierarchy of nonlinear dynamical systems. PLoS Comput. Biol. 9(9), 1–16 (2013)
7. Chatterjee, M., Peng, S.-C.: Processing F0 with cochlear implants: modulation frequency discrimination and speech intonation recognition. Hear. Res. 235(1), 143–156 (2008)
8. Lichtenstern, M., Frassl, M., Perun, B., Angermann, M.: A prototyping environment for interaction between a human and a robotic multi-agent system. In: 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 185–186. IEEE Press (2012)
9. Yamada, T., Murata, S., Arie, H., Ogata, T.: Dynamical integration of language and behavior in a recurrent neural network for human-robot interaction. Front. Neurorobot. 10(5), 1–17 (2016)
10. Palm, R., Chadalavada, R., Lilienthal, A.: Fuzzy modeling and control for intention recognition in human-robot systems. In: 8th International Conference on Computational Intelligence (IJCCI), Porto, Portugal, pp. 67–74. SciTePress (2016)
11. Liu, P., Glas, D.F., Kanda, T., Ishiguro, H.: Data-driven HRI: learning social behaviors by example from human-human interaction. IEEE Trans. Robot. 32(4), 988–1008 (2016)
12. Bohus, D., Horvitz, E.: Managing human-robot engagement with forecasts and… um… hesitations. In: Proceedings of 16th International Conference on Multimodal Interaction, pp. 2–9. ACM (2014)
13. Aly, A., Tapus, A.: A model for synthesizing a combined verbal and nonverbal behavior based on personality traits in human-robot interaction. In: Proceedings of 8th ACM/IEEE International Conference on Human-Robot Interaction, pp. 325–332. IEEE Press (2013)
14. Liu, Z., Wu, M., Li, D., Chen, L., Dong, F., Yamazaki, Y., Hirota, K.: Communication atmosphere in humans and robots interaction based on the concept of fuzzy atmosfield generated by emotional states of humans and robots. J. Automat. Mob. Robot. Intell. Syst. 7(2), 52–63 (2013)
15. Dautenhahn, K.: Socially intelligent robots: dimensions of human–robot interaction. Philos. Trans. Roy. Soc. Lond. B 362(1480), 679–704 (2007)
16. Laban, R.: The Language of Movement: A Guidebook to Choreutics. Plays, Boston (1974)
17. Hsieh, C., Wang, Y.: Digitalize emotions to improve the quality life-analyzing movement for emotion application. J. Aesthet. Educ. 168, 64–69 (2009)
18. Ku, M.-S., Chen, Y.: From movement to emotion - a basic research of upper body (analysis foundation of body movement in the digital world 3 of 3). J. Aesthet. Educ. 164, 38–43 (2008)
20. Xia, G., Tay, J., Dannenberg, R., Veloso, M.: Autonomous robot dancing driven by beats and emotions of music. In: Proceedings of 11th International Conference on Autonomous Agents and Multiagent Systems, vol. 1, pp. 205–212. International Foundation for Autonomous Agents and Multiagent Systems (2012)
21. Chen, C., Jafari, R., Kehtarnavaz, N.: UTD-MHAD: a multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor. In: 2015 IEEE International Conference on Image Processing (ICIP), pp. 168–172. IEEE Press (2015)
Metadata
Title
A Body Emotion-Based Human-Robot Interaction
Authors
Tehao Zhu
Qunfei Zhao
Jing Xiong
Copyright Year
2017
DOI
https://doi.org/10.1007/978-3-319-68345-4_24