Published in: Autonomous Robots 3/2018

05.07.2017

Generic method for generating blended gestures and affective functional behaviors for social robots

Authors: Greet Van de Perre, Hoang-Long Cao, Albert De Beir, Pablo Gómez Esteban, Dirk Lefeber, Bram Vanderborght



Abstract

Gesturing is an important modality in human–robot interaction. To date, gestures are often implemented for a specific robot configuration and are therefore not easily transferable to other robots. To address this issue, we previously presented a generic method to calculate gestures for social robots. The method was designed to work in two modes, allowing the calculation of different types of gestures. In this paper, we present new developments of the method. We discuss how the two working modes can be combined to generate blended emotional expressions and deictic gestures. In certain situations, it is desirable to express an emotional state through an ongoing functional behavior. We therefore implemented the possibility of modulating a pointing or reaching gesture into an affective gesture by influencing the motion speed and the amplitude of the posture. The new implementations were validated on virtual models with different configurations, including those of the robots NAO and Justin.
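The modulation described in the abstract — turning a functional pointing or reaching motion into an affective one by adjusting motion speed and posture amplitude — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the linear amplitude scaling about a neutral posture, and the uniform time warp are all assumptions made for illustration.

```python
import numpy as np

def modulate_trajectory(q, t, neutral, amplitude_gain=1.0, speed_gain=1.0):
    """Illustrative affect modulation of a joint trajectory.

    q:              sampled joint angles along the gesture [rad]
    t:              corresponding time stamps [s]
    neutral:        neutral (rest) posture the amplitude is scaled about
    amplitude_gain: >1 exaggerates the posture, <1 dampens it
    speed_gain:     >1 speeds the motion up, <1 slows it down
    """
    q = np.asarray(q, dtype=float)
    t = np.asarray(t, dtype=float)
    # Amplitude: scale the deviation from the neutral posture.
    q_mod = neutral + amplitude_gain * (q - neutral)
    # Speed: uniformly compress or stretch the time axis.
    t_mod = t / speed_gain
    return q_mod, t_mod

# Example: a 1-DOF reaching motion made larger and faster,
# a combination often associated with high-arousal affect.
t = np.linspace(0.0, 2.0, 5)
q = np.array([0.0, 0.3, 0.8, 1.1, 1.2])  # joint angle samples [rad]
q_mod, t_mod = modulate_trajectory(q, t, neutral=0.0,
                                   amplitude_gain=1.2, speed_gain=2.0)
```

Because the end-effector goal of a pointing gesture is task-constrained, a real system would apply such scaling only to the redundant, expressive degrees of freedom rather than to the whole posture.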


Metadata
Title
Generic method for generating blended gestures and affective functional behaviors for social robots
Authors
Greet Van de Perre
Hoang-Long Cao
Albert De Beir
Pablo Gómez Esteban
Dirk Lefeber
Bram Vanderborght
Publication date
05.07.2017
Publisher
Springer US
Published in
Autonomous Robots / Issue 3/2018
Print ISSN: 0929-5593
Electronic ISSN: 1573-7527
DOI
https://doi.org/10.1007/s10514-017-9650-0
