Published in: International Journal of Social Robotics 4/2013

01.11.2013

Representing Affective Facial Expressions for Robots and Embodied Conversational Agents by Facial Landmarks

Authors: Caixia Liu, Jaap Ham, Eric Postma, Cees Midden, Bart Joosten, Martijn Goudbeek



Abstract

Affective robots and embodied conversational agents require convincing facial expressions to be socially acceptable. To generate facial expressions virtually, we need to investigate the relationship between technology and human perception of affective and social signals. Facial landmarks, the locations of the crucial parts of a face, are important for the perception of the affective and social signals conveyed by facial expressions. Earlier research did not use digitally extracted landmarks, but instead relied on analogue techniques to generate point-light faces. The goal of our study is to investigate whether digitally extracted facial landmarks contain sufficient information for humans to recognize the facial expressions they encode. In this study, participants were presented with facial expressions encoded as moving landmarks: facial-landmark videos extracted by face-analysis software from full-face videos of acted emotions. The facial-landmark videos were presented to 16 participants, who were instructed to classify each sequence according to the emotion it represented. Results revealed that participants recognized three of the five emotions (happiness, sadness, and anger) accurately from the facial-landmark videos, whereas recognition accuracy for the other two (fear and disgust) was below chance, suggesting that landmarks carry information about the expressed emotions. Results also show that emotions with high levels of arousal and valence are better recognized than those with low levels of arousal and valence. We argue that whether digitally extracted facial landmarks can serve as a basis for representing facial expressions of emotion is a question crucial to the future development of successful human-robot interaction. We conclude that landmarks provide a basis for the virtual generation of emotions in humanoid agents, and discuss how additional facial information might be included to provide a sufficient basis for faithful emotion identification.
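To make the landmark-extraction step concrete, the sketch below shows one way to turn a full-face video into a landmark-only (point-light-style) stimulus of the kind described in the abstract. This is not the authors' face-analysis software; it is a minimal illustration assuming the commonly used dlib 68-point shape predictor and OpenCV for video I/O, and the file names ("shape_predictor_68_face_landmarks.dat", "full_face_emotion.mp4", "landmark_only.avi") are placeholders.

```python
# Minimal sketch: extract 68 facial landmarks per frame and render them as
# moving points on a blank background, producing a landmark-only video.
# Assumes dlib's pretrained 68-point model and OpenCV; not the authors' pipeline.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

cap = cv2.VideoCapture("full_face_emotion.mp4")  # hypothetical input video
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    canvas = np.zeros_like(frame)  # black background: only the landmarks remain visible
    for face in detector(gray, 1):
        shape = predictor(gray, face)
        for i in range(68):
            p = shape.part(i)
            cv2.circle(canvas, (p.x, p.y), 2, (255, 255, 255), -1)
    if writer is None:
        h, w = canvas.shape[:2]
        writer = cv2.VideoWriter("landmark_only.avi",
                                 cv2.VideoWriter_fourcc(*"MJPG"), 25.0, (w, h))
    writer.write(canvas)

cap.release()
if writer is not None:
    writer.release()
```

The resulting video reduces facial motion to a small set of moving points, which is the kind of stimulus the study presented to participants for emotion classification.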


Metadata
Title
Representing Affective Facial Expressions for Robots and Embodied Conversational Agents by Facial Landmarks
Authors
Caixia Liu
Jaap Ham
Eric Postma
Cees Midden
Bart Joosten
Martijn Goudbeek
Publication date
01.11.2013
Publisher
Springer Netherlands
Published in
International Journal of Social Robotics / Issue 4/2013
Print ISSN: 1875-4791
Electronic ISSN: 1875-4805
DOI
https://doi.org/10.1007/s12369-013-0208-9
