
Affect recognition for interactive companions: challenges and design in real world scenarios

  • Original Paper
  • Published in: Journal on Multimodal User Interfaces

Abstract

Affect sensitivity is an important requirement for artificial companions to be capable of engaging in social interaction with human users. This paper provides a general overview of the issues arising from the design of an affect recognition framework for artificial companions. Limitations and challenges are discussed with respect to other capabilities of companions, and a real-world scenario in which an iCat robot plays chess with children is presented. In this scenario, the affective states that a robot companion should be able to recognise are identified, and the non-verbal behaviours affected by the occurrence of these states in the children are investigated. The experimental results aim to provide the foundation for the design of an affect recognition system for a game companion: in this interaction scenario, children tend to look at the iCat and smile more both when they experience a positive feeling and when they are engaged with the iCat.
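To make the abstract's finding concrete, the sketch below shows one way gaze-at-robot and smile cues could be turned into a simple engagement detector over a window of annotated video frames. This is a minimal illustration only: the `Frame` type, the `detect_engagement` function, and the threshold values are assumptions for exposition, not the system or parameters used in the paper.

```python
# Illustrative sketch (not the authors' system): classify a window of
# annotated frames as "engaged with the iCat" from the two cues the
# abstract highlights: gaze directed at the robot and smiling.
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    looking_at_icat: bool  # gaze annotation for one video frame
    smiling: bool          # smile annotation for one video frame


def detect_engagement(frames: List[Frame],
                      gaze_threshold: float = 0.5,
                      smile_threshold: float = 0.1) -> bool:
    """Flag engagement when the proportions of gaze-at-robot frames and
    smile frames both exceed thresholds (placeholder values, hand-picked
    here purely for illustration)."""
    n = len(frames)
    if n == 0:
        return False
    gaze_ratio = sum(f.looking_at_icat for f in frames) / n
    smile_ratio = sum(f.smiling for f in frames) / n
    return gaze_ratio >= gaze_threshold and smile_ratio >= smile_threshold


# Example: a 5-frame window where the child mostly looks at the robot
window = [Frame(True, False), Frame(True, True), Frame(True, False),
          Frame(False, False), Frame(True, True)]
print(detect_engagement(window))  # True
```

A real system along the lines the paper describes would combine such task- and social-interaction-based features within a trained classifier rather than fixed thresholds.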



Author information


Correspondence to Ginevra Castellano.


Cite this article

Castellano, G., Leite, I., Pereira, A. et al. Affect recognition for interactive companions: challenges and design in real world scenarios. J Multimodal User Interfaces 3, 89–98 (2010). https://doi.org/10.1007/s12193-009-0033-5

