ABSTRACT
We present a novel framework for recognizing body expressions from human postures. The proposed system analyzes the spectral difference between an expressive animation and a neutral one. The second problem addressed in this paper is the formalization of neutral animation, which has not been tackled before and can be useful for animation synthesis, expression recognition, and related domains. We propose a cost function to synthesize a neutral motion from an expressive one; it formalizes a neutral motion by combining the distance and the acceleration of each body joint over the course of the motion. We have evaluated our approach on several databases with heterogeneous movements and body expressions, and our body expression recognition results exceed the state of the art on the evaluated databases.
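The two ingredients described above can be sketched in code. This is only an illustrative sketch, not the authors' implementation: the exact cost function and spectral feature are not given in the abstract, so the weighting scheme, the `rest_pose` reference posture, and the use of a magnitude FFT along the time axis are all assumptions made here for concreteness.

```python
import numpy as np

def neutral_cost(motion, rest_pose, w_dist=1.0, w_acc=1.0):
    """Hypothetical cost of a candidate motion with respect to "neutrality".

    motion:    array of shape (T, J, 3) -- T frames, J joints, 3D positions.
    rest_pose: array of shape (J, 3)    -- an assumed reference neutral posture.

    Combines (a) the distance of each joint from the reference posture and
    (b) the acceleration of each joint over time, so that a low-cost motion
    stays close to rest and moves smoothly.
    """
    # (a) mean squared distance of every joint from the rest pose
    dist = np.mean(np.sum((motion - rest_pose) ** 2, axis=-1))
    # (b) mean squared acceleration (second finite difference over time)
    acc = np.diff(motion, n=2, axis=0)
    acc_term = np.mean(np.sum(acc ** 2, axis=-1)) if len(acc) else 0.0
    return w_dist * dist + w_acc * acc_term

def spectral_residue(expressive, neutral):
    """Per-joint magnitude-spectrum difference between two motions
    of equal length, usable as an expression feature."""
    E = np.abs(np.fft.rfft(expressive, axis=0))  # FFT over the time axis
    N = np.abs(np.fft.rfft(neutral, axis=0))
    return E - N  # shape (T // 2 + 1, J, 3)
```

A classifier would then be trained on the flattened residue rather than on the raw poses, which is the intuition behind comparing an expressive animation against its synthesized neutral counterpart.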
Toward an efficient body expression recognition based on the synthesis of a neutral movement