DOI: 10.1145/3136755.3136763

Research Article

Toward an efficient body expression recognition based on the synthesis of a neutral movement

Published: 03 November 2017

ABSTRACT

We present a novel framework for recognizing body expressions from human postures. The proposed system analyzes the spectral difference between an expressive animation and a neutral one. The second problem addressed in this paper is the formalization of a neutral animation. This formalization has not been tackled before, and it can benefit domains such as animation synthesis and expression recognition. We propose a cost function that synthesizes a neutral motion from an expressive one: it formalizes neutrality by computing the traveled distance of each body joint and combining it with that joint's acceleration over the course of the motion. We evaluated our approach on several databases containing heterogeneous movements and body expressions, and our body expression recognition results exceed the state of the art on the evaluated databases.
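The abstract's two ingredients can be sketched in code. The following is a minimal illustration, not the paper's actual equations: the function names (`neutral_cost`, `spectral_residue`), the weighting scheme, and the use of a second finite difference for acceleration are all assumptions made for exposition. It only shows the general idea of scoring a candidate motion by joint distance plus joint acceleration, and of comparing expressive and neutral motions in the frequency domain.

```python
import numpy as np

def neutral_cost(motion, reference, w_dist=1.0, w_acc=1.0):
    """Illustrative cost (hypothetical, not the paper's formula):
    distance of each joint to the expressive reference, combined with
    the joint's acceleration -- low acceleration suggesting a more
    'neutral' movement. motion, reference: (T, J, 3) arrays of J joint
    positions over T frames."""
    # Mean per-joint, per-frame Euclidean distance to the reference motion.
    dist = np.linalg.norm(motion - reference, axis=-1).mean()
    # Second finite difference along time approximates joint acceleration.
    acc = np.diff(motion, n=2, axis=0)
    return w_dist * dist + w_acc * np.linalg.norm(acc, axis=-1).mean()

def spectral_residue(expressive, neutral):
    """Per-joint difference of Fourier magnitude spectra along the time
    axis -- one plausible reading of the 'spectral difference between an
    expressive and a neutral animation'."""
    fe = np.abs(np.fft.rfft(expressive, axis=0))
    fn = np.abs(np.fft.rfft(neutral, axis=0))
    return fe - fn
```

Under this reading, a classifier would operate on the residue rather than on the raw expressive motion, so that the style component is isolated from the underlying action; again, this is a sketch of the stated idea, not the authors' implementation.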


Published in

ICMI '17: Proceedings of the 19th ACM International Conference on Multimodal Interaction
November 2017, 676 pages
ISBN: 9781450355438
DOI: 10.1145/3136755

Copyright © 2017 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

ICMI '17 paper acceptance rate: 65 of 149 submissions, 44%. Overall acceptance rate: 453 of 1,080 submissions, 42%.
