Research Article · DOI: 10.1145/2663204.2666278

Improved Spatiotemporal Local Monogenic Binary Pattern for Emotion Recognition in The Wild

Published: 12 November 2014

ABSTRACT

Local binary patterns from three orthogonal planes (LBP-TOP) have been widely used for emotion recognition in the wild, but they suffer under illumination and pose changes. This paper focuses on improving the robustness of LBP-TOP in unconstrained environments. The recently proposed spatiotemporal local monogenic binary pattern (STLMBP) has been shown to work promisingly under varying illumination conditions. This paper therefore proposes an improved spatiotemporal feature descriptor based on STLMBP. The improved descriptor uses not only magnitude and orientation but also phase information, which provides complementary cues. Specifically, the magnitude, orientation, and phase images are obtained with an effective monogenic filter, and the resulting feature vectors are fused by multiple kernel learning. STLMBP and the proposed method are evaluated on the Acted Facial Expressions in the Wild (AFEW) dataset as part of the 2014 Emotion Recognition in the Wild Challenge. Both achieve competitive results, with accuracy gains of 6.35% and 7.65%, respectively, over the video challenge baseline (LBP-TOP).
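The monogenic decomposition that STLMBP builds on (Felsberg and Sommer's monogenic signal) can be sketched compactly. The NumPy fragment below is a minimal, illustrative sketch rather than the authors' implementation: it bandpasses a grayscale frame with an isotropic log-Gabor filter, applies the Riesz transform in the frequency domain, and returns the magnitude, orientation, and phase images from which binary codes would then be computed on three orthogonal planes. The function name and the log-Gabor parameters (wavelength, sigma_on_f) are assumptions chosen for illustration, not settings taken from the paper.

import numpy as np

def monogenic_components(img, wavelength=4.0, sigma_on_f=0.55):
    """Illustrative monogenic decomposition of a grayscale frame.

    Bandpasses the image with an isotropic log-Gabor filter, applies the
    Riesz transform in the frequency domain, and returns the magnitude,
    orientation and phase images. Parameter values are assumptions.
    """
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape

    # Normalised frequency grids (zero frequency at index [0, 0]).
    U, V = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.sqrt(U ** 2 + V ** 2)
    radius[0, 0] = 1.0  # avoid division by zero / log(0) at the DC term

    # Isotropic log-Gabor bandpass filter.
    f0 = 1.0 / wavelength
    log_gabor = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    log_gabor[0, 0] = 0.0

    # Riesz transform transfer functions.
    H1 = 1j * U / radius
    H2 = 1j * V / radius

    spectrum = np.fft.fft2(img) * log_gabor
    f = np.real(np.fft.ifft2(spectrum))         # even (bandpassed) component
    h1 = np.real(np.fft.ifft2(spectrum * H1))   # odd components (Riesz pair)
    h2 = np.real(np.fft.ifft2(spectrum * H2))

    magnitude = np.sqrt(f ** 2 + h1 ** 2 + h2 ** 2)
    orientation = np.arctan2(h2, h1)
    phase = np.arctan2(np.sqrt(h1 ** 2 + h2 ** 2), f)
    return magnitude, orientation, phase

In the pipeline described in the abstract, LBP-TOP-style histograms would be extracted from the magnitude, orientation, and phase volumes of each video, and the resulting feature vectors fused by multiple kernel learning.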


Published in
ICMI '14: Proceedings of the 16th International Conference on Multimodal Interaction
November 2014, 558 pages
ISBN: 9781450328852
DOI: 10.1145/2663204
Copyright © 2014 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates
ICMI '14 Paper Acceptance Rate: 51 of 127 submissions, 40%
Overall Acceptance Rate: 453 of 1,080 submissions, 42%
