DOI: 10.1145/2663204.2666275

Emotion Recognition In The Wild Challenge 2014: Baseline, Data and Protocol

Published: 12 November 2014

ABSTRACT

The Second Emotion Recognition In The Wild Challenge (EmotiW) 2014 consists of an audio-video based emotion classification challenge that mimics real-world conditions. Traditionally, emotion recognition has been performed on data captured in constrained, lab-controlled environments. While such data was a good starting point, it poorly represents the environments and conditions faced in real-world situations. With the exponential increase in the number of video clips uploaded online, it is worthwhile to explore the performance of emotion recognition methods that work 'in the wild'. The goal of this Grand Challenge is to carry forward the common platform defined during EmotiW 2013 for the evaluation of emotion recognition methods in real-world conditions. The database used in the 2014 challenge is the Acted Facial Expressions in the Wild (AFEW) 4.0 database, which has been collected from movies depicting close-to-real-world conditions. The paper describes the data partitions, the baseline method and the experimental protocol.
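To make the evaluation protocol concrete, below is a minimal sketch, not taken from the paper, of how a participant might score predicted clip labels against ground-truth labels on a validation split. Overall classification accuracy as the metric and the seven AFEW emotion classes are assumptions here, as are all function and variable names.

from collections import Counter

# Assumed label set for AFEW-style clips (not spelled out in the abstract).
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

def accuracy(y_true, y_pred):
    """Overall classification accuracy: fraction of clips labelled correctly."""
    if len(y_true) != len(y_pred):
        raise ValueError("label lists must be the same length")
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def confusion_counts(y_true, y_pred):
    """Counts of (true label, predicted label) pairs, e.g. to build a confusion matrix."""
    return Counter(zip(y_true, y_pred))

if __name__ == "__main__":
    # Toy labels for illustration only.
    truth = ["Happy", "Sad", "Angry", "Neutral"]
    preds = ["Happy", "Neutral", "Angry", "Neutral"]
    print(f"Accuracy: {accuracy(truth, preds):.2%}")  # Accuracy: 75.00%
    print(confusion_counts(truth, preds))

In a challenge of this kind, participants typically receive labels only for the Train and Validation partitions and submit Test-set predictions to the organisers, who hold back the Test labels and compute the score.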


Published in

ICMI '14: Proceedings of the 16th International Conference on Multimodal Interaction
November 2014
558 pages
ISBN: 9781450328852
DOI: 10.1145/2663204

Copyright © 2014 ACM


        Publisher

        Association for Computing Machinery

        New York, NY, United States



        Qualifiers

        • research-article

        Acceptance Rates

ICMI '14 paper acceptance rate: 51 of 127 submissions (40%). Overall acceptance rate: 453 of 1,080 submissions (42%).
