Published in: Journal on Multimodal User Interfaces 2/2016

01.06.2016 | Original Paper

Revisiting the EmotiW challenge: how wild is it really?

Classification of human emotions in movie snippets based on multiple features

Authors: Markus Kächele, Martin Schels, Sascha Meudt, Günther Palm, Friedhelm Schwenker

Abstract

The focus of this work is emotion recognition in the wild based on a multitude of different audio, visual, and meta features. To this end, a method is proposed that optimizes multi-modal fusion architectures using evolutionary computing. Extensive uni- and multi-modal experiments show the discriminative power of each computed feature set and fusion architecture. Furthermore, we summarize the EmotiW 2013/2014 challenges, review the conclusions that have been drawn, and compare our results with the state of the art on this dataset.
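The abstract names evolutionary computing as the tool for optimizing multi-modal fusion architectures, but the architectures themselves are not detailed in this excerpt. Purely as a minimal sketch of the general idea, the snippet below evolves a weight vector for decision-level fusion of per-modality classifier scores against validation accuracy; the weighted-sum fusion scheme, the mutation-only selection loop, and all names are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)


def fuse(weights, score_mats):
    """Weighted decision-level fusion of per-modality class-score matrices.

    Each matrix has shape (n_samples, n_classes); weights are clipped to be
    non-negative and normalized so the fused output stays a score matrix.
    """
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)
    w = w / (w.sum() + 1e-12)
    return sum(wi * s for wi, s in zip(w, score_mats))


def fitness(weights, score_mats, labels):
    """Validation accuracy of the fused argmax decision."""
    return float(np.mean(fuse(weights, score_mats).argmax(axis=1) == labels))


def evolve_fusion_weights(score_mats, labels, pop_size=30, generations=50, sigma=0.1):
    """Toy evolutionary search: keep the fitter half, refill by Gaussian mutation."""
    pop = rng.random((pop_size, len(score_mats)))
    for _ in range(generations):
        scores = np.array([fitness(ind, score_mats, labels) for ind in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]  # survivors
        pop = np.vstack([elite, elite + rng.normal(0.0, sigma, elite.shape)])
    scores = np.array([fitness(ind, score_mats, labels) for ind in pop])
    return pop[scores.argmax()]


# Toy usage: three modalities, seven emotion classes, 200 synthetic validation clips.
labels = rng.integers(0, 7, size=200)
score_mats = [rng.random((200, 7)) + 0.6 * np.eye(7)[labels] for _ in range(3)]
best = evolve_fusion_weights(score_mats, labels)
print("fused validation accuracy:", fitness(best, score_mats, labels))
```

In the paper the search space is a whole fusion architecture rather than a single weight vector, so this sketch conveys only the fitness-driven search loop that evolutionary optimization of a fusion system implies.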

Metadata
Title
Revisiting the EmotiW challenge: how wild is it really?
Classification of human emotions in movie snippets based on multiple features
Authors
Markus Kächele
Martin Schels
Sascha Meudt
Günther Palm
Friedhelm Schwenker
Publication date
01.06.2016
Publisher
Springer International Publishing
Published in
Journal on Multimodal User Interfaces / Issue 2/2016
Print ISSN: 1783-7677
Electronic ISSN: 1783-8738
DOI
https://doi.org/10.1007/s12193-015-0202-7
