Published in: International Journal of Machine Learning and Cybernetics 4/2020

20-01-2020 | Original Article

Emotion recognition using multimodal deep learning in multiple psychophysiological signals and video

Authors: Zhongmin Wang, Xiaoxiao Zhou, Wenlang Wang, Chen Liang

Abstract

Emotion recognition has attracted great interest. Numerous approaches have been proposed, most of which focus on visual, acoustic, or psychophysiological information individually. Although more recent research has considered multimodal approaches, the individual modalities are often combined only by simple fusion or are fused directly at the feature level with deep learning networks. In this paper, we propose an approach that trains several specialist networks and employs deep learning techniques to fuse the features of the individual modalities: a multimodal deep belief network (MDBN) that optimizes and fuses a unified psychophysiological representation derived from the features of multiple psychophysiological signals, a bimodal deep belief network (BDBN) that extracts representative visual features from the features of a video stream, and a second BDBN that learns high-level multimodal features from the unified features of the two modalities. Experiments on the BioVid Emo DB database achieve 80.89% accuracy, outperforming state-of-the-art approaches. The results demonstrate that the proposed approach alleviates the feature redundancy and loss of key features caused by multimodal fusion.
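The abstract describes a two-stage scheme: a specialist network compresses each modality's features, and a further network learns a joint representation of the concatenated outputs. The full text is paywalled, so the sketch below is not the authors' implementation; it illustrates the general pattern with minimal Bernoulli RBMs (the building block of deep belief networks) trained by one-step contrastive divergence, on random placeholder features and hypothetical dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with 1-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def train(self, data, epochs=50):
        for _ in range(epochs):
            h0 = self.hidden_probs(data)
            h0_sample = (rng.random(h0.shape) < h0).astype(float)
            v1 = sigmoid(h0_sample @ self.W.T + self.b)   # reconstruction
            h1 = self.hidden_probs(v1)
            # CD-1 update: positive phase minus negative phase
            self.W += self.lr * (data.T @ h0 - v1.T @ h1) / len(data)
            self.b += self.lr * (data - v1).mean(axis=0)
            self.c += self.lr * (h0 - h1).mean(axis=0)

# Toy feature matrices standing in for the two modalities (random placeholders).
physio = rng.random((32, 20))   # unified psychophysiological features
video = rng.random((32, 30))    # visual features from the video stream

# One specialist RBM per modality compresses that modality's features ...
rbm_physio = RBM(20, 8)
rbm_video = RBM(30, 8)
rbm_physio.train(physio)
rbm_video.train(video)

# ... then a fusion RBM learns a joint representation of the concatenation.
joint_in = np.hstack([rbm_physio.hidden_probs(physio),
                      rbm_video.hidden_probs(video)])
rbm_fusion = RBM(16, 10)
rbm_fusion.train(joint_in)

fused = rbm_fusion.hidden_probs(joint_in)  # one 10-d multimodal vector per sample
print(fused.shape)
```

In a full DBN, each stage would stack several such RBMs and the fused representation would feed a classifier (the paper reports an SVM-based comparison via LIBSVM in its references); here a single RBM per stage keeps the fusion pattern visible.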

Metadata
Title
Emotion recognition using multimodal deep learning in multiple psychophysiological signals and video
Authors
Zhongmin Wang
Xiaoxiao Zhou
Wenlang Wang
Chen Liang
Publication date
20-01-2020
Publisher
Springer Berlin Heidelberg
Published in
International Journal of Machine Learning and Cybernetics / Issue 4/2020
Print ISSN: 1868-8071
Electronic ISSN: 1868-808X
DOI
https://doi.org/10.1007/s13042-019-01056-8
