ISCA Archive Interspeech 2017

Attentive Convolutional Neural Network Based Speech Emotion Recognition: A Study on the Impact of Input Features, Signal Length, and Acted Speech

Michael Neumann, Ngoc Thang Vu

Speech emotion recognition is an important and challenging task in the realm of human-computer interaction. Prior work has proposed a variety of models and feature sets for training such systems. In this work, we conduct extensive experiments using an attentive convolutional neural network with a multi-view learning objective function. We compare system performance using different lengths of the input signal, different types of acoustic features, and different types of emotional speech (improvised/scripted). Our experimental results on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database reveal that the recognition performance strongly depends on the type of speech data, independent of the choice of input features. Furthermore, we achieve state-of-the-art results on the improvised speech data of IEMOCAP.
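The model described in the abstract combines a convolutional front end over frame-level acoustic features with attention-based pooling over time. The following PyTorch sketch illustrates one common way to realise such an attentive CNN; it is an illustration only, not the authors' exact architecture, and the layer sizes, kernel widths, 26-dimensional log-Mel input, and four-class output are assumptions for the example (the multi-view objective is not shown).

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveCNN(nn.Module):
    """Sketch: CNN over acoustic frames with attention pooling (assumed dimensions)."""
    def __init__(self, n_features=26, n_filters=100, kernel_sizes=(3, 5, 7), n_classes=4):
        super().__init__()
        # One 1-D convolution per kernel size, applied along the time axis.
        self.convs = nn.ModuleList(
            nn.Conv1d(n_features, n_filters, k, padding=k // 2) for k in kernel_sizes
        )
        hidden = n_filters * len(kernel_sizes)
        # Attention layer: one scalar score per time frame.
        self.attention = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, n_features) -> (batch, n_features, time) for Conv1d
        x = x.transpose(1, 2)
        h = torch.cat([F.relu(conv(x)) for conv in self.convs], dim=1)  # (batch, hidden, time)
        h = h.transpose(1, 2)                                           # (batch, time, hidden)
        # Softmax over time yields attention weights; the weighted sum pools the sequence.
        alpha = F.softmax(self.attention(h), dim=1)                     # (batch, time, 1)
        pooled = (alpha * h).sum(dim=1)                                 # (batch, hidden)
        return self.classifier(pooled)

# Example: a batch of 8 utterances, each 500 frames of 26-dimensional log-Mel features.
logits = AttentiveCNN()(torch.randn(8, 500, 26))
print(logits.shape)  # torch.Size([8, 4])

Attention pooling replaces mean or max pooling over time, letting the classifier weight emotionally salient frames more heavily; different input signal lengths simply change the time dimension of the input tensor.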


doi: 10.21437/Interspeech.2017-917

Cite as: Neumann, M., Vu, N.T. (2017) Attentive Convolutional Neural Network Based Speech Emotion Recognition: A Study on the Impact of Input Features, Signal Length, and Acted Speech. Proc. Interspeech 2017, 1263-1267, doi: 10.21437/Interspeech.2017-917

@inproceedings{neumann17_interspeech,
  author={Michael Neumann and Ngoc Thang Vu},
  title={{Attentive Convolutional Neural Network Based Speech Emotion Recognition: A Study on the Impact of Input Features, Signal Length, and Acted Speech}},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1263--1267},
  doi={10.21437/Interspeech.2017-917}
}