ISCA Archive Interspeech 2018

Deep Neural Networks for Emotion Recognition Combining Audio and Transcripts

Jaejin Cho, Raghavendra Pappagari, Purva Kulkarni, Jesús Villalba, Yishay Carmiel, Najim Dehak

In this paper, we propose to improve emotion recognition by combining acoustic information and conversation transcripts. On the one hand, an LSTM network was used to detect emotion from acoustic features such as F0, shimmer, jitter, and MFCCs. On the other hand, a multi-resolution CNN was used to detect emotion from word sequences. This CNN consists of several parallel convolutions with different kernel sizes to exploit contextual information at different levels. A temporal pooling layer aggregates the hidden representations of the different words into a single sequence-level embedding, from which we compute the emotion posteriors. We optimized a weighted sum of classification and verification losses. The verification loss brings embeddings of the same emotion closer together while separating embeddings of different emotions. We also compared our CNN with state-of-the-art text-based hand-crafted features (e-vector). We evaluated our approach on the USC-IEMOCAP dataset as well as on a dataset of US English telephone speech. In the former, we used human transcripts, while in the latter, we used ASR transcripts. The results showed that fusing audio and transcript information improved unweighted accuracy by 24% relative for IEMOCAP and 3.4% relative for the telephone data, compared to a single acoustic system.
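
The following is a minimal PyTorch sketch of the text branch described in the abstract: parallel 1-D convolutions with different kernel sizes over word embeddings, temporal pooling into a sequence-level embedding, and a weighted sum of a classification loss and a verification loss. The kernel sizes, dimensions, loss weight, and the contrastive form of the verification loss are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiResolutionTextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, num_filters=128,
                 kernel_sizes=(3, 5, 7), num_emotions=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Parallel convolutions exploit context at different resolutions.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k, padding=k // 2)
            for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_emotions)

    def forward(self, word_ids):
        # word_ids: (batch, seq_len)
        x = self.embed(word_ids).transpose(1, 2)   # (batch, embed_dim, seq_len)
        feats = [F.relu(conv(x)) for conv in self.convs]
        h = torch.cat(feats, dim=1)                # (batch, filters * K, seq_len)
        embedding = h.mean(dim=2)                  # temporal pooling -> sequence-level embedding
        logits = self.classifier(embedding)        # emotion posteriors (pre-softmax)
        return embedding, logits


def verification_loss(embeddings, labels, margin=1.0):
    # Contrastive-style loss (an assumed form): pull same-emotion embeddings
    # together, push different-emotion embeddings at least `margin` apart.
    dist = torch.cdist(embeddings, embeddings)     # pairwise Euclidean distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos = same * dist.pow(2)
    neg = (1 - same) * F.relu(margin - dist).pow(2)
    # Exclude self-pairs on the diagonal from the average.
    mask = 1 - torch.eye(len(labels), device=embeddings.device)
    return ((pos + neg) * mask).sum() / mask.sum()


if __name__ == "__main__":
    model = MultiResolutionTextCNN(vocab_size=10000)
    words = torch.randint(0, 10000, (8, 50))       # batch of 8 utterances, 50 words each
    labels = torch.randint(0, 4, (8,))
    emb, logits = model(words)
    # Weighted sum of classification and verification losses (weight assumed).
    loss = F.cross_entropy(logits, labels) + 0.5 * verification_loss(emb, labels)
    loss.backward()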


doi: 10.21437/Interspeech.2018-2466

Cite as: Cho, J., Pappagari, R., Kulkarni, P., Villalba, J., Carmiel, Y., Dehak, N. (2018) Deep Neural Networks for Emotion Recognition Combining Audio and Transcripts. Proc. Interspeech 2018, 247-251, doi: 10.21437/Interspeech.2018-2466

@inproceedings{cho18_interspeech,
  author={Jaejin Cho and Raghavendra Pappagari and Purva Kulkarni and Jesús Villalba and Yishay Carmiel and Najim Dehak},
  title={{Deep Neural Networks for Emotion Recognition Combining Audio and Transcripts}},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={247--251},
  doi={10.21437/Interspeech.2018-2466}
}