ABSTRACT
Speech Emotion Recognition (SER) is one of the challenges in the Natural Language Processing (NLP) area. Many factors are used to identify emotion in speech, such as pitch, intensity, frequency, duration, and the speaker's nationality. This paper implements a speech emotion recognition model specifically for the Thai language, classifying speech into five emotions: Angry, Frustrated, Neutral, Sad, and Happy. The research uses a dataset from the VISTEC-depa AI Research Institute of Thailand containing 21,562 recorded utterances (scripts), divided into 70% training data and 30% test data. We use the Mel spectrogram and Mel-frequency Cepstral Coefficients (MFCC) techniques for feature extraction, together with a 1D Convolutional Neural Network (Conv1D) and a 2D Convolutional Neural Network (Conv2D), to classify emotions. Among these combinations, MFCC with Conv2D provides the highest accuracy, 80.59%, exceeding the baseline study's 71.35%.
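The feature-extraction step described above can be sketched in plain NumPy. This is a minimal illustration of the standard MFCC computation (framing, power spectrum, Mel filterbank, log, DCT), not the authors' implementation; the sampling rate, frame length, hop size, and filter counts below are assumed values for illustration only.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the Mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                 # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                 # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=40, n_mfcc=13):
    # 1) Frame the signal with a Hann window
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # 2) Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # 3) Mel filterbank energies, then log compression
    mel_energy = np.log(power @ mel_filterbank(n_mels, n_fft, sr).T + 1e-10)
    # 4) Type-II DCT to decorrelate -> cepstral coefficients
    n = np.arange(n_mels)
    basis = np.cos(np.pi / n_mels * (n + 0.5)[None, :]
                   * np.arange(n_mfcc)[:, None])
    return mel_energy @ basis.T               # shape: (n_frames, n_mfcc)

# One second of synthetic audio standing in for a real utterance
x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(x)
print(feats.shape)  # (97, 13)
```

In the study, feature matrices of this kind (MFCC or Mel spectrogram) are then fed to a Conv1D or Conv2D network that outputs one of the five emotion classes.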