research-article · DOI: 10.1145/3508230.3508238

Feature Extraction Technique Based on Conv1D and Conv2D Network for Thai Speech Emotion Recognition

Published: 08 March 2022

ABSTRACT

Speech emotion recognition is one of the challenges in the Natural Language Processing (NLP) area. Many factors can be used to identify emotion in speech, such as pitch, intensity, frequency, duration, and the speaker's nationality. This paper implements a speech emotion recognition model specifically for the Thai language, classifying speech into five emotions: Angry, Frustrated, Neutral, Sad, and Happy. The research uses a dataset from the VISTEC-depa AI Research Institute of Thailand containing 21,562 recorded utterances (scripts), split into 70% training data and 30% test data. We use the Mel spectrogram and Mel-frequency cepstral coefficients (MFCC) for feature extraction, together with a 1D convolutional neural network (Conv1D) and a 2D convolutional neural network (Conv2D) to classify emotions. MFCC with Conv2D gives the highest accuracy, 80.59%, exceeding the baseline study's 71.35%.
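The abstract does not include code, but the pipeline it describes (Mel spectrogram / MFCC features fed to a Conv1D or Conv2D classifier over five classes) can be sketched as below. This is a minimal sketch assuming librosa for feature extraction and Keras for the model; the sample rate, coefficient counts, and layer sizes are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch of the described pipeline (assumed tools: librosa, Keras).
# All hyperparameters (sr, n_mfcc, n_mels, layer sizes) are assumptions,
# not the values reported in the paper.
import numpy as np
import librosa
import tensorflow as tf

def extract_features(wav_path, sr=16000, n_mfcc=40, n_mels=128):
    """Return (mfcc, mel_db) features for one utterance."""
    y, _ = librosa.load(wav_path, sr=sr)
    # MFCC matrix of shape (n_mfcc, frames); its frame axis suits Conv1D.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Log-scaled Mel spectrogram: a 2-D "image" suited to Conv2D.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    return mfcc, mel_db

def build_conv2d_model(input_shape, n_classes=5):
    """Hypothetical Conv2D classifier; layer sizes are illustrative."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),  # e.g. (n_mels, frames, 1)
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# The paper reports a 70/30 train/test split; with scikit-learn that
# could look like (X = stacked features, y = integer emotion labels):
# X_train, X_test, y_train, y_test = train_test_split(
#     X, y, test_size=0.3, stratify=y, random_state=0)
```

A Conv1D variant would use the MFCC frame sequence (or a time-averaged MFCC vector) as input in place of the 2-D spectrogram; the paper reports MFCC with Conv2D as the best-performing combination.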


Published in

NLPIR '21: Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval
December 2021 · 175 pages
ISBN: 9781450387354
DOI: 10.1145/3508230
Copyright © 2021 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

