2021 | Original Paper | Book Chapter

Emotion Estimating Method by Using Voice and Facial Expression Parameters


Abstract

This study proposes a method for estimating emotion from facial-expression and speech features, using fewer machine-learning parameters than previous studies. In the experiment, emotions were evoked in participants by showing emotion-eliciting movies, after which facial-expression and voice parameters were extracted. These parameters were then used to estimate emotion by machine learning: six models were trained using different combinations of objective and explanatory variables. The results show that two-category classification (e.g., positive vs. negative) achieves an accuracy of 93.3% when only voice parameters are used.
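The two-category (positive/negative) classification from voice parameters could be sketched as follows. The abstract does not name the classifier or the exact feature set, so this is a minimal illustrative example: the feature names and training values are hypothetical, and a nearest-centroid rule stands in for whatever learner the authors used.

```python
# Illustrative two-category emotion classification from voice parameters.
# Features and data are hypothetical; a nearest-centroid rule stands in
# for the (unspecified) classifier used in the study.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, centroids):
    """Assign x to the label of the nearest centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical voice parameters per sample:
# [mean F0 (Hz), F0 range (Hz), mean intensity (dB)]
train = {
    "positive": [[220.0, 80.0, 68.0], [235.0, 95.0, 70.0]],
    "negative": [[180.0, 40.0, 60.0], [170.0, 35.0, 58.0]],
}
centroids = {label: centroid(rows) for label, rows in train.items()}

print(classify([225.0, 85.0, 69.0], centroids))  # high-pitched, lively sample
```

In practice, accuracy such as the reported 93.3% would be measured on held-out samples (e.g., cross-validation) rather than on the training data shown here.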


Metadata
Title
Emotion Estimating Method by Using Voice and Facial Expression Parameters
Author
Kimihiro Yamanaka
Copyright year
2021
DOI
https://doi.org/10.1007/978-3-030-84340-3_18