Published in: Multimedia Systems 2/2018

15.03.2017 | Regular Paper

Affective content analysis of music emotion through EEG

Authors: Jia-Lien Hsu, Yan-Lin Zhen, Tzu-Chieh Lin, Yi-Shiuan Chiu


Abstract

Emotion recognition of music objects is a promising and important research issue in the field of music information retrieval. Usually, music emotion recognition can be considered a training/classification problem. However, even given a benchmark (training data with ground truth) and effective classification algorithms, music emotion recognition remains a challenging problem. Most previous relevant work focuses only on acoustic music content without considering individual differences (i.e., personalization issues). In addition, assessment of emotions is usually self-reported (e.g., emotion tags), which may introduce inaccuracy and inconsistency. Electroencephalography (EEG) is a non-invasive brain-machine interface which allows external machines to sense neurophysiological signals from the brain without surgery. Such unintrusive EEG signals, captured from the central nervous system, have been utilized for exploring emotions. This paper proposes an evidence-based and personalized model for music emotion recognition. In the training phase, for model construction and personalized adaptation, based on the IADS (the International Affective Digitized Sounds system, a set of acoustic emotional stimuli for experimental investigations of emotion and attention), we construct two predictive, generic models \(AN\!N_1\) ("EEG recordings of a standardized group vs. emotions") and \(AN\!N_2\) ("music audio content vs. emotions"). Both models are trained by an artificial neural network. We then collect a subject's EEG recordings while the subject listens to the selected IADS samples, and apply \(AN\!N_1\) to determine the subject's emotion vector. With the generic model and the corresponding individual differences, we construct the personalized model H by the projective transformation.
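The personalization step above applies a projective transformation to arousal-valence points in homogeneous coordinates. A minimal sketch, assuming a 3x3 matrix H has already been estimated from corresponding generic and subject-specific predictions (the numeric values of H below are purely illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 3x3 projective (homography) matrix H; in the paper's
# pipeline it would be estimated from point correspondences between
# the generic model's outputs and the subject's EEG-derived vectors.
H = np.array([[1.05, 0.02, -0.10],
              [0.01, 0.95,  0.08],
              [0.00, 0.00,  1.00]])

def personalize(av, H):
    """Map a generic arousal-valence vector to the personalized
    space by applying H in homogeneous coordinates."""
    x = np.array([av[0], av[1], 1.0])  # lift to homogeneous coords
    y = H @ x
    return y[:2] / y[2]                # project back to 2-D

generic_av = np.array([0.4, -0.2])     # e.g., output of the generic model
print(personalize(generic_av, H))
```

For an affine H (bottom row [0, 0, 1], as above) the division is a no-op; a general projective H additionally allows the perspective-style warps the paper exploits for individual adaptation.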
In the testing phase, given a music object, the processing steps are: (1) extract features from the music audio content; (2) apply \(AN\!N_2\) to calculate the vector in the arousal-valence emotion space; and (3) apply the transformation matrix H to determine the personalized emotion vector. Moreover, for a moderate-length music object, we apply a sliding window to obtain a sequence of personalized emotion vectors; the predicted vectors are then fitted and organized as an emotion trail that reveals the dynamics of the music object's affective content. Experimental results suggest that the proposed approach is effective.
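The sliding-window procedure in the testing phase can be sketched as follows. Here `predict_av` stands in for the trained \(AN\!N_2\) model composed with the personalizing transform; it is an assumed callable, and the dummy predictor in the usage example is purely illustrative:

```python
import numpy as np

def emotion_trail(signal, sr, predict_av, win_s=5.0, hop_s=1.0):
    """Slide a window over an audio signal and collect one
    arousal-valence prediction per window as an emotion trail."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    trail = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win]
        trail.append(predict_av(frame))  # one AV vector per window
    return np.array(trail)               # shape: (n_windows, 2)

# Toy usage: 20 s of a 440 Hz tone and a dummy predictor that maps
# a frame's RMS energy to a point in the AV plane.
sr = 8000
audio = np.sin(2 * np.pi * 440 * np.arange(20 * sr) / sr)
dummy = lambda f: (float(np.sqrt(np.mean(f ** 2))), 0.0)
trail = emotion_trail(audio, sr, dummy)
print(trail.shape)
```

Smoothing or curve-fitting the resulting sequence (as the paper does when organizing it into an emotion trail) then exposes how the affective content evolves over the piece.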


Metadata
Title
Affective content analysis of music emotion through EEG
Authors
Jia-Lien Hsu
Yan-Lin Zhen
Tzu-Chieh Lin
Yi-Shiuan Chiu
Publication date
15.03.2017
Publisher
Springer Berlin Heidelberg
Published in
Multimedia Systems / Issue 2/2018
Print ISSN: 0942-4962
Electronic ISSN: 1432-1882
DOI
https://doi.org/10.1007/s00530-017-0542-0
