Published in: Multimedia Systems 4/2018

03.08.2017 | Regular Paper

Review of data features-based music emotion recognition methods

Authors: Xinyu Yang, Yizhuo Dong, Juan Li


Abstract

The ability of music to induce or convey emotions underpins its importance in human life. Consequently, methods for identifying the high-level emotional state of a music segment from its low-level features have attracted growing research attention. This paper provides a comprehensive review of music emotion recognition methods, offering new insights by organizing them according to the data they use during the modeling phase: music features only, ground-truth data only, or the two in combination. Focusing on the relatively popular methods that combine music features with ground-truth data, we further subdivide the literature according to whether the ground truth is label-type or numerical-type, and trace the development of music emotion recognition along the lines of modeling method and chronology. We then summarize three important current research directions. Although much has been achieved in music emotion recognition, many issues remain; we review these issues and put forward suggestions for future work.
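
As a concrete illustration of the taxonomy above, the sketch below shows the combined "music features + ground-truth data" family of methods: low-level audio features are extracted from each clip and paired with emotion annotations of either type. It is a minimal sketch assuming the librosa and scikit-learn libraries are available; the file names, mood labels, and valence values are hypothetical placeholders, and the SVM models merely stand in for the many modeling methods the review covers, not the authors' own implementation.

    import numpy as np
    import librosa
    from sklearn.svm import SVC, SVR

    def low_level_features(path):
        # Summarize a 30-second clip by the mean of its MFCC frames,
        # one typical low-level audio feature in this literature.
        y, sr = librosa.load(path, duration=30.0)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)

    # Hypothetical clips and annotations (placeholders, not data from the paper).
    X = np.stack([low_level_features(p) for p in ["a.wav", "b.wav", "c.wav"]])

    # Label-type ground truth -> a classification model (here an SVM classifier).
    mood_clf = SVC().fit(X, ["happy", "sad", "happy"])

    # Numerical-type ground truth -> a regression model (here valence scores).
    valence_reg = SVR().fit(X, [0.8, -0.6, 0.5])

    # Predict the emotion of a new clip from its low-level features alone.
    x_new = low_level_features("d.wav").reshape(1, -1)
    print(mood_clf.predict(x_new), valence_reg.predict(x_new))

The choice between the classifier and the regressor mirrors the paper's label-type versus numerical-type split: discrete mood categories call for classification, while continuous annotations such as valence-arousal values call for regression.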

Metadata
Title
Review of data features-based music emotion recognition methods
Authors
Xinyu Yang
Yizhuo Dong
Juan Li
Publication date
03.08.2017
Publisher
Springer Berlin Heidelberg
Published in
Multimedia Systems / Issue 4/2018
Print ISSN: 0942-4962
Electronic ISSN: 1432-1882
DOI
https://doi.org/10.1007/s00530-017-0559-4
