ABSTRACT
Music is composed to be emotionally expressive, and emotional associations provide an especially natural domain for indexing and recommendation in today's vast digital music libraries. But such libraries require powerful automated tools, and the development of systems for the automatic prediction of musical emotion presents myriad challenges. The perceptual nature of musical emotion necessitates the collection of data from human subjects; because the interpretation of emotion varies between listeners, each clip needs to be annotated by multiple subjects. In addition, sharing large music content libraries for the development of such systems, even for academic research, raises complicated legal issues that vary by country. This work presents a new publicly available dataset for music emotion recognition research, along with a baseline system. To address the difficulties of emotion annotation we turned to crowdsourcing, using Amazon Mechanical Turk, and developed a two-stage procedure for filtering out poor-quality workers. The dataset consists entirely of Creative Commons music from the Free Music Archive, which, as the name suggests, can be shared freely without penalty. The final dataset contains 1000 songs, each annotated by a minimum of 10 subjects, making it larger than many currently available music emotion datasets.
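The abstract only names the two-stage worker-filtering procedure; its details are given in the paper itself. As a minimal sketch of the kind of agreement-based filtering and per-clip aggregation such a crowdsourcing pipeline typically involves (the file name, column names, rating scale, and thresholds below are hypothetical illustrations, not taken from the paper), one might flag workers whose ratings deviate strongly from the per-clip consensus and then average the retained ratings:

```python
import csv
from collections import defaultdict
from statistics import mean

# Hypothetical annotation file: one row per (worker, clip) pair with
# valence/arousal ratings. Column names and thresholds are illustrative only.
ANNOTATIONS_CSV = "annotations.csv"
MIN_RATERS_PER_CLIP = 10        # the dataset guarantees >= 10 subjects per clip
MAX_MEAN_ABS_DEVIATION = 2.0    # illustrative cutoff for flagging outlier workers

def load_annotations(path):
    """Group (worker, clip, valence, arousal) ratings by clip."""
    by_clip = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_clip[row["clip_id"]].append(
                (row["worker_id"], float(row["valence"]), float(row["arousal"])))
    return by_clip

def flag_outlier_workers(by_clip):
    """Flag workers whose ratings deviate strongly from the per-clip
    consensus, averaged over all clips they rated."""
    deviations = defaultdict(list)
    for clip_id, ratings in by_clip.items():
        v_mean = mean(v for _, v, _ in ratings)
        a_mean = mean(a for _, _, a in ratings)
        for worker_id, v, a in ratings:
            deviations[worker_id].append(abs(v - v_mean) + abs(a - a_mean))
    return {w for w, d in deviations.items() if mean(d) > MAX_MEAN_ABS_DEVIATION}

def aggregate(by_clip, bad_workers):
    """Per-clip mean valence/arousal over the retained workers."""
    labels = {}
    for clip_id, ratings in by_clip.items():
        kept = [(v, a) for w, v, a in ratings if w not in bad_workers]
        if len(kept) >= MIN_RATERS_PER_CLIP:
            labels[clip_id] = (mean(v for v, _ in kept),
                               mean(a for _, a in kept))
    return labels

if __name__ == "__main__":
    by_clip = load_annotations(ANNOTATIONS_CSV)
    bad = flag_outlier_workers(by_clip)
    labels = aggregate(by_clip, bad)
    print(f"Flagged {len(bad)} workers; labeled {len(labels)} clips")
```

Consensus-deviation filtering of this sort is a common second-stage heuristic in crowdsourced annotation; the paper's actual procedure (e.g., any first-stage qualification test) may differ from this sketch.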