1000 Songs for Emotional Analysis of Music

Published: 22 October 2013

ABSTRACT

Music is composed to be emotionally expressive, and emotional associations provide an especially natural domain for indexing and recommendation in today's vast digital music libraries. But such libraries require powerful automated tools, and the development of systems for automatic prediction of musical emotion presents myriad challenges. The perceptual nature of musical emotion necessitates the collection of data from human subjects. Because the interpretation of emotion varies between listeners, each clip needs to be annotated by a distribution of subjects. In addition, the sharing of large music content libraries for the development of such systems, even for academic research, presents complicated legal issues which vary by country. This work presents a new publicly available dataset for music emotion recognition research and a baseline system. To address the difficulties of emotion annotation we have turned to crowdsourcing, using Amazon Mechanical Turk, and have developed a two-stage procedure for filtering out poor-quality workers. The dataset consists entirely of Creative Commons music from the Free Music Archive, which, as the name suggests, can be shared freely without penalty. The final dataset contains 1000 songs, each annotated by a minimum of 10 subjects, which is larger than many currently available music emotion datasets.
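
The abstract does not spell out the two-stage worker-filtering procedure, so the sketch below shows only one plausible reading of such a pipeline in Python: build a per-song consensus, drop workers whose ratings correlate poorly with it, and average the surviving valence/arousal annotations per song. The record format, function names, and thresholds are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from collections import defaultdict

# Hypothetical annotation records: (worker_id, song_id, valence, arousal),
# e.g. on a 1-9 Self-Assessment Manikin scale; not the released data format.
annotations = [
    ("w1", "song_001", 7.0, 6.0),
    ("w2", "song_001", 6.5, 5.5),
    # ... one row per (worker, song) judgment
]

def filter_and_aggregate(annotations, min_corr=0.3, min_raters=10):
    """Drop workers whose ratings disagree with the per-song consensus,
    then average the surviving ratings per song (illustrative only)."""
    # Consensus stand-in: mean (valence, arousal) per song over all workers.
    by_song = defaultdict(list)
    for worker, song, val, aro in annotations:
        by_song[song].append((val, aro))
    consensus = {s: np.mean(r, axis=0) for s, r in by_song.items()}

    # Worker screen stand-in: keep workers whose valence ratings correlate
    # with the consensus; needs at least two rated songs to compute.
    by_worker = defaultdict(list)
    for worker, song, val, _ in annotations:
        by_worker[worker].append((song, val))
    trusted = set()
    for worker, rated in by_worker.items():
        if len(rated) < 2:
            continue  # too little overlap to judge reliability
        own = [v for _, v in rated]
        ref = [consensus[s][0] for s, _ in rated]
        r = np.corrcoef(own, ref)[0, 1]
        if np.isfinite(r) and r >= min_corr:
            trusted.add(worker)

    # Aggregate: mean (valence, arousal) per song from trusted workers,
    # keeping only songs with at least `min_raters` surviving ratings,
    # mirroring the "minimum of 10 subjects" per song in the dataset.
    kept = defaultdict(list)
    for worker, song, val, aro in annotations:
        if worker in trusted:
            kept[song].append((val, aro))
    return {s: tuple(np.mean(r, axis=0)) for s, r in kept.items()
            if len(r) >= min_raters}
```

With the two toy rows above, no worker has enough overlap to be trusted, so the result is empty; a realistic run would feed in the full set of (worker, song) ratings before aggregation.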

Published in

CrowdMM '13: Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia
October 2013, 44 pages
ISBN: 9781450323963
DOI: 10.1145/2506364
Copyright © 2013 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
