DOI: 10.1145/3071178.3071314

Can you feel it?: evaluation of affective expression in music generated by MetaCompose

Published: 01 July 2017

ABSTRACT

This paper describes an evaluation of the MetaCompose music generator, which is based on evolutionary computation and uses a hybrid evolutionary technique combining FI-2POP and multi-objective optimization. The main objective of MetaCompose is to create music in real time that can express different mood states. The experiment presented here evaluates (i) whether the mood participants perceive in a music clip matches the mood the system intends to express, and (ii) whether participants can identify mood transitions that occur mid-piece. Music clips with static affective states, as well as clips containing transitions, were produced by MetaCompose, and a quantitative user study was performed. Participants were asked to annotate the perceived mood and, in real time, to mark changes in valence. The data collected confirms the hypothesis that people can recognize changes in musical mood and that MetaCompose can express perceptibly different levels of arousal. With regard to valence, we observe that, while it is mostly perceived as intended, changes in arousal also seem to influence perceived valence, suggesting that one or more of the music features MetaCompose associates with arousal affects valence as well.
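
For readers unfamiliar with the technique named in the abstract, the sketch below illustrates the general shape of a Feasible-Infeasible Two-Population (FI-2POP) loop combined with Pareto-based multi-objective selection. It is an illustrative toy in Python, not the MetaCompose implementation: the genome (a short sequence of pitch classes), the two objectives, and the no-repeated-notes constraint are hypothetical stand-ins for the system's actual music representation and fitness measures.

import random

# Illustrative sketch of an FI-2POP-style loop with Pareto-based selection.
# The genome, objectives, and constraint are hypothetical stand-ins, not
# MetaCompose's actual music representation or fitness measures.

GENOME_LEN, POP_SIZE, GENERATIONS = 8, 30, 50

def random_genome():
    # A toy "melody": a short sequence of pitch classes.
    return [random.randint(0, 11) for _ in range(GENOME_LEN)]

def constraint_violation(g):
    # Hypothetical constraint: adjacent notes should not repeat.
    return sum(1 for a, b in zip(g, g[1:]) if a == b)

def objectives(g):
    # Two hypothetical objectives, both minimized:
    # small melodic leaps, and (negated) wide pitch range.
    leaps = sum(abs(a - b) for a, b in zip(g, g[1:]))
    return (leaps, -(max(g) - min(g)))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(pop):
    # Crude non-dominated ranking: rank = number of individuals dominating you.
    scores = [objectives(g) for g in pop]
    return [sum(dominates(scores[j], scores[i]) for j in range(len(pop)))
            for i in range(len(pop))]

def mutate(g):
    g = g[:]
    g[random.randrange(len(g))] = random.randint(0, 11)
    return g

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

initial = [random_genome() for _ in range(POP_SIZE)]
feasible = [g for g in initial if constraint_violation(g) == 0]
infeasible = [g for g in initial if constraint_violation(g) > 0]

for _ in range(GENERATIONS):
    parents = (feasible or infeasible) + infeasible
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE)]
    # Offspring migrate to the feasible or infeasible population by constraint check.
    pool_f = feasible + [g for g in offspring if constraint_violation(g) == 0]
    pool_i = infeasible + [g for g in offspring if constraint_violation(g) > 0]
    # Feasible individuals compete on the objectives (Pareto rank) ...
    ranked = sorted(zip(pareto_rank(pool_f), pool_f), key=lambda t: t[0]) if pool_f else []
    feasible = [g for _, g in ranked][:POP_SIZE]
    # ... while infeasible individuals are selected toward satisfying the constraint.
    infeasible = sorted(pool_i, key=constraint_violation)[:POP_SIZE]

best = feasible[0] if feasible else min(infeasible, key=constraint_violation)
print("best genome:", best, "objectives:", objectives(best))

The property worth noting is that infeasible individuals are not discarded: they evolve in their own population toward satisfying the constraints, while feasible individuals compete on the objectives through non-dominated ranking. This combination is what the abstract refers to as a hybrid of FI-2POP and multi-objective optimization.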


Published in

GECCO '17: Proceedings of the Genetic and Evolutionary Computation Conference
July 2017, 1427 pages
ISBN: 9781450349208
DOI: 10.1145/3071178

      Copyright © 2017 ACM


Publisher

Association for Computing Machinery, New York, NY, United States
