ABSTRACT
This paper describes an evaluation of the MetaCompose music generator, which is based on evolutionary computation and uses a hybrid evolutionary technique combining FI-2POP with multi-objective optimization. The main objective of MetaCompose is to create music in real time that can express different mood-states. The experiment presented here evaluates: (i) whether the mood participants perceive in a music score matches the mood the system intends to express and (ii) whether participants can identify transitions in the mood expression that occur mid-piece. Music clips, both with static affective states and with mid-piece transitions, were produced by MetaCompose, and a quantitative user study was performed. Participants annotated the perceived mood of each clip and additionally annotated changes in valence in real time. The data collected confirms the hypothesis that people can recognize changes in music mood and that MetaCompose can express perceptibly different levels of arousal. Regarding valence, we observe that, while it is mostly perceived as intended, changes in arousal also seem to influence perceived valence, suggesting that one or more of the music features MetaCompose associates with arousal affect valence as well.
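The abstract frames mood along the two dimensions of valence and arousal. As an illustrative sketch only (the abstract does not specify MetaCompose's internal representation), a mood-state and a mid-piece transition can be modelled as points in the standard valence-arousal plane; the `Mood` class and quadrant names below are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Mood:
    """A mood-state as a point in the valence-arousal plane, each in [-1, 1].

    Hypothetical representation, following the circumplex model of affect;
    not taken from the MetaCompose implementation.
    """
    valence: float  # -1.0 (negative) .. 1.0 (positive)
    arousal: float  # -1.0 (calm)     .. 1.0 (energetic)

    def quadrant(self) -> str:
        """Name the circumplex quadrant this mood falls in."""
        v = "positive" if self.valence >= 0 else "negative"
        a = "high-arousal" if self.arousal >= 0 else "low-arousal"
        return f"{v}/{a}"

# A mid-piece mood transition can be modelled as a pair of target moods:
# here valence is held constant while arousal changes, as in the clips
# used to probe whether arousal shifts also alter perceived valence.
transition = (Mood(valence=0.6, arousal=-0.4), Mood(valence=0.6, arousal=0.7))
print([m.quadrant() for m in transition])
```

In this framing, the study's valence finding corresponds to the second point of the pair being perceived as shifted in valence even though only its arousal coordinate changed.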
- Can you feel it?: evaluation of affective expression in music generated by MetaCompose