One of the main tasks of any work of art is to convey the emotion conceived by the author to the recipient. When several modalities are used, a synergistic effect occurs, making the target emotional state more likely to be achieved. Reading mostly involves visual perception; nevertheless, it can be supplemented with an audio modality by means of a soundtrack: specially selected music that matches the emotional state of a text fragment.
As the base model for representing emotional state we selected Lövheim's physiologically motivated cube model, which covers eight emotional states instead of the two (positive and negative) commonly used in sentiment analysis.
This article describes an approach to selecting music for the “mood” of a text extract by mapping the text's emotional labels to tags in the Last.fm API, fetching music data for playback, and validating the approach experimentally.
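The mapping step described above can be sketched as follows. The emotion-to-tag dictionary below is hypothetical (the paper's actual tag choices may differ), but the eight emotion labels are the corners of Lövheim's cube, and `tag.gettoptracks` is a real Last.fm API method:

```python
from urllib.parse import urlencode

# Hypothetical mapping from Lövheim-cube emotion labels to Last.fm tags;
# the tags chosen here are illustrative, not the ones used in the paper.
LOVHEIM_TO_LASTFM_TAG = {
    "enjoyment/joy": "happy",
    "interest/excitement": "upbeat",
    "surprise": "quirky",
    "distress/anguish": "sad",
    "fear/terror": "dark ambient",
    "anger/rage": "aggressive",
    "contempt/disgust": "dissonant",
    "shame/humiliation": "melancholy",
}

API_ROOT = "http://ws.audioscrobbler.com/2.0/"

def track_query_url(emotion: str, api_key: str, limit: int = 5) -> str:
    """Build a Last.fm tag.gettoptracks request URL for the emotion
    detected in a text fragment."""
    tag = LOVHEIM_TO_LASTFM_TAG[emotion]
    params = {
        "method": "tag.gettoptracks",
        "tag": tag,
        "api_key": api_key,
        "limit": limit,
        "format": "json",
    }
    return API_ROOT + "?" + urlencode(params)
```

Fetching the resulting URL (with a valid API key) returns a JSON list of top tracks for the tag, from which a soundtrack for the fragment can be assembled.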
Su, F., Markert, K.: From words to senses: a case study in subjectivity recognition. In: Proceedings of COLING 2008, Manchester, UK (2008)
Hu, M., Liu, B.: Mining and summarizing customer reviews. In: Proceedings of KDD 2004 (2004)
Kim, S.M., Hovy, E.H.: Identifying and analyzing judgment opinions. In: Proceedings of the Human Language Technology/North American Association of Computational Linguistics Conference (HLT-NAACL 2006), New York, NY (2006)
Mehrabian, A.: Basic dimensions for a general psychological theory, pp. 39–53 (1980)
Lance, B., et al.: Relation between gaze behavior and attribution of emotion. In: Prendinger, H. (ed.) Intelligent Virtual Agents: 8th International Conference IVA, pp. 1–9 (2008)
Lövheim, H.: A new three-dimensional model for emotions and monoamine neurotransmitters. Med. Hypotheses 78, 341–348 (2012)
Talanov, M., Toschev, A.: Computational emotional thinking and virtual neurotransmitters. Int. J. Synth. Emotions 5(1), 1–8
Affective Text: data annotated for emotions and polarity. Dataset. http://web.eecs.umich.edu/~mihalcea/downloads.html#affective
EmoBank: 10k sentences annotated with Valence, Arousal and Dominance values. Dataset. https://github.com/JULIELab/EmoBank
Automated Soundtrack Generation for Fiction Books Backed by Lövheim’s Cube Emotional Model. Springer International Publishing.