
A system for embodied social active listening to sound and music content

Published: 26 August 2011

Abstract

This article introduces a model and a system for embodied social active listening to sound and music content. The model is based on the simultaneous navigation/exploration of multiple maps, from a low-level physical map up to a high-level emotional, affective map. The article discusses the concepts underlying the model, the system implementing it, and two concrete examples: the Orchestra Explorer, designed for the Museo degli Strumenti Musicali at the Accademia Nazionale di Santa Cecilia, Roma, Italy, and the interactive dance and music performance The Bow is bent and drawn (composer Nicola Ferrari), presented at Casa Paganini, Genova, Italy, on the occasion of the opening concert of the 8th International Conference on New Interfaces for Musical Expression (NIME08), June 4, 2008, and in interactive museum experiences at the Festival della Scienza, Genova. This work is part of current research at Casa Paganini - InfoMus on embodied social active listening to sound and music content through the analysis and processing of expressiveness in human full-body movement and gesture. A user-centric interactive multimedia system architecture is proposed, operating on prerecorded music. From the perspective of the valorization of cultural heritage, this research provides engaging paradigms of interaction with prerecorded music content, enabling a large number of nonexpert users to rediscover musical heritage (e.g., classical and contemporary music) with which they may not be familiar. The research was carried out in the framework of the EU-ICT Project SAME (Sound and Music for Everyone, Everyday, Everywhere, Every Way, www.sameproject.eu, 2008–2010) and has recently been extended to the active experience of audiovisual content, in particular in cultural heritage and museum scenarios: the novel permanent interactive museum exhibition Viaggiatori di Sguardo, designed and developed at Palazzo Ducale, Genova, enables visitors to virtually explore the UNESCO World Heritage Palazzi dei Rolli in Genova.
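To make the layered-map idea more concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of how a user's tracked position and an expressive movement feature could be routed first through a low-level physical map and then through a high-level affective map to produce control parameters for prerecorded music. All class names, zone labels, and parameters below are illustrative assumptions.

```python
# Minimal sketch of a layered-map active-listening model (hypothetical names and values):
# a low-level physical map selects which instrument track a user is "exploring",
# while a high-level affective map turns an expressive feature into a rendering style.

from dataclasses import dataclass

@dataclass
class UserState:
    x: float                   # normalized position in the space, 0..1
    y: float
    quantity_of_motion: float  # normalized expressive feature, 0..1

class PhysicalMap:
    """Low-level map: divides the floor into zones, each tied to an instrument track."""
    def __init__(self, zones):
        self.zones = zones     # list of (name, x_min, x_max, y_min, y_max)

    def locate(self, state: UserState) -> str:
        for name, x0, x1, y0, y1 in self.zones:
            if x0 <= state.x < x1 and y0 <= state.y < y1:
                return name
        return "none"

class AffectiveMap:
    """High-level map: maps an expressive feature onto a coarse affective label."""
    def classify(self, state: UserState) -> str:
        return "energetic" if state.quantity_of_motion > 0.5 else "calm"

class ActiveListeningEngine:
    """Combines both maps into playback parameters for a prerecorded multitrack piece."""
    def __init__(self, physical_map: PhysicalMap, affective_map: AffectiveMap):
        self.physical_map = physical_map
        self.affective_map = affective_map

    def update(self, state: UserState) -> dict:
        zone = self.physical_map.locate(state)
        affect = self.affective_map.classify(state)
        return {
            "active_track": zone,                                # which instrument section is heard
            "volume": min(1.0, 0.3 + state.quantity_of_motion),  # movement drives loudness
            "expressive_style": affect,                          # selects an expressive rendering
        }

if __name__ == "__main__":
    engine = ActiveListeningEngine(
        PhysicalMap([("strings", 0.0, 0.5, 0.0, 1.0), ("winds", 0.5, 1.0, 0.0, 1.0)]),
        AffectiveMap(),
    )
    print(engine.update(UserState(x=0.7, y=0.4, quantity_of_motion=0.8)))
```

In a system of this kind, the physical map would typically be fed by video-based tracking and the expressive feature by movement analysis (e.g., quantity of motion), with the resulting parameters driving an audio rendering engine; the sketch only illustrates how the two map layers can be composed.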



Published in

Journal on Computing and Cultural Heritage, Volume 4, Issue 1, August 2011, 62 pages
ISSN: 1556-4673
EISSN: 1556-4711
DOI: 10.1145/2001416

        Copyright © 2011 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 October 2009
• Revised: 1 May 2010
• Accepted: 1 January 2011
• Published: 26 August 2011
