Research article · DOI: 10.1145/3077981.3078047

Data-Driven Design of Sound for Enhancing the Perception of Expressive Robotic Movement

Published: 28 June 2017

ABSTRACT

Since people communicate intentions and inner states through movement, robots can interact better with humans if they, too, can modify their movements to communicate changing states. These movements, which may be seen as supplementary to those required for workspace tasks, may be termed "expressive." However, robot hardware, which cannot recreate the same range of dynamics as human limbs, often limits expressive capacity. One solution is to augment expressive robotic movement with expressive sound. To that end, this paper presents a study to find a qualitative mapping between movement and sound. Musicians were asked to vocalize sounds in response to animations of a simple simulated upper-body movement performed with different movement qualities, parametrized according to Laban's Effort System. Qualitative labelling and quantitative signal analysis of these sounds suggest a number of correspondences between movement qualities and sound qualities. These correspondences are presented and analyzed here to set up future work that will test user perceptions when expressive movements and sounds are used in conjunction.
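The "quantitative signal analysis" named above is not detailed in this abstract. As a minimal sketch of what such a step could look like, the following assumes Python with the librosa library, illustrative loudness (RMS) and spectral-centroid features, made-up Effort-quality condition names, and a hypothetical file layout; none of this is the paper's actual pipeline.

# Sketch: extract simple sound-quality descriptors from vocalization
# recordings, grouped by the movement quality that prompted them.
# Laban's Effort System describes movement along four bipolar factors:
# Space (direct/indirect), Weight (strong/light), Time (sudden/sustained),
# and Flow (bound/free). The condition names below are illustrative only.
import numpy as np
import librosa

CONDITIONS = ["sudden", "sustained", "strong", "light"]

def sound_features(path):
    """Per-recording descriptors of loudness and brightness."""
    y, sr = librosa.load(path, sr=None)                # keep native sample rate
    rms = librosa.feature.rms(y=y)[0]                  # frame-wise loudness proxy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]  # brightness proxy
    return {
        "duration_s": len(y) / sr,
        "rms_mean": float(np.mean(rms)),
        "rms_std": float(np.std(rms)),                 # loudness variability
        "centroid_mean_hz": float(np.mean(centroid)),
    }

# Hypothetical layout: one WAV file per movement-quality condition.
for cond in CONDITIONS:
    print(cond, sound_features(f"vocalizations/{cond}_01.wav"))

Comparing such feature distributions across movement-quality conditions is one way correspondences between movement qualities and sound qualities could be quantified.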


Published in
MOCO '17: Proceedings of the 4th International Conference on Movement Computing
June 2017 · 206 pages
ISBN: 9781450352093
DOI: 10.1145/3077981
Copyright © 2017 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


Overall Acceptance Rate: 50 of 110 submissions, 45%
