ABSTRACT
Since people communicate intentions and inner states through movement, robots can interact with humans more effectively if they, too, can modify their movements to communicate changing state. These movements, which may be seen as supplementary to those required for workspace tasks, may be termed "expressive." However, robot hardware, which cannot recreate the full range of dynamics of human limbs, often limits expressive capacity. One solution is to augment expressive robotic movement with expressive sound. To that end, this paper presents a study to find a qualitative mapping between movement and sound. Musicians were asked to vocalize sounds in response to animations of a simple simulated upper-body movement performed with different movement qualities, parametrized according to Laban's Effort System. Qualitative labelling and quantitative signal analysis of these sounds suggest a number of correspondences between movement qualities and sound qualities. These correspondences are presented and analyzed here to set up future work that will test user perceptions when expressive movements and sounds are used in conjunction.
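As an illustration of the kind of quantitative signal analysis the abstract describes, the sketch below computes two common audio features (frame-wise RMS energy and spectral centroid) from a recorded vocalization, whose summary statistics could then be compared across movement-quality conditions. This is a minimal sketch only: the paper's actual feature set and toolchain are not specified here, and the Python/librosa implementation, the file name, and the choice of features are all assumptions.

```python
import numpy as np
import librosa

# Load a recorded vocalization (hypothetical file path).
y, sr = librosa.load("vocalization.wav", sr=None, mono=True)

# Loudness-related feature: frame-wise RMS energy.
rms = librosa.feature.rms(y=y)[0]

# Brightness-related feature: spectral centroid (Hz) per frame.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

# Summary statistics that could be compared across movement-quality
# conditions (e.g., Sudden vs. Sustained in Laban's Effort System).
print(f"mean RMS: {rms.mean():.4f}, RMS variability: {rms.std():.4f}")
print(f"mean spectral centroid: {centroid.mean():.1f} Hz")
```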