
2023 | Original Paper | Chapter

3. The Robot Soundscape

Authors: Frederic Anthony Robinson, Oliver Bown, Mari Velonaki

Published in: Cultural Robotics: Social Robots and Their Emergent Cultural Ecologies

Publisher: Springer International Publishing


Abstract

As social robots make their way into human environments, they need to communicate with the humans around them in rich and engaging ways. Sound is one of the core modalities of communication and, beyond speech, affects and engages people across cultures and language barriers. While a growing body of work in human–robot interaction (HRI) investigates the various ways sound affects interactions, a comprehensive map of the many approaches to sound has yet to be created. In this chapter, we therefore ask: “What are the ways robotic agents can communicate with us through sound?”, “How does it affect the listener?” and “What goals should researchers, practitioners and designers have when creating these sonic languages?” These questions are examined with reference to HRI studies and to robotic agents developed in commercial, artistic and academic contexts. The resulting map provides an overview of how sound can be used to enrich human–robot interactions, including sound uttered by robots, sound performed by robots, sound as background to HRI scenarios, sound associated with robot movement, and sound responsive to human actions. We aim to provide researchers and designers with a visual tool that summarises the role sound can play in creating rich and engaging human–robot interactions, and we hope to establish a common framework for thinking about robot sound, encouraging robot makers to engage with sound as a serious part of the robot interface.
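The five sound categories listed in the abstract form the backbone of the chapter's map of the robot soundscape. As a minimal sketch only (the chapter presents this map as a visual tool, not as code), the taxonomy could be modelled as a small enumeration; the names SoundRole and SoundEvent below are hypothetical, introduced here purely for illustration:

    from dataclasses import dataclass
    from enum import Enum, auto

    class SoundRole(Enum):
        """The five roles of robot sound named in the abstract."""
        UTTERED = auto()      # sound uttered by robots (e.g. non-linguistic utterances)
        PERFORMED = auto()    # sound performed by robots (e.g. robotic musicianship)
        BACKGROUND = auto()   # sound as background to HRI scenarios
        MOVEMENT = auto()     # sound associated with robot movement (e.g. servo noise)
        RESPONSIVE = auto()   # sound responsive to human actions

    @dataclass
    class SoundEvent:
        """A record tagging an observed robot sound with its role in an interaction."""
        description: str
        role: SoundRole

    # Example: tag the consequential noise of a moving arm.
    event = SoundEvent("servo whine during arm motion", SoundRole.MOVEMENT)
    print(event.role.name)  # MOVEMENT

A tagging scheme like this is one plausible way a researcher might code observed robot sounds against the chapter's categories; the chapter itself does not prescribe any such encoding.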


go back to reference Weinberg G, Driscoll S (2006) Toward robotic musicianship. Comput Music J 30(4):28–45CrossRef Weinberg G, Driscoll S (2006) Toward robotic musicianship. Comput Music J 30(4):28–45CrossRef
go back to reference Weinberg G, Bretan M, Hoffman G, Driscoll S (2020) Robotic musicianship: embodied artificial creativity and mechatronic musical expression, vol 8. Springer, Heidelberg Weinberg G, Bretan M, Hoffman G, Driscoll S (2020) Robotic musicianship: embodied artificial creativity and mechatronic musical expression, vol 8. Springer, Heidelberg
go back to reference Weinberg G, Driscoll S (2007) The design of a perceptual and improvisational robotic marimba player. In: RO-MAN 2007-The 16th IEEE international symposium on robot and human interactive communication. IEEE, pp 769–774 Weinberg G, Driscoll S (2007) The design of a perceptual and improvisational robotic marimba player. In: RO-MAN 2007-The 16th IEEE international symposium on robot and human interactive communication. IEEE, pp 769–774
go back to reference Williamson MM (1999) Robot arm control exploiting natural dynamics. PhD Thesis, Massachusetts Institute of Technology Williamson MM (1999) Robot arm control exploiting natural dynamics. PhD Thesis, Massachusetts Institute of Technology
go back to reference Wolford J, Gabaldon B, Rivas J, Min B (2019) Condition-based robot audio techniques. Google Patents Wolford J, Gabaldon B, Rivas J, Min B (2019) Condition-based robot audio techniques. Google Patents
go back to reference Woolf S, Bech T (2002) Experiments with reactive robotic sound sculptures. In: ALife VIII: workshop proceedings 2002 P2, vol 3 Woolf S, Bech T (2002) Experiments with reactive robotic sound sculptures. In: ALife VIII: workshop proceedings 2002 P2, vol 3
go back to reference Yoshida S, Sakamoto D, Sugiura Y, Inami M, Igarashi T (2012) Robo-Jockey: robotic dance entertainment for all. In: SIGGRAPH Asia 2012 emerging technologies, pp 1–2 Yoshida S, Sakamoto D, Sugiura Y, Inami M, Igarashi T (2012) Robo-Jockey: robotic dance entertainment for all. In: SIGGRAPH Asia 2012 emerging technologies, pp 1–2
go back to reference Zahray L, Savery R, Syrkett L, Weinberg G (2020) Robot gesture sonification to enhance awareness of robot status and enjoyment of interaction. In: 2020 29th IEEE international conference on robot and human interactive communication (RO-MAN). IEEE, pp 978–985 Zahray L, Savery R, Syrkett L, Weinberg G (2020) Robot gesture sonification to enhance awareness of robot status and enjoyment of interaction. In: 2020 29th IEEE international conference on robot and human interactive communication (RO-MAN). IEEE, pp 978–985
go back to reference Zhang A, Malhotra M, Matsuoka Y (2011) Musical piano performance by the ACT hand. In: 2011 IEEE international conference on robotics and automation. IEEE, pp 3536–3541 Zhang A, Malhotra M, Matsuoka Y (2011) Musical piano performance by the ACT hand. In: 2011 IEEE international conference on robotics and automation. IEEE, pp 3536–3541
go back to reference Zhang R, Jeon M, Park CH, Howard A (2015) Robotic sonification for promoting emotional and social interactions of children with ASD. In: Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction extended abstracts—HRI’15 extended abstracts. ACM Press, Portland, Oregon, USA, pp 111–112. https://doi.org/10.1145/2701973.2702033 Zhang R, Jeon M, Park CH, Howard A (2015) Robotic sonification for promoting emotional and social interactions of children with ASD. In: Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction extended abstracts—HRI’15 extended abstracts. ACM Press, Portland, Oregon, USA, pp 111–112. https://​doi.​org/​10.​1145/​2701973.​2702033
Metadata
Title: The Robot Soundscape
Authors: Frederic Anthony Robinson, Oliver Bown, Mari Velonaki
Copyright Year: 2023
DOI: https://doi.org/10.1007/978-3-031-28138-9_3