DOI: 10.1145/3313831.3376702
CHI Conference Proceedings · Research article

Acoustic Transparency and the Changing Soundscape of Auditory Mixed Reality

Published: 23 April 2020

ABSTRACT

Auditory headsets capable of actively or passively intermixing real and virtual sounds are, in part, acoustically transparent. This paper explores the consequences of acoustic transparency, both for the perception of virtual audio content given the presence of a real-world auditory backdrop, and more broadly for facilitating a wearable, personal, private, always-available soundspace. For validity, we experimentally compare passively acoustically transparent and actively noise-cancelling orientation-tracked auditory headsets across a range of content types, both indoors and outdoors. Our results show differences in presence, realness and externalization for select content types. Via interviews and a survey, we discuss attitudes toward acoustic transparency (e.g. being perceived as safer) and the potential shifts in audio usage that might be precipitated by adoption, and reflect on how such headsets and experiences fit within the area of Mixed Reality.


Supplemental Material

• paper573vf.mp4 (mp4, 86.9 MB)
• paper573pv.mp4 (mp4, 17.3 MB)
• a573-mcgill-presentation.mp4 (mp4, 128.4 MB)


Published in

CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
April 2020, 10,688 pages
ISBN: 9781450367080
DOI: 10.1145/3313831

Copyright © 2020 ACM
Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 6,199 of 26,314 submissions (24%)
