Deaf and Hard-of-hearing Individuals' Preferences for Wearable and Mobile Sound Awareness Technologies

ABSTRACT
To investigate preferences for mobile and wearable sound awareness systems, we conducted an online survey with 201 DHH participants. The survey explored how demographic factors affect perceptions of sound awareness technologies, gauged interest in specific sounds and sound characteristics, solicited reactions to three design scenarios (smartphone, smartwatch, head-mounted display) and two output modalities (visual, haptic), and probed issues related to the social context of use. While most participants were highly interested in being aware of sounds, this interest was modulated by communication preference, that is, whether participants preferred sign language, oral communication, or both. Almost all participants wanted both visual and haptic feedback, and 75% preferred to receive that feedback on separate devices (e.g., haptic on a smartwatch, visual on a head-mounted display). Other findings related to sound type, full captions vs. keywords, sound filtering, notification styles, and social context provide direct guidance for the design of future mobile and wearable sound awareness systems.