10.1145/3613905.3650871 · CHI Conference Proceedings
Work in Progress

What Factors Motivate Culturally Deaf People to Want Assistive Technologies?

Published: 11 May 2024

Abstract

Assistive technologies see high rates of rejection, especially in Deaf communities, due to barriers including cost, discomfort, acclimatization periods, lack of aesthetic appeal, visibility of technology and/or disability, misalignment with user’s identity, loss of independence, self-stigma, social anxiety, social stigma, privacy concerns, surveillance and control of data. To explore what factors might motivate Deaf people to engage with assistive technologies, we interviewed twelve Deaf people about a proposed AI-based assistive technology: a personal assistant device which would understand and respond in Auslan, the sign language of the Australian Deaf community. Thematic analysis of the interview results revealed interviewees would be motivated to use such technology by linguistic and cultural alignment, equality, accessibility and inclusion, independence, life improvements, and aesthetic and technology appeal. We propose these motivating factors should be considered in the development of future assistive technologies, especially those intended for culturally Deaf people.


1 INTRODUCTION

The promise of assistive technologies is to improve everyday life for disabled people1. However, some assistive technologies see low rates of adoption for reasons including social stigma, self-stigma, and misalignment with users’ identities. This can be seen clearly in reactions to the iconic assistive listening devices, hearing aids and cochlear implants, which receive pushback from Deaf and Hard of Hearing people (DHH people)2.

Recent years have seen the development of new types of AI-based assistive technologies. Some have been developed to support blind and visually impaired people [18, 22], and for Deaf people with mixed success [19, 34, 36]; this will be discussed further in Section 2.1. Smart home and personal assistant technologies have also been cast in the role of assistive technologies, particularly for older adults [2] and people with ADHD [28].

We are developing a new high-tech assistive technology for Deaf people: an AI-based personal assistant, in the style of a Google Home or Amazon Alexa, to which Deaf people can sign commands and receive signed responses [23, 38]. We class this as an assistive technology because it has the potential to bridge an access gap, specifically by increasing Deaf people’s access to information and technology. We have undertaken a participatory design approach, as it is important to involve Deaf people in the development of sign language technologies [4]. We began with a series of interviews with twelve Deaf participants, to establish how familiar they are with personal assistants, what functionality they would want in an Auslan3-capable personal assistant, and what factors would motivate or act as barriers to their adoption of such technology. While analysing our data to determine requirements for this technology, we identified that some of the motivating factors could help to address known barriers to assistive technology adoption for this user group. The contribution of this paper is in identifying these factors as elements for developers to consider in overcoming barriers to the adoption of assistive technologies for culturally Deaf people.


2 BACKGROUND

2.1 Acceptance and Rejection of Assistive Technologies

The World Health Organization defines assistive technologies (ATs) as “any product, instrument, equipment or technology adapted or specially designed for improving the functioning of a disabled person” [37]. ATs can have many benefits for users, including improving access, increasing independence, and easing communication and social interactions [7, 13]. However, there are barriers to their acceptance and use. ATs can be expensive, uncomfortable, or require a long acclimatization period or multiple adjustments [26, 29]. They may be considered aesthetically unappealing, especially if they “appear medical” [1, 26]. ATs can also make previously invisible disabilities visible, “outing” the user without their control, which can be validating, or can cause fear of social stigma (“self-stigma”), social anxiety, and social stigma [7, 13, 26, 29]. Networked and AI-based ATs, such as fall detectors, also introduce concerns about privacy, surveillance, and control of data [1].

Assistive technologies intended for DHH people can be divided into three categories: assistive listening devices, sound notification systems, and intermediate communication devices [10]. Assistive listening devices include technologies such as cochlear implants and hearing aids, which face resistance from multiple sub-groups for multiple reasons [27]. Some key examples include:

Cochlear implants and hearing aids, paired with oral-only or oral-first education, are contentious within many Deaf communities around the world. They are seen as perpetuating a narrative that deaf people are “broken”, and need to be fixed to become “normal” (i.e. hearing) [8].

DHH people who perceive HAs or CIs as visible, and therefore making their disability visible, experience self-stigma and social anxiety [7]. This is amplified for those who do not identify as disabled.

Older adults may reject all assistive technologies, including hearing-related ones, due to self-stigma and social stigma of “being old” and/or disabled when this does not align with their identity [1, 31]. Additionally, unappealing aesthetics of “too medical” assistive technologies are seen to indicate a loss of independence [1].

DHH people who enjoy sports may reject hearing ATs which are not designed to be water/sweat-proof [29].

As surgical implants, cochlear implants seem more contentious than hearing aids which can be removed at will [7].

Sound notification systems include flashing smoke alarms, baby cry monitors, shake ’n’ wake alarms and door bell alert systems. Adoption and acceptance of non-sound-based AT devices for DHH people has not been well studied to date. When reviewing sound notification alarm systems intended to wake sleeping DHH people, Smedberg et al. found that of the 35 DHH people surveyed, the most popular system, a fixed strobe light alarm, was in use by less than 40% of respondents [33].

Intermediate communication devices are systems that assist communication between a Deaf person and a hearing person. Examples are sign language processing systems and speech-to-text systems. Sign language processing systems include sign recognition, via video or gloves, which translates signing (or more commonly, transliterates signed spelling) to speech or text, and automatic translation of spoken/written language to sign language via avatars or generated video [34]. These technologies have received mixed reviews, with the most egregious being branded “disability dongles” [11] by Deaf communities [19, 36]. Speech-to-text systems can be seen in applications such as Zoom or Microsoft Teams, and several speech-to-text applications are web-based or can be downloaded onto mobile devices. Automatic speech to text currently lacks accuracy, especially for domain-specific words or in environments with multiple speakers [12]. Readability of displayed text can also affect comprehension [3]. Text-based assistive technologies may not be desirable for signing Deaf people due to either preference or “literacy concerns” [15]. These concerns act as barriers to adoption, although Butler et al. [6] found that DHH students were more accepting of ASR when used in conjunction with other assistive technologies.

Historically, the development of ATs has focused on functionality alone [26]. Designing ATs in ways which consider aesthetic appeal, users’ self-expression and importantly identity, especially along the dimensions of ability/disability and culture, may support adoption of ATs [26, 29]. This principle can be seen in the attempts of some people with hearing loss to "hack" their hearing loss, by developing strategies and technologies which support their AT use, replace ATs which did not meet their needs, or alter ATs to better suit their identity and aesthetics [29, 30].

2.2 Personal Assistants for Deaf and Hard of Hearing People

Voice-activated personal assistants are not currently accessible to many DHH people. Automatic speech recognition systems are often not trained on datasets including deaf voices, which can vary from hearing people’s voices, and from other deaf voices [14]. This means that, even for DHH speakers whose speech is considered “good” by hearing listeners, automatic speech recognition produces a high error rate and unpredictable results [14, 16, 32]. Further, for culturally Deaf people, speaking may mean using their second language. Existing research conducted in the USA and Japan has found that DHH people would be interested in a personal assistant they could sign to [16, 17, 21, 35] and signing Deaf participants were interested in a personal assistant which could sign back to them [16, 21].

Our proposed AI AT takes the form of an Auslan-capable personal assistant, which Deaf people can sign to in Auslan, and receive responses in Auslan via a photorealistic avatar [38].


3 METHODOLOGY

Table 1: Participant demographics

Participant     | Gender | Age | Acq.  | Education        | Parents | Family  | Deaf connections
P32             | F      | 58  | 3     | Diploma          | Hearing | DHH     | Community gatherings
TAtkinson       | M      | 54  | N/A   | Graduate diploma | Hearing | Hearing | Social
Sabar           | M      | 29  | 5     | Bachelors        | Hearing | Hearing | Sport, community gatherings, religion/church, school, other
DeafGavin       | M      | 50  | Birth | Diploma          | Deaf    | DHH     | Sport, community gatherings, school
P7              | F      | 60  | 18    | Bachelors        | Hearing | Hearing | Community gatherings, politics, advocacy, life-long friendships
Lobau1805       | M      | 66  | 6     | Bachelors        | Hearing | DHH     | Sport, community gatherings, religion/church, school
Captain Awesome | M      | 34  | 6     | Bachelors        | Hearing | Hearing | Social
P14             | F      | 34  | 18    | Bachelors        | Hearing | Hearing | Sport, community gatherings, school
P3              | M      | 19  | Birth | Diploma          | Deaf    | DHH     | Sport, community gatherings, school
P5              | F      | 30  | 4     | Diploma          | Hearing | Hearing | Community gatherings, school
Jarvis1         | Demographic data unavailable
P6              | Demographic data unavailable

  • Participants were invited to choose their own codenames, with informed consent information explaining how their data, comments and codenames would be used. Some participants selected numbers greater than the number of participants (e.g. P32). All participants involved in the study have been included in this analysis.

  • ‘Acq.’ shows the age at which participants began acquiring Auslan. ‘Deaf Connections’ shows participants’ self-reported connections to the local Deaf community. ‘Family’ shows whether the participant’s non-parent family members are all hearing, or if they have DHH family members.

  • Demographic data about participants’ race/ethnicity is not available.

We interviewed 12 Deaf people (demographics in Table 1). Recruitment was through Deaf social media groups and project team members’ personal networks, as approved by our university’s LNR HREC. Participants were given information sheets, and signed consent forms which allowed them to nominate how their research data could be used and published. The interviews were conducted in Auslan, in person, by a Deaf woman, using a question set adapted from Glasser et al. [16] and localized for the Australian context. For each interview, both interviewer and interviewee were filmed. The recorded interviews were translated from Auslan into written English by two native Auslan users who hold interpreting certifications (the Deaf woman who conducted the interviews, and a CODA4). Limitations within the translation process were resolved through discussion and cross-checking, and the two translators ensured the focus was on meaning transfer.

The translated data was analysed through iterative, inductive thematic analysis [5] by the CODA and a hearing researcher. The CODA and hearing researcher also cross checked their analysis and had multiple discussions throughout the data analysis to ensure a consistent coding approach. This paper focuses on the themes which were categorised as motivating factors and barriers to adoption.


4 RESULTS

Through our thematic analysis, we identified several factors which would motivate our participants to adopt an AI-based assistive technology such as the one we are designing. We also identified a number of barriers to adoption. We recognise some of these categories may seem to overlap (for example, independence could be seen as one element of an improved life), but these categories have been chosen based on terms used by the participants. Many of the quotes below are framed around our prospective AI AT, but have been included because they demonstrate a motivating factor and can be contrasted with known barriers to AT adoption.

4.1 Linguistic and Cultural Alignment

The most significant motivational factor that emerged was how a sign language-capable AI AT would be linguistically aligned to the Deaf community. All participants (n=12) said they would be interested in an AI AT which recognised Auslan commands and responded in Auslan. All 12 participants also said that, if they had such a device which signed fluently and had the features they personally desired, they would use it every day.

The consensus among participants was that it would be faster and easier to communicate with an Auslan-capable AT, compared with voice-activated AI technologies. This suggests linguistic alignment can provide a level of comfort for Deaf people that hearing ATs do not. One participant also commented that an Auslan-capable AI AT would give him more scope to choose when he would wear his hearing ATs, because our AT would give him more control and more options, thereby supporting his use of both types of AT.

Four participants said they had used voice-activated personal assistants. Three of those four said they had difficulty interacting with them because of low accuracy in understanding their “Deaf voice”, reflecting the findings of [14].

It’s important to note that although most participants suggested the device ought to only communicate in Auslan (n=7), there were suggestions among other participants that linguistic alignment with the Deaf community is broader. For example, people in the community who are Hard of Hearing or are still learning Auslan may require English subtitles. There are some Deaf people who still use Signed English5. For those who have experienced language deprivation, there could also be pictures and visual cues to facilitate understanding. One participant suggested including Braille to support Deafblind6 users.

Four participants also wanted the proposed AI AT to align with Deaf culture. One participant stated that, "You want it to behave like a Deaf person. You want it to include Deaf culture and how we communicate... You want the avatar to incorporate Deaf culture. How we sign, how we structure things. We want it to be the same," (Jarvis1). In addition to Deaf-like signing and language use, participants wanted interactions with the device to align with Deaf culture, such as suggested “wake-up” triggers mimicking interactions between Deaf people (i.e. an attention-getting wave and the use of a sign name). One participant noted that the data used to train the AI system must be collected from Deaf people.

4.2 Equality

Another significant motivating factor which emerged was equality for the Deaf community (n=8). Several participants stated that having an AI AT which Deaf people can communicate with in their first language would mean they are equal to hearing people, who already have access to voice-activated personal assistant technologies. For instance, one participant stated that, "I would say I would feel... more equal to hearing people. Because hearing people can speak and use [personal assistants]. I could have access [to a device] that I can sign to and ‘communicate’ with," (P3), while another reported a desire to "catch up a bit" (P14) with hearing people. Another participant stated that, "technology is often designed for the hearing world. This forced me to adjust to ‘match’ with the technology," (TAtkinson). In contrast, we are proposing an AT designed from a Deaf-first perspective, which may be able to fill access gaps; for example, four participants stated they would be able to receive news updates and emergency announcements straight away without any barriers. Some participants compared this to hearing people’s use of the radio, stating they benefit from immediate access to news and information in convenient situations, where Deaf people cannot.

4.3 Accessibility and Inclusion

Another motivational factor pointed out by participants is the potential for accessibility and inclusion (n=6). The idea of an AT which would support the whole household was appealing. Three participants pointed out that there are opportunities for accessibility and inclusion in mixed family households if the device were to be bilingual (Auslan and English), with three participants also suggesting it could be a learning tool for people not fluent in Auslan. When asked if they might use a personal assistant device for something that hearing people would not, one participant responded: “I think it would be amazing to have an Auslan only Alexa, but what if there are families with Deaf kids and hearing parents? Some CODAs are bilingual, some [deaf people] focus on English and speech. So, I think having both Auslan and English would be good.” (P6).

Two participants also brought attention to the racial, ethnic, ability, gender and sexual diversity within the Deaf community and noted that ATs should recognise and accommodate this diversity. For our AI AT, this would be through a customisable appearance and personality, and ability to recognise and produce diverse signing variants.

4.4 Independence

Five participants indicated independence was a motivating factor. Two participants mentioned they must often ask friends or family for up-to-date information. One participant expressed that, “Deaf people often felt that they ask hearing people for assistance. In the past we had no advanced technology, we had to ask for help... But now we are now becoming more independent because of better technology” (TAtkinson).

4.5 Improving Life

In addition to supporting their independence, as discussed above, five participants identified that our proposed AI AT would improve their lives in the following areas:

  • Convenience (n=2; e.g. “Most important - convenience and worthwhile to use this device that make my life easier", DeafGavin)

  • Empowerment and confidence (n=1; e.g. “Benefit of having a device, giving a commands will help to increase my confidence.... We sometimes have lots of barriers, e.g. low esteem, no confidence or nervous and shy. Being placid, that device is not a real person, so this will give us the opportunity to talk with the device, asking questions. This will help us to improve our confidence.", Captain Awesome)

  • Psychological wellbeing and companionship (n=3; e.g. “If Alexa understand Auslan.... So I can feel relieved, good for my psychological well-being and Deaf culture", TAtkinson)

  • Memory aid (n=2; e.g. “I don’t have a great memory for everything, like cooking.... If I use this device, when I ask something, it will pop up on the screen... It will show me ready how to cook them", Captain Awesome)

  • Health education (n=2; e.g. “I think health should be included.... I have Apple watch, they tell me about exercise or about my heart beating for fitness.", P32)

4.6 Aesthetic Appeal

For our proposed AI AT, participants commented on two aesthetic aspects: the design of the physical device, and the design and actuation of the Auslan-signing avatar. Three participants commented on the appeal of some aspect of the physical device (“We can put it in the house with pride. It need to be a good design looking nice.” P7). Five participants commented specifically on the aesthetics of the avatar (“You don’t want it to look ugly or weird” P3), with four wanting some degree of customisability. Participants also reported on the importance of fluency and fluidity of signing, as noted in Section 4.1 above.

4.7 Technology Appeal

Three participants were motivated by the technological aspects of the personal assistant. This fell into two categories: three participants were excited by the potential for a smart home and control of interconnected devices; and one was excited for a new technology interaction style.

4.8 Barriers

The most common area of concern among participants was privacy and security (n=10). Seven participants stated they were concerned about the security of information that may be stored in the device. Six participants were concerned that having a device with a camera in their home would enable people to hack into it and observe them. One participant wondered where the Auslan data would be stored.

Five participants noted that the high cost of personal assistant devices could be a barrier to adoption, observing that assistive technologies can often be expensive. One participant was concerned that, if the device required an internet connection, data would represent an ongoing cost. This aligns with findings from the literature that cost is a barrier to AT adoption. We have not yet addressed this in our project, as we are still at a very early stage of design and prototype development, but it will be important to consider throughout the life cycle of the project.

Participants were also concerned about the device’s ability to understand Auslan (n=4), reinforcing the importance of linguistic alignment. This concern reflects the complexity of Auslan’s features, including facial expression, body movement and non-lexical signs. Three participants expressed concern about the device replacing Auslan interpreters. Given these concerns, one participant raised that there may be challenges with Deaf community acceptance.


5 DISCUSSION

In this section, we will discuss how the motivating factors we have identified address key barriers identified in the research literature.

5.1 ATs “fixing" Deaf people

We have identified linguistic and cultural alignment as the most important motivating factor for Deaf people’s adoption of ATs because, historically, assistive technology has been rejected by Deaf people when they perceive it as being intended to “fix” them (i.e. make them hearing) [8]. This has resulted in resistance towards the adoption of “hearing” ATs. The Deaf community is proud of its culture and language, and ATs which are not seen as being at odds with that culture and language are likely to see greater levels of acceptance. Involving users of an AT in its design, such as through a participatory design approach, is more likely to result in an AT that users want to engage with, because it ensures the AT aligns with their identity [26].

5.2 Visibility of AT or Disability

While none of our participants addressed the issue of visibility directly, the fact that cultural alignment, and greater visibility of their Deaf culture, are motivating factors suggests visibility of culturally aligned ATs is a benefit, not a barrier. The only concern participants raised is whether those ATs would be accepted by the wider Deaf community.

5.3 Acclimatization and Comfort

An interactive sign language AT could have a shorter acclimatization period for Deaf people than hearing-based ATs, as it uses the native language of the Deaf community, making it comfortable and natural to use [26, 29]. Ensuring the device includes cultural norms from the Deaf community will further ease acclimatization. In the case of our sign language smart home assistant, this means we must consider how well the device recognises Auslan signs, and if there are unnatural limitations on the complexity of commands the device can recognise (which parallels problems with early commercial voice-activated personal assistants [25]); as well as how acceptable the avatar which will provide signed responses is to Deaf people as individuals and as a community. Artificial limitations on language use would introduce an acclimatization period which may impact satisfaction and adoption rates. Like ATs, signing avatars also have a long history of low acceptance among Deaf communities, in part because they have also not been linguistically and culturally aligned. Key considerations for signing avatars include accuracy of signing; appropriate use of facial expressions, mouthings and mouth gestures; appropriate timing for manual and non-manual sign components; and smoothness or naturalness of movements [24]. If these elements are not sufficiently considered in the creation of a sign language avatar, any AT which relies on that avatar will likely face rejection by the Deaf community.

Our proposed AI AT would further support comfort by providing participants with greater control over their use of ATs in general, as noted by the one participant who felt it would allow him to choose when he wanted to wear his hearing aid, and when he wanted a signing-first environment.

5.4 Social Stigma

The only social stigma our participants were concerned about was whether or not our proposed device would be accepted within the Deaf community. As mentioned above, linguistic and cultural fit, accuracy and robustness of sign language recognition, and acceptability of the signing avatar were seen as key features of the device itself. The “bigger picture" of how the sign language smart home assistant would fit into Deaf culture – and in particular, whether or not it would compete with sign language interpreters – was also a concern.

5.5 Losing Independence

While many assistive technologies aim to provide independence by augmenting users’ abilities, the fact that they are perceived as indicating a loss of independence [1] shows developers of ATs must consider how to design and market ATs in ways that meet prospective users’ true needs and desires, such as by taking a participatory design approach. Participants saw our prospective AI AT as a way to gain independence through access to information and control over technologies without the need to rely on hearing people, which motivated their desire to engage with it.

5.6 Aesthetics

Our findings confirm the importance of ATs being aesthetically pleasing.

5.7 Privacy and Security

Our participants’ comments align with findings from the literature that networked and/or AI-based ATs raise concerns about privacy and data security. We do not have a unique solution to this concern, but recommend developers of such ATs prioritize data security in prototyping and production, and communicate transparently with users about where data is stored and processed, and who (if anyone) is able to access user data.

5.8 Cost

Our findings confirm the cost of ATs can be a barrier to adoption. This is a difficult issue to address in a capitalist system, as the high levels of investment and research required to develop high-quality ATs must be recouped from relatively small target user groups. High-tech ATs, such as networked and/or AI-based ATs, are also likely to carry both relatively high upfront costs (for the purchase of the technology) and ongoing costs (due to the need for a network connection).


6 CONCLUSION

Only by working with the Deaf community have we been able to discover the true motivating factors of AT adoption for this community. We propose that participatory design approaches to the development of new assistive technologies can support the creation of technologies which are culturally appropriate for this community, and therefore will generate higher levels of acceptance. In particular, we wish to highlight the reason sign language based assistive technologies are motivating for Deaf people is because they align with the language and culture of the Deaf community. This contrasts with other assistive technologies which focus on hearing and speech, and therefore experience barriers related to misalignment with the users’ identity.

6.1 Limitations and Future Work

The limitations of this paper include sample bias and the umbrella project’s early design phase. People who agreed to be interviewed were presumably at least somewhat interested in this prospective technology. All participants were recruited in Queensland and northern New South Wales, so the perspectives of the entire Australian Deaf community have not been captured. A broader survey or interview program could yield more nuanced results.

These interviews were conducted as part of the design phase of the umbrella project, so responses may reflect optimism about the potential of a new technology rather than experience of actual use. Future work, once a robust prototype has been developed, will involve studies to examine actual use and adoption of the technology.

The umbrella project to which this work belongs continues to use a participatory design approach to designing and developing an Auslan-capable personal assistant, with native signers (Deaf and CODA) involved in both the core research team and as participants in data gathering and co-design activities.


ACKNOWLEDGMENTS

The research for this paper received funding support from the Queensland Government through Trusted Autonomous Systems (TAS), a Defence Cooperative Research Centre funded through the Commonwealth Next Generation Technologies Fund and the Queensland Government; and from the Defence Science and Technology Group.

The authors would like to gratefully acknowledge the contributions of research assistants Julie Lyons and Bobbie Blackson, members of our Deaf Advisory Panel, and the other members of our research team, Maria Zelenskaya and Scott Whittington. Thanks to project mentors, Professor Trevor Johnston, Professor Adam Schembri, Associate Professor Marcus Gallagher, Dr Ash Rahimi, Dr Andrew Back and Dr Gabrielle Hodge. Thanks also to members of the Australian Research Council’s Centre of Excellence for the Dynamics of Language for early discussions on language technologies.

Footnotes

  1. 1 We use identity-first language rather than person-first language because that is the preference of the people and communities with whom we interact.

  2. 2 We use Deaf to refer to the cultural identity of Deaf people, indicated by the use of sign language as first or preferred language and membership of the Deaf community [20]. Contrast with deaf, which refers to the medical sense of physical deafness. It is important to note that the intersection of linguistic minority and disability that Deaf culture embodies is complex.

  3. 3 Auslan is the sign language of the Australian Deaf Community. It is its own full language, and has a very different grammatical structure to English [20].

    Footnote
  4. 4 CODA stands for Child of Deaf Adults. Many CODAs grow up with a sign language as their first language.

    Footnote
  5. 5 Signed English: A sign system which maintains English grammatical structure [20].

    Footnote
  6. 6 Deafblind refers to "dual sensory disability that combines both hearing and vision loss" [9].

    Footnote
Supplemental Material

Video Preview (mp4, 17.3 MB): 3613905.3650871-talk-video.mp4

Talk Video (mp4, 47.2 MB)

References

  1. Arlene J. Astell, Colleen McGrath, and Erica Dove. 2020. 'That's for old so and so's!': Does identity influence older adults' technology adoption decisions? Ageing and Society 40, 7 (2020), 1550–1576. https://doi.org/10.1017/S0144686X19000230
  2. Gayathri Victoria Balasubramanian, Paul Beaney, and Ruth Chambers. 2021. Digital personal assistants are smart ways for assistive technology to aid the health and wellbeing of patients and carers. BMC Geriatrics 21, 1 (15 Nov 2021), 643. https://doi.org/10.1186/s12877-021-02436-y
  3. Larwan Berke, Khaled Albusays, Matthew Seita, and Matt Huenerfauth. 2019. Preferred Appearance of Captions Generated by Automatic Speech Recognition for Deaf and Hard-of-Hearing Viewers. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI EA '19). Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3290607.3312921
  4. Danielle Bragg, Oscar Koller, Mary Bellard, Larwan Berke, Patrick Boudreault, Annelies Braffort, Naomi Caselli, Matt Huenerfauth, Hernisa Kacorri, Tessa Verhoef, Christian Vogler, and Meredith Ringel Morris. 2019. Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility (Pittsburgh, PA, USA) (ASSETS '19). Association for Computing Machinery, New York, NY, USA, 16–31. https://doi.org/10.1145/3308561.3353774
  5. Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101. https://doi.org/10.1191/1478088706qp063oa
  6. Janine Butler, Brian Trager, and Byron Behm. 2019. Exploration of Automatic Speech Recognition for Deaf and Hard of Hearing Students in Higher Education Classes. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility (Pittsburgh, PA, USA) (ASSETS '19). Association for Computing Machinery, New York, NY, USA, 32–42. https://doi.org/10.1145/3308561.3353772
  7. Pamara F. Chang and Rachel V. Tucker. 2022. Assistive Communication Technologies and Stigma: How Perceived Visibility of Cochlear Implants Affects Self-Stigma and Social Interaction Anxiety. Proceedings of the ACM on Human-Computer Interaction 6, CSCW1 (2022), 1–16. https://doi.org/10.1145/3512924
  8. Robert A. Crouch. 1997. Letting the deaf be Deaf: Reconsidering the use of cochlear implants in prelingually deaf children. Hastings Center Report, July–August (1997), 14–21.
  9. Deafblind Australia. 2018. About Deafblindness. https://www.deafblindinformation.org.au/about-deafblindness/
  10. Amandeep Singh Dhanjal and Williamjeet Singh. 2019. Tools and Techniques of Assistive Technology for Hearing Impaired People. In 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon). 205–210. https://doi.org/10.1109/COMITCon.2019.8862454
  11. @elizejackson. 2019. Disability Dongle: A well intended elegant, yet useless solution to a problem we never knew we had. Disability Dongles are most often conceived of and created in design schools and at IDEO. https://twitter.com/elizejackson/status/1110629818234818570
  12. Lisa Elliot, Michael Stinson, James Mallory, Donna Easton, and Matt Huenerfauth. 2016. Deaf and Hard of Hearing Individuals' Perceptions of Communication with Hearing Colleagues in Small Groups. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility (Reno, Nevada, USA) (ASSETS '16). Association for Computing Machinery, New York, NY, USA, 271–272. https://doi.org/10.1145/2982142.2982198
  13. Heather A. Faucett, Kate E. Ringland, Amanda L.L. Cullen, and Gillian R. Hayes. 2017. (In)visibility in disability and assistive technology. ACM Transactions on Accessible Computing 10, 4 (2017). https://doi.org/10.1145/3132040
  14. Abraham Glasser. 2019. Automatic Speech Recognition Services: Deaf and Hard-of-Hearing Usability. In CHI 2019 Student Research Competition. Glasgow, Scotland, 1–6. https://doi.org/10.1145/3290607.3308461
  15. Abraham Glasser, Vaishnavi Mande, and Matt Huenerfauth. 2020. Accessibility for Deaf and Hard of Hearing Users: Sign Language Conversational User Interfaces. In Proceedings of the 2nd Conference on Conversational User Interfaces (2020). https://api.semanticscholar.org/CorpusID:220497573
  16. Abraham Glasser, Vaishnavi Mande, and Matt Huenerfauth. 2021. Understanding deaf and hard-of-hearing users' interest in sign-language interaction with personal-assistant devices. In Proceedings of the 18th International Web for All Conference (W4A '21), April 19–20, 2021, Ljubljana, Slovenia, Vol. 1. Association for Computing Machinery, 1–11. https://doi.org/10.1145/3430263.3452428
  17. Abraham Glasser, Matthew Watkins, Kira Hart, Sooyeon Lee, and Matt Huenerfauth. 2022. Analyzing Deaf and Hard-of-Hearing Users' Behavior, Usage, and Interaction with a Personal Assistant Device that Understands Sign-Language Input. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 306, 12 pages. https://doi.org/10.1145/3491102.3501987
  18. Martin Grayson, Anja Thieme, Rita Marques, Daniela Massiceti, Ed Cutrell, and Cecily Morrison. 2020. A dynamic AI system for extending the capabilities of blind people. Conference on Human Factors in Computing Systems - Proceedings (2020), 1–4. https://doi.org/10.1145/3334480.3383142
  19. Joseph Hill. 2020. Do deaf communities actually want sign language gloves? Nature Electronics 3, 9 (2020), 512–513. https://doi.org/10.1038/s41928-020-0451-7
  20. Trevor A. Johnston. 1989. Auslan: The Sign Language of the Australian Deaf Community. PhD thesis. University of Sydney.
  21. Takashi Kato, Akihisa Shitara, Nobuko Kato, and Yuhki Shiraishi. 2021. Sign Language Conversational User Interfaces Using Luminous Notification and Eye Gaze for the Deaf and Hard of Hearing. In ACHI 2021: The Fourteenth International Conference on Advances in Computer-Human Interactions.
  22. Muiz Ahmed Khan, Pias Paul, Mahmudur Rashid, Mainul Hossain, and Md Atiqur Rahman Ahad. 2020. An AI-Based Visual Aid with Integrated Reading Assistant for the Completely Blind. IEEE Transactions on Human-Machine Systems 50, 6 (2020), 507–517. https://doi.org/10.1109/THMS.2020.3027534
  23. Jessica Korte, Axel Bender, Guy Gallasch, Janet Wiles, and Andrew Back. 2020. A Plan for Developing an Auslan Communication Technologies Pipeline. In Computer Vision – ECCV 2020 Workshops, Adrien Bartoli and Andrea Fusiello (Eds.). Springer International Publishing, Cham, 264–277.
  24. Verena Krausneker and Sandra Schügerl. 2021. Best Practice Protocol on the Use of Sign Language Avatars. Technical Report. University of Vienna, Vienna, Austria. 1–17 pages. https://avatar-bestpractice.univie.ac.at/
  25. Irene Lopatovska, Katrina Rink, Ian Knight, Kieran Raines, Kevin Cosenza, Harriet Williams, Perachya Sorsche, David Hirsch, Qi Li, and Adrianna Martinez. 2019. Talk to me: Exploring user interactions with the Amazon Alexa. Journal of Librarianship and Information Science 51, 4 (2019), 984–997. https://doi.org/10.1177/0961000618759414
  26. Patrizia Marti and Annamaria Recupero. 2019. Is Deafness a disability?: Designing hearing aids beyond functionality. In C&C '19: Proceedings of the 2019 on Creativity and Cognition (2019), 133–143. https://doi.org/10.1145/3325480.3325491
  27. Abby McCormack and Heather Fortnum. 2013. Why do people fitted with hearing aids not wear them? International Journal of Audiology 52 (2013), 360–368. https://doi.org/10.3109/14992027.2013.769066
  28. Ioanna Moraiti, Anestis Fotoglou, Katerina Dona, Alexandra Katsimperi, Konstantinos Tsionakas, Zoi Karampatzaki, and Athanasios Drigas. 2022. Assistive Technology and Internet of Things for People with ADHD Education. Technium Social Sciences Journal 32 (2022), 204–222. https://heinonline.org/HOL/P?h=hein.journals/techssj32&i=204
  29. A. A. O'Kane, A. Aliomar, R. Zheng, B. Schulte, and G. Trombetta. 2019. Social, Cultural Systematic Frustrations Motivating the Formation of a DIY Hearing Loss Hacking Community. In CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM). https://doi.org/10.1145/3290605.3300531
  30. Halley P. Profita, Abigale Stangl, Laura Matuszewska, and Sigrunn Sky. 2018. "Wear It Loud": How and Why Hearing Aid and Cochlear Implant Users Customize Their Devices. ACM Transactions on Accessible Computing 11, 3 (September 2018). https://doi.org/10.1145/3214382
  31. Vishakha Rawool. 2018. Denial By Patients of Hearing Loss and Their Rejection of Hearing Health Care: a Review. Journal of Hearing Science 8, 3 (2018), 9–23. https://doi.org/10.17430/906204
  32. Jason Rodolitz, Evan Gambill, Brittany Willis, Christian Vogler, and Raja Kushalnagar. 2019. Accessibility of Voice-Activated Agents for People who are Deaf or Hard of Hearing. The Journal on Technology and People with Disabilities (2019).
  33. Erik Smedberg, Enrico Ronchi, and Victoria Hutchison. 2022. Alarm Technologies to Wake Sleeping People Who are Deaf or Hard of Hearing. Fire Technology 58, 4 (01 Jul 2022), 2485–2507. https://doi.org/10.1007/s10694-022-01265-8
  34. Stephanie Stoll, Necati Cihan Camgoz, Simon Hadfield, and Richard Bowden. 2020. Text2Sign: Towards Sign Language Production Using Neural Machine Translation and Generative Adversarial Networks. International Journal of Computer Vision 128, 4 (2020), 891–908. https://doi.org/10.1007/s11263-019-01281-2
  35. Gabriella Wojtanowski, Colleen Gilmore, Barbra Seravalli, Kristen Fargas, Christian Vogler, and Raja Kushalnagar. 2020. "Alexa, can you see me?" Making individual personal assistants for the home accessible to Deaf consumers. The Journal on Technology and People with Disabilities (2020).
  36. World Federation of the Deaf and World Association of Sign Language Interpreters. 2018. WFD and WASLI statement on use of signing avatars. Technical Report. Helsinki, Finland / Melbourne, Australia. https://wfdeaf.org/news/resources/wfd-wasli-statement-use-signing-avatars/
  37. World Health Organization. 2011. ICF Browser. https://apps.who.int/classifications/icfbrowser/
  38. Maria Zelenskaya, Scott Whittington, Julie Lyons, Adele Vogel, and Jessica Korte. 2023. Visual-gestural Interface for Auslan Virtual Assistant. In SIGGRAPH Asia 2023 Emerging Technologies (Sydney, NSW, Australia) (SA '23). Association for Computing Machinery, New York, NY, USA, Article 21, 2 pages. https://doi.org/10.1145/3610541.3614566

Published in

CHI EA '24: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems
May 2024, 4761 pages
ISBN: 9798400703317
DOI: 10.1145/3613905

Copyright © 2024 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History: Published 11 May 2024

Qualifiers: Work in Progress; Research; Refereed limited

Acceptance Rates: Overall acceptance rate 6,164 of 23,696 submissions, 26%