
2020 | Book

Computers Helping People with Special Needs

17th International Conference, ICCHP 2020, Lecco, Italy, September 9–11, 2020, Proceedings, Part I

Edited by: Klaus Miesenberger, Roberto Manduchi, Dr. Mario Covarrubias Rodriguez, Petr Peňáz

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 12376 and 12377 constitutes the refereed proceedings of the 17th International Conference on Computers Helping People with Special Needs, ICCHP 2020, held in Lecco, Italy, in September 2020. The conference was held virtually due to the COVID-19 pandemic.

The 104 papers presented were carefully reviewed and selected from 206 submissions. Also included are 13 introductions to the special thematic sessions. The papers are organized in the following topical sections:

Part I: user centred design and user participation in inclusive R&D; artificial intelligence, accessible and assistive technologies; XR accessibility – learning from the past, addressing real user needs and the technical architecture for inclusive immersive environments; serious and fun games; large-scale web accessibility observatories; accessible and inclusive digital publishing; AT and accessibility for blind and low vision users; Art Karshmer lectures in access to mathematics, science and engineering; tactile graphics and models for blind people and recognition of shapes by touch; and environmental sensing technologies for visual impairment

Part II: accessibility of non-verbal communication: making spatial information accessible to people with disabilities; cognitive disabilities and accessibility – pushing the boundaries of inclusion using digital technologies and accessible eLearning environments; ICT to support inclusive education – universal learning design (ULD); hearing systems and accessories for people with hearing loss; mobile health and mobile rehabilitation for people with disabilities: current state, challenges and opportunities; innovation and implementation in the area of independent mobility through digital technologies; how to improve interaction with a text input system; human movement analysis for the design and evaluation of interactive systems and assistive devices; and service and care provision in assistive environments

10 chapters are available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.

Table of contents

Frontmatter
Correction to: Suitable Camera and Rotation Navigation for People with Visual Impairment on Looking for Something Using Object Detection Technique

The original version of this chapter was revised. The introduction was updated because important information, such as a reference, was missing.

Masakazu Iwamura, Yoshihiko Inoue, Kazunori Minatani, Koichi Kise

User Centred Design and User Participation in Inclusive R&D

Frontmatter
User Centered Design and User Participation in Inclusive R&D
Introduction to the Special Thematic Session

This session reflects R&D on User Centered Design and Development and User Participation (UCD for short in the following) for and with people with disabilities. Better guidelines, methods, techniques and tools are needed to improve the quality of R&D and practice, both within the domains of Assistive Technology (AT), eAccessibility and eInclusion themselves and in mainstream R&D, through UCD and the participation of people with disabilities. We analyze the state of the art, identify gaps and problems, and discuss examples and new approaches presented in five papers. The introduction places the topic in the broader context of UCD in Information and Communication Technology (ICT) and Human-Computer Interaction (HCI), which plays a major role for AT, eAccessibility and digital inclusion. We underline the need for ongoing and more intensive R&D on UCD to improve quality and usability. By promoting this session in the ICCHP conference series, we aim to establish a comprehensive point of access to scientific work in this domain.

Klaus Miesenberger, Cordula Edler, Susanne Dirks, Christian Bühler, Peter Heumader

Open Access

My Train Talks to Me: Participatory Design of a Mobile App for Travellers with Visual Impairments

Travellers with visual impairments may face substantial information gaps on their journeys by public transport. For instance, information displayed in trains, as well as on departure boards in train stations and on platforms, is often not available in acoustic or tactile form. Digital technologies, such as smartphones or smartwatches, can provide an alternative means of access. However, these alternatives do not guarantee that the user experience is comparable in value, quality and efficiency. The present case study details a participatory design process in which travellers with visual impairments co-designed a mobile app. The goal was to tackle information gaps on journeys by public transport and to learn how participatory design can facilitate the provision of comparable experiences for users with disabilities. Travellers with visual impairments were involved in a collaborative process in all project phases, including problem identification, technical feasibility, proof of concept, design and development. Participatory design contributed to a thorough understanding of the user perspective and allowed the app to be optimised for the needs of travellers with visual impairments. Furthermore, co-design proved to be an effective method for fostering awareness and knowledge about digital accessibility at all organisational levels.

Beat Vollenwyder, Esther Buchmüller, Christian Trachsel, Klaus Opwis, Florian Brühlmann
What Do Older People Actually Want from Their Robots?

There has been a great deal of research on robots to support older people. However, some areas of robots for older people may have been under-researched or missed entirely. This study aimed to reassess whether existing research addresses the needs of older people: 22 older people were asked, “without being concerned about any limitations, what would you want from a robot?” The study also showed them pictures of different robot types and asked which type, if any, they would prefer. It was found that older people have many daily tasks and needs that are not addressed by current research. It was also found that older people were generally intimidated by humanoid robots and are concerned about their privacy with voice agents, but have no specific preference otherwise.

Sanjit Samaddar, Helen Petrie
Accessibility of Block-Based Introductory Programming Languages and a Tangible Programming Tool Prototype

Visual programming languages (VPLs) were designed to assist children in introductory programming courses. Unfortunately, despite positive teaching results, VPLs are believed to be inaccessible to children with visual impairments and low vision because they depend on visual graphics as both input and output methods. To identify the barriers that users with visual impairments and low vision face while using block-based programming environments, and to acquire feedback on the design of a new tangible programming tool prototype, a usability study was conducted involving nine adult participants with visual impairments and low vision. This paper presents the findings of this usability study and provides a list of features needed to make block-based environments accessible. Furthermore, based on observations, interviews, and post-surveys, this study demonstrates that our prototype can be used by users with visual impairments and low vision, and provides a guideline for the design of tangible interfaces to teach programming concepts.

Emmanuel Utreras, Enrico Pontelli
Consigliere Evaluation: Evaluating Complex Interactive Systems with Users with Disabilities

Conducting accessibility evaluations with users with disabilities is an important part of developing accessible interactive systems. Conducting such evaluations of systems which require complex domain knowledge is often impossible, as users with disabilities who have that domain knowledge are very rare or may not exist at all. This paper presents a user evaluation method to address this problem: consigliere evaluation. A consigliere evaluation has the disabled user as the main participant, but they are assisted by an advisor, or consigliere, who understands the complex domain; the consigliere is, in turn, monitored by an accessibility expert, who acts as an enforcer. As in all user evaluations, the disabled participant undertakes a series of tasks. But in a consigliere evaluation, if a task requires some particular domain knowledge or skill, the role of the consigliere is to provide appropriate information. However, it is easy for the consigliere, who usually does not have knowledge of the accessibility domain, to provide information not specifically about the domain, but about how to do the task in general. So the role of the enforcer, who is an accessibility expert, is to ensure this does not happen, and also to provide assistance and explanation if accessibility issues arise that the disabled participant cannot solve. The paper illustrates the consigliere method with a case study: the evaluation of Skillsforge, an online system used by a number of universities to manage the progress of postgraduate students. This system requires considerable domain knowledge of the terminology and progression requirements for these students. It is used by university administrative staff, academic staff who supervise or monitor postgraduate students, and the students themselves. The case study illustrates how the consigliere evaluation method works and some of the things which need to be considered to conduct the evaluation appropriately.

Helen Petrie, Sanjit Samaddar, Christopher Power, Burak Merdenyan
IPAR-UCD – Inclusive Participation of Users with Cognitive Disabilities in Software Development

This article presents IPAR-UCD, a new inclusive research concept for research and development (R&D). An adaptation for collaborative R&D with peer-researchers for cognitive accessibility has so far been missing. This inclusive research concept investigates and combines two methodological approaches: Inclusive Participatory Action Research (IPAR) [1] and User-Centered Design (UCD). With this inclusive research and development method, a concept is presented that has already been successfully applied and further developed in the »Easy Reading« project (ER), together with the target group [2, 14].

M. A. Cordula Edler

Artificial Intelligence, Accessible and Assistive Technologies

Frontmatter
Artificial Intelligence, Accessible and Assistive Technologies
Introduction to the Special Thematic Session

Artificial intelligence (AI) has been around for at least 70 years, as have digital technologies, and yet the hype around AI in recent years has begun to make some wary of its impact on their daily lives. However, in this special thematic session authors illustrate how increased speed of data crunching and the use of complex algorithms have boosted the potential for systems to be helpful in unexpected ways, in particular when thinking about assistive technologies. The black-box nature of AI, with its apparent lack of transparency, may be alarming, but it has enormous potential to make digital content, services and systems more accessible and helpful for people with disabilities. The following papers propose new and innovative ways of overcoming these concerns, with positive approaches to reducing barriers for those with disabilities.

E. A. Draffan, Peter Heumader
AI and Global AAC Symbol Communication

Artificial Intelligence (AI) applications are usually built on large trained data models that can recognize and label images, provide speech output from text, process natural language for translation, and be of assistance to many individuals via the internet. For those who are non-verbal or have complex speech and language difficulties, AI has the potential to offer enhanced access to the wider world of communication that can be personalized to suit user needs. Examples include pictographic symbols to augment or provide an alternative to spoken language. However, when using AI models, data related to the use of freely available symbol sets is scarce. Moreover, the manipulation of the data available is difficult with limited annotation, making semantic and syntactic predictions and classification a challenge in multilingual situations. Harmonization between symbol sets has been hard to achieve; this paper aims to illustrate how AI can be used to improve the situation. The goal is to provide an improved automated mapping system between various symbol sets, with the potential to enhance access to more culturally sensitive multilingual symbols. Ultimately, it is hoped that the results can be used for better context sensitive symbol to text or text to symbol translations for speech generating devices and web content.

Chaohai Ding, E. A. Draffan, Mike Wald
Can a Web Accessibility Checker Be Enhanced by the Use of AI?

There has been a proliferation of automatic web accessibility checkers over the years designed to make it easier to assess the barriers faced by those with disabilities when using online interfaces and content. The checkers are often based on tests that can be made on the underlying website code to see whether it complies with the W3C Web Content Accessibility Guidelines (WCAG). However, as the type of code needed for the development of sophisticated interactive web services and online applications becomes more complex, the guidelines have had to be updated with the adoption of new success criteria or additional revisions to older criteria. In some instances, this has led to questions being raised about the reliability of automatic accessibility checks and whether the use of Artificial Intelligence (AI) could be helpful. This paper explores the need to find new ways of addressing the requirements embodied in the WCAG success criteria, so that those reviewing websites can feel reassured that their advice (regarding some of the ways to reduce barriers to access) is helpful and overcomes issues around false positives or negatives. The methods used include image recognition and natural language processing working alongside a visual appraisal system, built into a web accessibility checker and reviewing process that takes a functional approach.

E. A. Draffan, Chaohai Ding, Mike Wald, Harry Everett, Jason Barrett, Abhirami Sasikant, Calin Geangu, Russell Newman
Towards the Assessment of Easy-to-Read Guidelines Using Artificial Intelligence Techniques

The Easy-to-Read (E2R) Methodology was created to improve the daily life of people with cognitive disabilities, who have difficulties in reading comprehension. The main goal of the E2R Methodology is to present clear and easily understood documents. This methodology includes a set of guidelines and recommendations that affect the writing of texts, the supporting images, the design and layout of documents, and the final editing format. Such guidelines are used in the manual processes of (a) adapting existing documents and (b) producing new materials. The process of adapting existing documents is cyclic and implies three activities: analysis, transformation, and validation. All these activities consume considerable human resources, due to the need to involve people with cognitive disabilities as well as E2R experts. In order to alleviate such processes, we are currently investigating the development of methods, based on Artificial Intelligence (AI) techniques, to perform the analysis and transformation of documents in a (semi-)automatic fashion. In this paper we present our AI-based method for assessing a particular document with respect to the E2R guidelines, as well as an initial implementation of this method; our research on the transformation of documents is out of the scope of this paper. We carried out a comparative evaluation of the results obtained by our initial implementation against the results of the document analysis performed by people with cognitive disabilities.

Mari Carmen Suárez-Figueroa, Edna Ruckhaus, Jorge López-Guerrero, Isabel Cano, Álvaro Cervera
Research on Book Recommendation System for People with Visual Impairment Based on Fusion of Preference and User Attention

With the development of the Internet, an information explosion has occurred, and it is challenging for users to find the information they need in e-books. Although book recommendation systems can help users find their focus, they are not applicable to visually impaired users, who cannot use ordinary visual reading methods for knowledge acquisition. Therefore, a book recommendation system that suits their behavioural habits is required. In order to provide accurate and effective book sets for users, we propose an algorithm based on fusing their preferences. To intelligently rank the candidate book sets and help users find the right book quickly, we propose a context-aware algorithm based on users’ attention. Meanwhile, we introduce an improved calculation method for users’ attention to solve the problem of inaccurate prediction of users’ current attention when their action history is cluttered. We use self-attention to preserve users’ reading tendencies during the reading process, analyze users’ personal features and book content features, and improve the accuracy of the recommendation by merging the feature spaces. Finally, the improved algorithm and comparative experiments were run on a dataset collected from the China Blind Digital Library, and each experimental comparison demonstrates the effectiveness of the improvements.

Zhi Yu, Jiajun Bu, Sijie Li, Wei Wang, Lizhen Tang, Chuanwu Zhao
Karaton: An Example of AI Integration Within a Literacy App

Integrating AI into educational applications can have an enormous benefit for users (players/children) and educational professionals. The concept of customisation based on user preferences and abilities is not new. However, in this paper the abilities of the players of a literacy skill application are collated and categorized, so that in the future the app can automatically offer the next instructional level without external manual support. The app Karaton has been designed in such a way that there is a presumption of competence, and no child should feel like a failure or need to wait to be told that they can try a higher level. It has been found that this improves self-confidence and encourages independent literacy skills.

Hannes Hauwaert, Pol Ghesquière, Jacqueline Tordoir, Jenny Thomson
Can We Unify Perception and Localization in Assisted Navigation? An Indoor Semantic Visual Positioning System for Visually Impaired People

Navigation assistance has made significant progress in recent years with the emergence of different approaches that allow visually impaired people to perceive their surroundings and localize themselves accurately, which greatly improves their mobility. However, most existing systems address each task individually, which increases the response time and is clearly not beneficial for a safety-critical application. In this paper, we aim to cover scene perception and visual localization needed by navigation assistance in a unified way. We present a semantic visual localization system to help visually impaired people be aware of their locations and surroundings in indoor environments. Our method relies on 3D reconstruction and semantic segmentation of RGB-D images captured by a pair of wearable smart glasses. We can inform the user of an upcoming object via audio feedback, so that the user can prepare to avoid obstacles or interact with the object; this means that visually impaired people can be more active in unfamiliar environments.

Haoye Chen, Yingzhi Zhang, Kailun Yang, Manuel Martinez, Karin Müller, Rainer Stiefelhagen
IBeaconMap: Automated Indoor Space Representation for Beacon-Based Wayfinding

Traditionally, there have been few options for navigational aids for the blind and visually impaired (BVI) in large indoor spaces. Some recent indoor navigation systems allow users equipped with smartphones to interact with low cost Bluetooth-based beacons deployed strategically within the indoor space of interest to navigate their surroundings. A major challenge in deploying such beacon-based navigation systems is the need to employ a time and labor-expensive beacon planning process to identify potential beacon placement locations and arrive at a topological structure representing the indoor space. This work presents a technique called IBeaconMap for creating such topological structures to use with beacon-based navigation that only needs the floor plans of the indoor spaces of interest.

Seyed Ali Cheraghi, Vinod Namboodiri, Kaushik Sinha

XR Accessibility – Learning from the Past, Addressing Real User Needs and the Technical Architecture for Inclusive Immersive Environments

Frontmatter
XR Accessibility – Learning from the Past and Addressing Real User Needs for Inclusive Immersive Environments
Introduction to the Special Thematic Session

XR is an acronym used to refer to the spectrum of hardware, software applications, and techniques used for virtual reality or immersive environments, augmented or mixed reality, and other related technologies. The special thematic session on ‘XR Accessibility’ explores current research and development and presents diverse approaches to meeting real user needs in immersive environments. The contributed research papers range from using spatial sound for object location and interaction for blind users, to alternative symbolic representation of information, Augmented Reality (AR) used in rehabilitation for stroke patients, and vocational skills training for students with intellectual disabilities. The session also explores what we can learn from previous research into immersive environments, looks at opportunities for future research, and collectively explores how we can together iterate accessibility standards.

Joshue O Connor, Shadi Abou-Zahra, Mario Covarrubias Rodriguez, Beatrice Aruanno
Usability of Virtual Reality Vocational Skills Training System for Students with Intellectual Disabilities

Virtual reality has been widely applied in education as the technology has developed rapidly. In order to apply the “Virtual Reality Vocational Skills Training System” to vocational high school students with intellectual disabilities, this study simplified the operation of the original system and developed an easy-to-use version to meet the learning needs of students with intellectual disabilities. The purpose of this study is therefore to test the usability of the easy-to-use version through a questionnaire, and to compare the operating efficiency of the easy-to-use version against the original one. Eight students with intellectual disabilities participated in the study. The results indicated that most students found the easy-to-use version had good usability; it reduced the operation time and the number of wrong actions, and enhanced accuracy. Overall, the “Virtual Reality Vocational Skills Training System” can be applied to vocational skills training for students with intellectual disabilities.

Ting-Fang Wu, Yung-ji Sher, Kai-Hsin Tai, Jon-Chao Hong
Virtual and Augmented Reality Platform for Cognitive Tele-Rehabilitation Based System

Virtual and Augmented Reality systems have been increasingly studied, becoming an important complement to traditional therapy as they can provide high-intensity, repetitive and interactive treatments. Several systems have been developed in research projects, and some of these have become products used mainly at hospitals and care centers. After the initial cognitive rehabilitation performed at rehabilitation centers, patients are obliged to keep travelling to the centers, with many consequences such as costs, loss of time, discomfort and demotivation. However, it has been demonstrated that patients recovering at home heal faster, because they are surrounded by their relatives and supported by the community.

Beatrice Aruanno, Giandomenico Caruso, Mauro Rossini, Franco Molteni, Milton Carlos Elias Espinoza, Mario Covarrubias

Open Access

An Immersive Virtual Reality Exergame for People with Parkinson’s Disease

Parkinson’s disease is a neurodegenerative disorder that primarily affects the motor system. Physical exercise is considered important for people with Parkinson’s disease (PD) to slow down disease progression and maintain abilities and quality of life. However, people with PD often experience barriers to exercise that cause low adherence to exercise plans and programs. Virtual Reality (VR) is an innovative and promising technology for motor and cognitive rehabilitation. Immersive VR exergames have potential advantages, allowing individualized skill practice in a motivating interactive environment without distractions from outside events. This paper presents an immersive VR exergame aimed at motor training of the fingers and hand-eye coordination. The results from the usability study indicate that immersive VR exergames have the potential to provide motivating and engaging physical exercise for people with PD. Through this research, we hope to contribute to evidence-based design principles for task-specific immersive VR exergames for patients with Parkinson’s disease.

Weiqin Chen, Martin Bang, Daria Krivonos, Hanna Schimek, Arnau Naval

Open Access

Augmented Reality for People with Low Vision: Symbolic and Alphanumeric Representation of Information

Many individuals with visual impairments have residual vision that often remains underused by assistive technologies. Head-mounted augmented reality (AR) devices can provide assistance, by recoding difficult-to-perceive information into a visual format that is more accessible. Here, we evaluate symbolic and alphanumeric information representations for their efficiency and usability in two prototypical AR applications: namely, recognizing facial expressions of conversational partners and reading the time. We find that while AR provides a general benefit, the complexity of the visual representations has to be matched to the user’s visual acuity.

Florian Lang, Albrecht Schmidt, Tonja Machulla
Enhancing Interaction and Accessibility in Museums and Exhibitions with Augmented Reality and Screen Readers

Throughout the evolution of humanity, technologies have supported new evolutionary horizons. It is an unambiguous fact that technologies have positively influenced the masses, but they have also created distance from local cultures, often leaving them overlooked. Among the new technologies and forms of interaction are augmented reality and screen readers, which allow a device to read content aloud. This paper presents AIMuseum, which aims to facilitate access to and interaction with cultural environments for people with different abilities, combining these technologies with local museums, artworks, and exhibitions. The work was evaluated with 38 users, ranging from 16 to 41 years old, five of whom declared having a disability. They used the application and answered a questionnaire. The results showed a positive experience and improved the users’ interest in the artworks and their additional information.

Leandro Soares Guedes, Luiz André Marques, Gabriellen Vitório
Guidelines for Inclusive Avatars and Agents: How Persons with Visual Impairments Detect and Recognize Others and Their Activities

Realistic virtual worlds are used in video games, in virtual reality, and to run remote meetings. In many cases, these environments include representations of other humans, either as stand-ins for real humans (avatars) or artificial entities (agents). The presence and individual identity of such virtual characters are usually coded by visual features, such as visibility in certain locations and appearance in terms of looks. For people with visual impairments (VI), this creates a barrier to detecting and identifying co-present characters and interacting with them. To improve the inclusiveness of such social virtual environments, we investigate which cues people with VI use to detect and recognize others and their activities in real-world settings. For this, we conducted an online survey with fifteen participants (adults and children). Our findings indicate an increased reliance on multimodal information: vision for silhouette recognition; audio for recognition through pace, white cane, jewelry, breathing, voice and keyboard typing; smell for fragrance, food smells and airflow; and tactile information for length of hair, size, way of guiding or holding the hand and the arm, and the reactions of a guide dog. Environmental and social cues indicate if somebody is present, e.g. a light turned on in a room, or somebody answering a question. Many of these cues can already be implemented in virtual environments with avatars, and we summarize them in a set of guidelines.

Lauren Thevin, Tonja Machulla
Motiv’Handed, a New Gamified Approach for Home-Based Hand Rehabilitation for Post-stroke Hemiparetic Patients

This document summarizes a master’s thesis project aiming to bring a new solution to hemiplegia rehabilitation; hemiplegia is one of the numerous consequences of stroke. Hemiplegic patients experience paralysis on one side of the body and consequently lose autonomy and quality of life. In this study, we decided to focus only on hand rehabilitation. There is a clear tendency for stroke patients to stop training regularly once they return home from the hospital and the first part of their rehabilitation is over. They often experience demotivation, feeling that they will never become fully autonomous again, and tend to put their training aside, especially when they no longer see clear and visible results. This is also due to supervised training becoming sparser. All of this results in patients stagnating or, even worse, regressing. Thus, we decided to offer a motivating solution for hand rehabilitation at home through gamification.

Sarah Duval-Dachary, Jean-Philippe Chevalier-Lancioni, Mauro Rossini, Paolo Perego, Mario Covarrubias
Move-IT: A Virtual Reality Game for Upper Limb Stroke Rehabilitation Patients

Stroke rehabilitation plays an important role in recovering the lifestyle of stroke survivors. Although existing research has proved the effectiveness and engagement of non-immersive Virtual Reality (VR) based rehabilitation systems, limited research is available on the applicability of fully immersive VR-based rehabilitation systems. In this paper, we present the development and evaluation of the “Move-IT” game, designed for domestic upper limb stroke patients. The game uses the Oculus Rift Head Mounted Display (HMD) and the Leap Motion hand tracker. A user study with five upper limb stroke patients was performed to evaluate the application. The results showed that the participants were pleased with the system, enjoyed the game, and found it exciting and easy to play. Moreover, all participants agreed that the game was very motivating for performing rehabilitation exercises.

Maram AlMousa, Hend S. Al-Khalifa, Hana AlSobayel

Serious and Fun Games

Frontmatter
Serious and Fun Games
Introduction to the Special Thematic Session

The Serious and Fun Games Special Thematic Session aims to bring together academic scientists, researchers, Ph.D. students and research scholars to exchange and share their experiences and research results on all aspects of Game-Based Learning and Serious Games helping people with disabilities and people who need special education. The target groups of these Serious Games are blind people or people with low vision, hearing impairment, motor challenges, learning problems, or children with special diets, for example type 1 diabetes or food allergy. The session also provides an interdisciplinary platform for researchers, practitioners, and educators to present and discuss the most recent innovations and trends, and to share the practical challenges encountered and solutions adopted in the fields of Game-Based Learning and Serious Games. High-quality research contributions describing original and unpublished results of conceptual, constructive, empirical, experimental, or theoretical work in all areas of Game-Based Learning and Serious Games were cordially invited for presentation at the STS.

Cecilia Sik-Lanyi
A Study on Gaze Control - Game Accessibility Among Novice Players and Motor Disabled People

Gaze control is a substitute input modality that enables disabled people to play computer games. However, many disabled people may be inexperienced with games and/or novices at using gaze control. This study presents a game accessibility approach using the gaze control modality for novice players and disabled people. A workshop was conducted involving a playtest of three games with gaze control. The game experiences were observed, recorded, and evaluated with mixed methods: the study assessed gaze control game accessibility using the System Usability Scale (SUS), the Game Experience Questionnaire (GEQ), and an open-ended questionnaire. The gaze control modality demonstrated potential game accessibility for people with motor disabilities. The results also indicate that the challenge of the game mechanics and the accuracy of the gaze control system are two significant factors. Further research will study gaze-control games with more disabled people and develop data analysis methods for evaluating the gaze control modality for game accessibility.

Lida Huang, Thomas Westin
Accessibility of Mobile Card Games

The article describes a study aimed at developing an interaction template for mobile card games for visually impaired gamers. First, the accessibility features of existing mobile card games were analyzed. Then the various types of actions in common card games were studied and classified into appropriate categories. Next, a simplified layout based on a single-card view was proposed, and the interaction model was limited to six simple gestures. This approach was used in a sample game. Finally, the new approach was evaluated, with satisfactory results.

Krzysztof Dobosz, Artur Adamczyk
Developing a Serious Game for Children with Diabetes

A Serious Game has been developed for preschool-age children newly diagnosed with type 1 diabetes. The name of the game is “for kids with diabetes”, shortened to “4KidsDiab”. The 4KidsDiab program consists of two parts: an editor and the game itself. The editor part is for parents, who can adjust the game according to their child’s daily allowable carbohydrate intake and upload pictures and data on meals and foods into the game database. The main menu contains four games for children: “True/False quiz”, “Which food has more/fewer carbs”, “Take it to your plate”, and the reward game “Feed the figure”. This paper describes the design, development, and evaluation process of the game; the evaluation was based on the System Usability Scale. The game is innovative because it is also useful for children who have multiple conditions, e.g. diabetes together with gluten or lactose sensitivity.

Cecilia Sik-Lanyi, György Erdős, Andras Sik

Open Access

An Augmented Reality Game for Helping Elderly to Perform Physical Exercises at Home

People are living longer nowadays. Unfortunately, this positive trend is accompanied by various age-related health issues. Falling is one of the most serious and common of them: falls negatively influence the everyday lives of elderly people and significantly decrease their quality of life. Physical exercise is a proven method for preventing falls. However, it is only effective when training is regular and exercise technique is correct. This paper presents a prototype of an augmented reality exergame that helps elderly people perform physical exercises at home. The research focuses on addressing both of the above-mentioned issues: augmentation with Microsoft Kinect and various sensors helps create a safe game environment that supports performing exercises with the right technique, while gamification elements contribute to users’ motivation to train regularly. A user-centered design approach was adopted to guide the iterative design and development process. User testing of the first prototype was performed and demonstrated positive attitudes from participants. Feedback from user testing will be used in the next development iterations.

Anna Nishchyk, Wim Geentjens, Alejandro Medina, Marie Klein, Weiqin Chen

Large-Scale Web Accessibility Observatories

Frontmatter
Large Scale Web Accessibility Observatories
Introduction to the Special Thematic Session

This paper is an introduction to the special thematic session “Web Accessibility Observatories”. The papers presented in this session tackle different dimensions of accessibility evaluation from different perspectives, e.g. user requirements elicitation for large-scale evaluation of websites, using metadata of digital artefacts such as chatbots to assess their accessibility, physical versus digital accessibility, and tools to evaluate easy language on websites. Holistic web accessibility evaluation is a complex task that requires a powerful cloud infrastructure to cope with the huge amount of data produced during the evaluation process according to legal frameworks.

Yehya Mohamad, Carlos A. Velasco
Preliminary Results of a Systematic Review: Quality Assessment of Conversational Agents (Chatbots) for People with Disabilities or Special Needs

People with disabilities or special needs can benefit from AI-based conversational agents, which are used in competence training and well-being management. Assessment of the quality of interactions with these chatbots is key to being able to reduce dissatisfaction with them and to understand their potential long-term benefits. This will in turn help to increase adherence to their use, thereby improving the quality of life of the large population of end-users that they are able to serve. We systematically reviewed the literature on methods of assessing the perceived quality of interactions with chatbots, and identified only 15 of 192 papers on this topic that included people with disabilities or special needs in their assessments. The results also highlighted the lack of a shared theoretical framework for assessing the perceived quality of interactions with chatbots. Systematic procedures based on reliable and valid methodologies continue to be needed in this field. The current lack of reliable tools and systematic methods for assessing chatbots for people with disabilities and special needs is concerning, and may lead to unreliable systems entering the market with disruptive consequences for users. Three major conclusions can be drawn from this systematic analysis: (i) researchers should adopt consolidated and comparable methodologies to rule out risks in use; (ii) the constructs of satisfaction and acceptability are different, and should be measured separately; (iii) dedicated tools and methods for assessing the quality of interaction with chatbots should be developed and used to enable the generation of comparable evidence.

Maria Laura de Filippis, Stefano Federici, Maria Laura Mele, Simone Borsci, Marco Bracalenti, Giancarlo Gaudino, Antonello Cocco, Massimo Amendola, Emilio Simonetti
Comp4Text Checker: An Automatic and Visual Evaluation Tool to Check the Readability of Spanish Web Pages

According to the current international recommendations of the W3C Web Accessibility Initiative, one important requirement for a web page to be accessible to all is that its text should be readable and understandable to the broadest audience possible. Unfortunately, the information included in web pages today is often not easy for everybody to read and understand. This paper introduces the Comp4Text online readability evaluation tool, which calculates the readability level of a web page, sentence by sentence, based on classical linguistic measures, and detects unusual words and abbreviations. Moreover, it provides recommendations for solving the readability problems and presents everything in a highly visual way. Thanks to this tool, web page designers and writers can improve their sites, making them easier to read and understand for all. Currently, Comp4Text supports the Spanish language, but it can easily be extended to other languages if their readability metrics and easy-to-read rules are known.

Ana Iglesias, Ignacio Cobián, Adrián Campillo, Jorge Morato, Sonia Sánchez-Cuadrado
Towards Cross Assessment of Physical and Digital Accessibility

Our digital and physical worlds are becoming increasingly interconnected. Digital services reduce the need to move physically, and hence to face physical accessibility barriers, but it then becomes more critical to make sure these are not replaced by digital accessibility barriers. To assess the interplay of both worlds from the accessibility perspective, we collected available data and used automated tools from three different perspectives: the first starting from physically accessible places and looking at the digital accessibility of their online services, the second going the other way, and the third considering a representative sample of services inside a smart city. Globally, we found a good combined level of accessibility in about one third of the places. Mutual strengthening could also be observed, usually greater on the digital accessibility side, revealing that awareness actions in one field also contribute to improving the other.

Christophe Ponsard, Jean Vanderdonckt, Vincent Snoeck

Open Access

Requirements for Large Scale Web Accessibility Evaluation

Recent European legislation emphasizes the importance of enabling people with disabilities to access online information and services of public sector bodies. In this regard, automatic evaluation and monitoring of Web accessibility can play a key role for the various stakeholders involved in creating and maintaining accessible products over time. In this paper we present the results of elicitation activities that we carried out in a European project to collect experience and feedback from Web commissioners, developers, and content authors of websites and web applications. The purpose was to understand their current practices in addressing accessibility issues, identify the barriers they encounter when exploiting automatic support for ensuring the accessibility of Web resources, and receive indications about what functionalities they would like in order to better manage accessibility evaluation and monitoring.

Fabio Paternò, Francesca Pulina, Carmen Santoro, Henrike Gappa, Yehya Mohamad

Accessible and Inclusive Digital Publishing

Frontmatter
STS on Accessible and Inclusive Digital Publishing
Introduction to the Special Thematic Session

The special thematic session on Accessible and Inclusive Digital Publishing comprises a wide range of publications in this area. It shows efforts undertaken to understand the role of accessibility within a company, practical methods for accessing and structuring digital content, and ways of improving the evaluation of digitally created content.

Reinhard Ruemer, Valentin Salinas López

Open Access

How Web Professionals Perceive Web Accessibility in Practice: Active Roles, Process Phases and Key Disabilities

Providing usable web information and services to as many people as possible confronts web professionals with a challenging task. The present study delivers insights about how Web accessibility is perceived in practice. Using a survey, a total of 163 web professionals in various roles reported their evaluation of Web accessibility implementation in their projects with regard to three aspects: the professional roles primarily responsible for Web accessibility, key phases in the development process, and the types of disabilities primarily considered. Results show that non-technical professional roles are perceived to be less involved in the development process, that Web accessibility considerations are mainly restricted to the design and implementation phases of projects, and that efforts focus predominantly on the needs of people with visual impairments.

Beat Vollenwyder, Klaus Opwis, Florian Brühlmann
Towards More Efficient Screen Reader Web Access with Automatic Summary Generation and Text Tagging

Readers with 20/20 vision can easily skim a text and quickly perceive the information it contains to get an overview. This is more challenging for readers who rely on screen readers. This study investigated factors affecting successful screen reading in order to shed light on what contributes to improving screen reading access. Text extraction, summarization, and representation techniques were explored. This work led to the development of a new summarization technique, referred to as On-Demand Summary Generation and Text Tagging (ODSG&TT). The technique makes use of a summarization algorithm and a text-tagging algorithm developed by Algorithmia, which enables on-the-fly, on-demand summarization of text and keyword generation. The focus of the screen reader is transferred to the keywords using a button control. The intention is to provide summaries with minimal user navigation effort, simplifying the screen reading process.

Usama Sarwar, Evelyn Eika
A Series of Simple Processing Tools for PDF Files for People with Print Disabilities

This paper presents simple processing tools for PDF files for people with print disabilities: “PDFcontentEraser”, “PDFfontChanger”, and “PDFcontentExtracter”. PDFcontentEraser removes certain types of elements from a PDF file, PDFfontChanger changes a selection of fonts in a document, and PDFcontentExtracter retrieves the components of a PDF file.

Shunsuke Nakamura, Kento Kohase, Akio Fujiyoshi
Layout Analysis of PDF Documents by Two-Dimensional Grammars for the Production of Accessible Textbooks

This paper proposes the use of two-dimensional context-free grammars (2DCFGs) for layout analysis of PDF documents. In Japan, audio textbooks have been available for students with print disabilities in compulsory education. In order to create accessible textbooks, including audio textbooks, it is necessary to obtain the structure and reading order of the regular textbooks’ PDF documents. This is not a simple task, because most PDF files contain only the information needed to print them, and the page layouts of most textbooks are complex. Using 2DCFGs, we were able to obtain useful information from regular textbooks in PDF for the production of accessible textbooks.

Kento Kohase, Shunsuke Nakamura, Akio Fujiyoshi
A Multi-site Collaborative Sampling for Web Accessibility Evaluation

Many sampling methods have been used for web accessibility evaluation. However, due to the difficulty of web page feature extraction and the lack of suitable unsupervised clustering algorithms, the results are not very good. An important current issue is how to optimize the manual workload across different websites during multi-site collaborative sampling, under the premise that the overall manual workload remains the same. To address these problems, we propose a multi-site collaborative sampling method that obtains the final sampling result for each website. The effectiveness of the two sampling methods proposed in this paper is demonstrated by experiments on real website datasets.

Zhi Yu, Jiajun Bu, Chao Shen, Wei Wang, Lianjun Dai, Qin Zhou, Chuanwu Zhao

AT and Accessibility for Blind and Low Vision Users

Frontmatter
An Overview of the New 8-Dots Arabic Braille Coding System

Considering rapid technological development, especially in assistive technology, the six-dot Braille system has become insufficient to meet the needs of blind people: enabling them to read and write content and to publish accessible documents. In particular, it is not sufficient for writing and producing scientific content containing many symbols. Despite this need, the Arabic language still lacks an eight-dot coding system. In this context, this paper presents a unified eight-dot Braille system offered to Arab communities for use in developing digital content for blind people. The Arabic language differs from Latin and other languages in its number of letters and diacritics, which makes its coding system different from those used in other languages. In this work, we studied the symbols used in the Arabic language and the current Braille system, and reviewed methods and recommendations regarding the design of eight-dot Braille systems. A methodology and a set of principles were identified and adopted in preparing the system, and coding rules were established.

Oussama El Ghoul, Ikrami Ahmed, Achraf Othman, Dena A. Al-Thani, Amani Al-Tamimi
Image-Based Recognition of Braille Using Neural Networks on Mobile Devices

Braille documents are part of collaborating with blind people. To overcome the difficulty of learning Braille as a sighted person, a technical solution for reading Braille would be beneficial. A mobile and easy-to-use system is thus needed for everyday situations. Since the system is mobile, the environment cannot be controlled, which calls for modern computer vision algorithms. We therefore present a mobile Optical Braille Recognition system using state-of-the-art deep learning, implemented as an app and server application.

Christopher Baumgärtner, Thorsten Schwarz, Rainer Stiefelhagen
Developing a Magnification Prototype Based on Head and Eye-Tracking for Persons with Low Vision

Severe visual impairments make it difficult for users to work on a computer. For this reason, there is great demand for new technical aids on the computer to compensate for these limitations. Current magnification software makes it possible to adjust the screen content, but due to the lack of overview and the time-consuming use of the mouse it is sometimes difficult to find the right content. If another physical disability is involved, working on a computer often becomes even more difficult. In this paper, we present the development of an affordable magnification system based on a low-cost eye-tracking device, which can be adjusted to the visual impairment without the need for a mouse or keyboard by using the line of vision derived from eye or head movements. Two studies with experts and potential users showed the usefulness of the system.

Thorsten Schwarz, Arsalan Akbarioroumieh, Giuseppe Melfi, Rainer Stiefelhagen
Numeric Key Programming: Programmable Robot Kit for both Visually Impaired and Sighted Elementary School Students

In the information society, elementary school students are expected to learn programming, and robot kits such as LEGO are used as suitable programming materials. However, almost all programming tools for beginners employ a graphical user interface, so visually impaired students cannot use them. To address this problem, we have proposed a new programming material that uses only a numeric keypad and a mobile toy robot. In this paper, we present the architecture of our programming environment and report on experimental classes that focused on its ease of use for both visually impaired and sighted students. As a result, visually impaired students were able to acquire the programming skill within at most 15 min of their first touch of the robot. Sighted students, on the other hand, needed only 5 min to use the robot.

Yoshihiko Kimuro, Takafumi Ienaga, Seiji Okimoto

Art Karshmer Lectures in Access to Mathematics, Science and Engineering

Frontmatter

Open Access

AUDiaL: A Natural Language Interface to Make Statistical Charts Accessible to Blind Persons

This paper discusses the design and evaluation of AUDiaL (Accessible Universal Diagrams through Language). AUDiaL is a web-based, accessible natural language interface (NLI) prototype that allows blind persons to access statistical charts, such as bar and line charts, by means of free-form analytical and navigational queries expressed in natural language. Initial evaluation shows that NLIs are an innovative, promising approach to the accessibility of knowledge representation graphics since, as opposed to traditional approaches, they require neither additional software/hardware nor user training, while allowing users to carry out most tasks commonly supported by data visualization techniques in an efficient, natural manner.

Tomas Murillo-Morales, Klaus Miesenberger
EuroMath: A Web-Based Platform for Teaching of Accessible Mathematics

One of the main goals of students’ education is the acquisition of skills that will determine their functioning in the so-called community of knowledge and their success in the labour market. In 2006, the European Parliament (EP) described, defined, and issued recommendations concerning the acquisition of key competences in individual subjects and general knowledge by young people completing their compulsory education. Among the four subject-defined competences are those pertaining to mathematics and basic scientific and technical skills, as well as IT competences. Acquiring mathematical ability means developing and using mathematical thinking to solve problems arising from everyday situations, with an emphasis on process, action, and knowledge. However, for many persons who are blind or vision-impaired, considerable barriers remain to equal participation in disciplines that rely on mathematical content. This paper describes the EuroMath project, which has, over the past three years, developed a web-based solution to enable mathematical communication between teachers and students. Note that we do not stipulate whether the student or the teacher is the individual with the visual disability; rather, we assume that said individual can fulfil either role. To this end, the EuroMath platform has been designed to enable a person who is blind or vision-impaired to create mathematical content or acquire it from others.

Donal Fitzpatrick, Azadeh Nazemi, Grzegorz Terlikowski
Multidisciplinary Experience Feedback on the Use of the HandiMathKey Keyboard in a Middle School

The input of scientific content, including mathematical formulas, is a poorly addressed area in the accessibility field. Few studies have tackled this issue, although the Word and OpenOffice editors offer input interfaces consisting of button bars associated with mathematical symbols and an “input sheet”. Analysis of input activity with these tools by disabled children has revealed that the use of these bars is complex and tiring. HandiMathKey is a virtual keyboard co-designed by specialized teachers and human-computer interaction researchers to address the difficulties of such mathematical input tools. The purpose of this paper is to describe the observation method implemented in a 4th grade class at the Centre Jean Lagarde in Toulouse and to report initial results on the usability of HandiMathKey (HMK).

Frédéric Vella, Nathalie Dubus, Cécile Malet, Christine Gallard, Véronique Ades, William Preel, Nadine Vigouroux
Rainbow Math
A Case Study of Using Colors in Math for Students with Moderate to Severe Dyslexia

The goal of Rainbow Math is to investigate what font-related changes can be made to aid students with dyslexia and other learning disabilities. As an initial step, we developed software that allows students to customize the coloring of text and to modify its spacing and style on a per-character basis. Additionally, students can use color to visually distinguish what lies between parentheses, brackets, etc. Testing with 13 middle school students showed that most students liked larger fonts, extra spacing between operators, bold fonts, and highlighting of parenthesized expressions. Their self-chosen preferences resulted in decreased reading times and decreased errors.

Neil Soiffer, Jennifer L. Larson

Open Access

On Automatic Conversion from E-born PDF into Accessible EPUB3 and Audio-Embedded HTML5

As a promising method for making digital STEM books in PDF accessible, a new assistive technology is presented that converts inaccessible PDF into accessible digital books in several formats. E-born PDF is first converted into text-based EPUB3, which is then converted into audio-embedded HTML5 with JavaScript (ChattyBook). In the conversion, various local languages can be chosen for reading out STEM content.

Masakazu Suzuki, Katsuhito Yamaguchi

Tactile Graphics and Models for Blind People and Recognition of Shapes by Touch

Frontmatter

Open Access

Development of Tactile Globe by Additive Manufacturing

To understand geographical positions, people with visual impairments need globes adapted for tactile learning. We therefore created three-dimensional (3D) tactile models of the earth for the visually impaired, utilizing the exact topography data obtained by planetary explorations. Additively manufactured 3D models of the earth can impart an exact relief to their spherical surfaces. In this study, we improved existing models to satisfy the requirements of tactile learning by adding the equator, the prime meridian, and the two poles to a base model. Eight types of model were proposed. On four models (B1, B2, B3, and B4), the equator and the prime meridian were expressed by a belt whose height was provided in four stages. On the other four models (C1, C2, C3, and C4), they were expressed by a gutter whose width was provided in four stages. The north pole was expressed by a cone and the south pole by a cylinder; the two poles have a common shape in all eight models. Evaluation experiments revealed that the earth models developed in this study were useful for tactile learning by the visually impaired.

Yoshinori Teshima, Yohsuke Hosoya, Kazuma Sakai, Tsukasa Nakano, Akiko Tanaka, Toshiaki Aomatsu, Kenji Yamazawa, Yuji Ikegami, Yasunari Watanabe
Touch Explorer: Exploring Digital Maps for Visually Impaired People

This paper describes an interaction concept for persons with visual impairments to explore digital maps. Mobile map applications like Google Maps have become an important instrument for navigation and exploration. However, existing map applications are highly visually oriented, making them inaccessible to users with visual impairments. This ongoing research project aims to develop an accessible digital map application in which information is presented in a non-visual way. Analysis of existing market solutions shows that information retention is highest when a combination of different output modalities is used. As a result, a prototype app has been created using three major non-visual modalities: Voice output (speech synthesis), everyday sounds (e.g. car traffic), and vibration feedback. User tests were performed, and based on the test results, the Touch Explorer app was developed. Initial usability tests are described in this paper.

Alireza Darvishy, Hans-Peter Hutter, Markus Grossenbacher, Dario Merz
Development of TARS Mobile App with Deep Fingertip Detector for the Visually Impaired

We propose a TARS mobile app that uses a smartphone camera and a deep-learning fingertip detector, allowing easier implementation than a PC or a touch panel. The app was designed to recognize, through the rear camera, the user’s hand touching images, and to provide voice guidance about the image content, triggered by the point the index finger is touching. When gestures were performed with either the index finger or the thumb, the app was able to detect and output the fingertip point without delay, and it was effective as a trigger for reading. Thumb gestures reduced the detection variance in the lateral direction to 68%, presumably because they rarely move the other four fingers compared to index finger gestures. By performing multiple detections in the application and outputting the median, the detection variance can be reduced to 73% in the lateral direction and 70% in the longitudinal direction, which shows the effectiveness of multiple detections. These techniques are effective in reducing the variance of fingertip detection. We also confirmed that, if the tilt of the device is between −3.4 mm and 4 mm, the current app can identify a 12 mm difference with an average accuracy of 85.5% in both the lateral and longitudinal directions. Finally, we developed a basic model of the TARS mobile app that allows easier installation and greater portability by using a smartphone camera rather than a PC or a touch panel.

Yoichi Hosokawa, Tetsushi Miwa, Yoshihiro Hashimoto
TouchPen: Rich Interaction Technique for Audio-Tactile Charts by Means of Digital Pens

Audio-tactile charts have the potential to improve data analysis with tactile charts for blind people. Enhancing tactile charts with audio feedback can replace Braille labels and provide more structured information than purely tactile graphics. Current approaches especially lack support for gestural interaction, which is needed to develop useful interaction concepts for audio-tactile charts, and many rely on non-standard hardware or are less mobile. That is why we investigated digital pens and their capability to enhance data analysis with tactile charts. We compared two digital pens, the TipToi® pen and the Neo SmartPen M1. First, we evaluated the implementation and feasibility of five basic gestures. While the TipToi® is not suitable for rich touch gestures, the Neo SmartPen showed good support in a pilot study for single-tap, double-tap, hold, and line gestures. On that basis, we implemented a first prototype to demonstrate the potential of digital pens to support data analysis tasks with audio-tactile scatterplots. We then evaluated the prototype in a pilot study with one participant. The study gives strong indications of the usefulness of the presented system. The use of the digital pen can improve the readability of a tactile chart. Our system provides audio feedback for given tactile scatterplots in an accessible and automatic way. As a result, blind users were able to produce and use audio-tactile charts on their own by using an Android application and the Neo SmartPen.

Christin Engel, Nadja Konrad, Gerhard Weber

Environmental Sensing Technologies for Visual Impairment

Frontmatter
A Multi-scale Embossed Map Authoring Tool for Indoor Environments

We introduce a multi-scale embossed map authoring tool (M-EMAT) that produces tactile maps of indoor environments on demand from a building’s structural layout and its 3D-scanned interiors. Our tool renders indoor tactile maps at different spatial scales, representing a building’s structure, a zoomed-in view of a specific area, or the interior of a room. M-EMAT is very easy to use and produces accurate results even for complex building layouts.

Viet Trinh, Roberto Manduchi
A Real-Time Indoor Localization Method with Low-Cost Microwave Doppler Radar Sensors and Particle Filter

We propose a novel localization method based on low-cost, continuous-wave unmodulated Doppler microwave radar sensors. We use both velocity measurements and distance estimates derived from the RSS of the radar sensors, and implement a particle filter for real-time localization. Experiments show that, given a reasonable initial estimate, it is possible to track the movements of a person in a room with enough accuracy to consider using this type of device for monitoring a person or for indoor guiding applications.

Sylvain Ferrand, François Alouges, Matthieu Aussal
An Audio-Based 3D Spatial Guidance AR System for Blind Users

Augmented reality (AR) has great potential for blind users because it enables a range of applications that provide audio information about specific locations or directions in the user’s environment. For instance, the CamIO (“Camera Input-Output”) AR app makes physical objects (such as documents, maps, devices and 3D models) accessible to blind and visually impaired persons by providing real-time audio feedback in response to the location on an object that the user is touching (using an inexpensive stylus). An important feature needed by blind users of AR apps such as CamIO is 3D spatial guidance: real-time audio feedback that helps the user find a desired location on an object. We have devised a simple audio interface that provides verbal guidance towards a target of interest in 3D. The experiment we report with blind participants using this guidance interface demonstrates the feasibility of the approach and its benefit in helping users find locations of interest.

James M. Coughlan, Brandon Biggs, Marc-Aurèle Rivière, Huiying Shen
An Indoor Navigation App Using Computer Vision and Sign Recognition

Indoor navigation is a major challenge for people with visual impairments, who often lack access to visual cues such as informational signs, landmarks and structural features that people with normal vision rely on for wayfinding. Building on our recent work on a computer vision-based localization approach that runs in real time on a smartphone, we describe an accessible wayfinding iOS app we have created that provides turn-by-turn directions to a desired destination. The localization approach combines dead reckoning obtained using visual-inertial odometry (VIO) with information about the user’s location in the environment from informational sign detections and map constraints. We explain how we estimate the user’s distance from Exit signs appearing in the image, describe new improvements in the sign detection and range estimation algorithms, and outline our algorithm for determining appropriate turn-by-turn directions.
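Estimating distance from a detected sign of known physical size follows the standard pinhole-camera relation d = f·H/h, where f is the focal length in pixels, H the physical sign height, and h the detected bounding-box height in pixels. The relation is standard; the parameter values below are hypothetical placeholders, not the paper’s calibration.

```python
def estimate_range(bbox_height_px, focal_px=1500.0, sign_height_m=0.20):
    """Distance to a sign of known height via the pinhole relation d = f*H/h.

    bbox_height_px : detected sign height in image pixels
    focal_px       : camera focal length in pixels (hypothetical value)
    sign_height_m  : physical sign height in metres (hypothetical value)
    """
    return focal_px * sign_height_m / bbox_height_px
```

A sign detected at half the pixel height is twice as far away, so range accuracy degrades for distant signs, where a one-pixel error in box height causes a larger relative range error.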

Giovanni Fusco, Seyed Ali Cheraghi, Leo Neat, James M. Coughlan

Open Access

Suitable Camera and Rotation Navigation for People with Visual Impairment on Looking for Something Using Object Detection Technique

For people with visual impairment, smartphone apps that use computer vision techniques to provide visual information play important roles in supporting daily life. However, they can be used only under a specific condition: the user must already know where the object of interest is. In this paper, we first point out this limitation by categorizing the tasks that obtain visual information using computer vision techniques. Then, taking looking for something as a representative task in one category, we discuss suitable camera systems and rotation navigation methods; for the latter, we propose novel voice navigation methods. In a user study with seven people with visual impairment, we found that (1) a camera with a wide field of view, such as an omnidirectional camera, was preferred, and (2) users had different preferences among the navigation methods.

Masakazu Iwamura, Yoshihiko Inoue, Kazunori Minatani, Koichi Kise
Expiry-Date Recognition System Using Combination of Deep Neural Networks for Visually Impaired

Many drink packages have expiry dates written in dot-matrix characters (digits and non-digits, e.g. slashes or dots). We collected images of these packages and trained two existing deep neural networks (DNNs), combining them into a system for detecting and recognizing expiry dates on drink packages: an object-detection DNN and a character-recognition DNN. The object-detection DNN alone can localize the characters written on a drink package, but its recognition accuracy is not sufficient; the character-recognition DNN alone cannot localize characters, but it recognizes them accurately. By combining the two DNNs, the system improves recognition accuracy. The object-detection DNN first detects and recognizes the expiry date by localizing the characters and obtaining their size. The system then scans the expiry-date region and clips the image, and the character-recognition DNN recognizes the characters in the clipped images. Finally, the system uses both DNNs to obtain the most accurate recognition result based on the spacing of the digits. We conducted an experiment to recognize expiry dates written on drink packages. The results indicate that the recognition accuracy of the object-detection DNN alone was 90%, that of the character-recognition DNN alone was also 90%, and that of the combined system was 97%.
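The combination step can be illustrated with a simplified sketch: a fixed-pitch check that exploits the regular spacing of dot-matrix digits to reject spurious detections, and a per-position merge that keeps whichever DNN’s reading is more confident. This is a stand-in for the spacing-based combination the abstract describes, not the paper’s actual algorithm; the tolerance and data shapes are assumptions.

```python
def is_evenly_spaced(x_centers, tol=0.25):
    """Dot-matrix digits are printed at a fixed pitch; reject detection
    sets whose horizontal gaps deviate from the median pitch by more
    than tol * pitch."""
    gaps = [b - a for a, b in zip(x_centers, x_centers[1:])]
    if not gaps:
        return True
    pitch = sorted(gaps)[len(gaps) // 2]  # median gap
    return all(abs(g - pitch) <= tol * pitch for g in gaps)

def merge_results(det, rec):
    """Merge per-character outputs of the object-detection DNN (det) and
    the character-recognition DNN (rec), each a list of
    (character, confidence) pairs aligned by position.  For each position
    the higher-confidence reading wins."""
    return "".join(c1 if p1 >= p2 else c2
                   for (c1, p1), (c2, p2) in zip(det, rec))
```

For example, where the detector misreads a dot-matrix “0” as “O” with low confidence, the recognizer’s high-confidence “0” wins the merge.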

Megumi Ashino, Yoshinori Takeuchi
Indoor Query System for the Visually Impaired

Scene query is an important problem for the visually impaired population. While existing systems are able to recognize objects surrounding a person, one of their significant shortcomings is that they typically rely on the phone camera with a finite field of view. Therefore, if the object is situated behind the user, it will go undetected unless the user spins around and takes a series of pictures. The recent introduction of affordable panoramic cameras solves this problem. In addition, most existing systems report all “significant” objects in a given scene to the user, rather than respond to a specific user-generated query as to where an object is located. The recent introduction of text-to-speech and speech recognition capabilities on mobile phones paves the way for such user-generated queries, and for audio response generation to the user. In this paper, we exploit the above advancements to develop a query system for the visually impaired utilizing a panoramic camera and a smartphone. We propose three designs for such a system: the first is a handheld device, and the second and third are a wearable backpack and a wearable ring. In all three cases, the user interacts with our systems verbally regarding the whereabouts of objects of interest. We exploit deep learning methods to train our system to recognize objects of interest. Accuracy of our system on disjoint test data from the same buildings as the training set is 99%, and on test data from new buildings not present in the training set is 53%.
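Once an object is detected in the panorama, its image column maps directly to a direction the user can be told. A common accessible convention is a clock-face answer (“the door is at 3 o’clock”); the sketch below assumes an equirectangular panorama and this response format as illustration, not the paper’s exact interface.

```python
def panorama_bearing(x_px, image_width_px):
    """In an equirectangular panorama the column maps linearly to azimuth:
    column 0 is -180 degrees, the centre column is 0 degrees (straight
    ahead), positive angles are clockwise."""
    return (x_px / image_width_px) * 360.0 - 180.0

def clock_direction(bearing_deg):
    """Map a horizontal bearing to a clock-face hour (12 = straight ahead,
    3 = right, 9 = left), a common way to announce positions verbally."""
    hour = round(bearing_deg / 30.0) % 12
    return 12 if hour == 0 else hour
```

A spoken response can then be assembled from the detected label and direction, e.g. `f"The {label} is at {clock_direction(bearing)} o'clock"`, and passed to text-to-speech.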

Lizhi Yang, Ilian Herzi, Avideh Zakhor, Anup Hiremath, Sahm Bazargan, Robert Tames-Gadam
SelfLens: A Personal Assistive Technology to Support the Independence of People with Special Needs in Reading Information on Food Items

Grocery shopping and handling food items (e.g. packets, boxes, etc.) can be very difficult tasks for people with special needs. Product labels often contain a great deal of information in small print that many people find difficult to read. Blind people are unable to get that information autonomously, and many others (e.g. elderly and visually impaired people) may struggle to read labels. Several tools and applications are available on the market or have been proposed in the literature to support this type of activity (e.g. barcode or QR code reading), but they are limited and may require specific skills from the user. Moreover, repeatedly using an application to read label contents or to get additional information on a product can require numerous actions on a touch-screen device. This can make such tools inaccessible or unusable for many users, especially while shopping or cooking. In this work, a portable tool is proposed to support people in simply reading label contents and getting additional information, whether at home or at the shop. Our study aims to propose a portable, low-cost assistive technology that can be used by everyone, both at home and while shopping, regardless of personal skills and without requiring a smartphone or other complex device. Such a product could be very useful for people’s independence in a period like the one we are living through, with the lockdowns required by the COVID-19 situation.

Giulio Galesi, Luciano Giunipero, Barbara Leporini, Franco Pagliucoli, Antonio Quatraro, Gianni Verdi
Backmatter
Metadata
Title
Computers Helping People with Special Needs
Edited by
Klaus Miesenberger
Roberto Manduchi
Dr. Mario Covarrubias Rodriguez
Petr Peňáz
Copyright Year
2020
Electronic ISBN
978-3-030-58796-3
Print ISBN
978-3-030-58795-6
DOI
https://doi.org/10.1007/978-3-030-58796-3