
About this book

The two-volume set LNCS 9758 and 9759 constitutes the refereed proceedings of the 15th International Conference on Computers Helping People with Special Needs, ICCHP 2016, held in Linz, Austria, in July 2016.

The 115 revised full papers and 48 short papers presented were carefully reviewed and selected from 239 submissions. The papers included in the second volume are organized in the following topics: environmental sensing technologies for visual impairments; tactile graphics and models for blind people and recognition of shapes by touch; tactile maps and map data for orientation and mobility; mobility support for blind and partially sighted people; the use of mobile devices by individuals with special needs as an assistive tool; mobility support for people with motor and cognitive disabilities; towards e-inclusion for people with intellectual disabilities; AT and inclusion of people with autism or dyslexia; AT and inclusion of deaf and hard of hearing people; accessible computer input; AT and rehabilitation for people with motor and mobility disabilities; HCI, AT and ICT for blind and partially sighted people.

Table of Contents

Frontmatter

Erratum to: Experimenting with Tactile Sense and Kinesthetic Sense Assisting System for Blind Education

Junji Onishi, Tadahiro Sakai, Masatsugu Sakajiri, Akihiro Ogata, Takahiro Miura, Takuya Handa, Nobuyuki Hiruma, Toshihiro Shimizu, Tsukasa Ono

Environmental Sensing Technologies for Visual Impairment

Frontmatter

Ball Course Detection Function for the Blind Bowling Support System Using a Depth Sensor

To realize a blind bowling support system that conveys information to a blind player through a voice synthesizer, a ball course detection function is being developed, following the implementation of a function that counts the remaining pins. The new function uses a depth sensor to detect the position of the thrown ball in the area of the arrow marks on the bowling lane. The sensor is attached to a pipe frame that bridges over the lane. An evaluation by a blind bowling player shows that the function basically works well, although there is still room to improve its stability.

Makoto Kobayashi

Catching the Right Bus - Improvement of Vehicle Communication with Bluetooth Low Energy for Visually Impaired and Blind People

Visually impaired and blind people have major difficulties in locating and communicating with public transport vehicles due to their limited vision. They must rely on other people's help or on technical support. In this paper we show how direct communication with the bus driver via Bluetooth Low Energy (BLE) is possible. A person with a visual impairment can send and receive messages directly to and from the bus driver via an accessible smartphone app. With the help of the proposed system, traveling by public transport becomes easier and the person's independent mobility is improved.

Elmar Krainz, Werner Bischof, Markus Dornhofer, Johannes Feiner

Navi Rando

GPS-IMU Smart Phone Application for Helping Visually Impaired People Practicing Hiking

GPS devices adapted for visually impaired people are mostly used in urban areas. Additionally, the GPS heading is not updated correctly in pedestrian navigation: if the person hesitates at low speed and loses his orientation, he has to walk some tens of meters to receive updated information. Another technical problem is that the GPS signal can be lost under poor reception conditions. To reduce these problems, the authors propose a GPS-IMU system (accelerometers and gyroscopes) coupled with a compass app. In our approach, this system enables visually impaired people to hike without sighted assistance. The application has two parts: one to record the GPS coordinates of the path in situ together with corresponding comments, and the other to navigate. The tests were done at night with three visually impaired users (two partially sighted and one fully blind), all experts in the use of the app, on a 4.65 km route passing through urban areas and rustic paths. The results were analyzed using the ratio between the average speed in the turns of the path and the average speed on straight sections. The results are very encouraging: all three visually impaired people reached the final destination without any sighted help, even under the extreme test conditions (the rustic path was not distinct enough to be detected with a white cane, and the tests took place on a rainy night).

Jesus Zegarra Flores, Laurence Rasseneur, Clément Gass, René Farcy
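A common way to stabilize heading at low walking speed, as motivated in the abstract above, is a complementary filter that blends integrated gyroscope yaw with the compass reading. The sketch below shows that general technique only, not the authors' implementation; the weight ALPHA and the function name are illustrative assumptions.

```python
ALPHA = 0.98  # assumed weight on the integrated gyro heading (a tuning value)

def fuse_heading(prev_heading_deg, gyro_rate_dps, compass_deg, dt):
    """Complementary filter: the gyro gives a smooth short-term heading,
    while the compass corrects the long-term drift."""
    gyro_heading = prev_heading_deg + gyro_rate_dps * dt
    # Take the shortest angular difference so 359 deg and 1 deg blend correctly.
    diff = (compass_deg - gyro_heading + 180.0) % 360.0 - 180.0
    return (gyro_heading + (1.0 - ALPHA) * diff) % 360.0
```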

Scene Text Detection and Tracking for Wearable Text-to-Speech Translation Camera

Camera-based character recognition applications equipped with a voice synthesizer help blind people read text in their environment. Applications currently on the market, and similar research prototypes, require the user's active reading actions, which hamper other activities. We presented a different approach at ICCHP 2014: the user can remain passive while the device actively finds useful text in the scene. A text-tracking feature was introduced to avoid duplicate reading of the same text. This report presents an improved system with two key components, scene text detection and tracking, that can handle text in various languages including Japanese/Chinese and resolve some scene analysis problems such as the merging of text lines. We have employed the MSER (Maximally Stable Extremal Regions) algorithm to obtain better text images, and developed a new text validation filter. Some technical challenges for future device design are presented as well.

Hideaki Goto, Kunqi Liu
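For readers unfamiliar with MSER-based text detection as mentioned above, the following is a minimal OpenCV sketch of the general pattern. The size/aspect-ratio filter is a deliberately crude stand-in for the validation filter the paper develops, and the file name is a placeholder.

```python
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder input image
mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(img)  # stable regions + their bounding boxes

# Crude validation filter (illustrative only): discard regions whose size or
# aspect ratio is implausible for characters; grouping the surviving boxes
# into text lines would follow in a real pipeline.
candidates = [(x, y, w, h) for (x, y, w, h) in boxes
              if 5 < h < img.shape[0] // 2 and 0.1 < w / h < 10.0]
```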

Zebra Crossing Detection from Aerial Imagery Across Countries

We propose a data-driven approach to detect zebra crossings in aerial imagery. The system automatically learns an appearance model from available geospatial data for an examined region. HOG as well as LBPH features, in combination with an SVM, yield state-of-the-art detection results on different datasets. We also apply this classifier across datasets obtained from different countries, to facilitate detection without requiring any additional geospatial data for a specific region. The approach is capable of searching for further, yet uncharted, zebra crossings in the data. Information gained from this work can be used to generate new zebra crossing databases or improve existing ones, which are especially useful in navigational assistance systems for visually impaired people. We show the usefulness of the proposed approach and plan to use this research as part of a larger guidance system.

Daniel Koester, Björn Lunt, Rainer Stiefelhagen
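The HOG-plus-SVM combination named in the abstract above follows a standard recipe; a minimal sketch under assumed parameters (64x64 tiles, a linear SVM, and hypothetical `positive_tiles`/`negative_tiles` lists of aerial-image crops) looks roughly like this:

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

# HOG over 64x64 tiles: (window, block, stride, cell, bins) -- assumed values
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def features(tile_bgr):
    """HOG feature vector of one aerial-image crop."""
    gray = cv2.cvtColor(cv2.resize(tile_bgr, (64, 64)), cv2.COLOR_BGR2GRAY)
    return hog.compute(gray).ravel()

def train_detector(positive_tiles, negative_tiles):
    """positive_tiles / negative_tiles: hypothetical lists of BGR crops."""
    X = np.array([features(t) for t in positive_tiles + negative_tiles])
    y = np.array([1] * len(positive_tiles) + [0] * len(negative_tiles))
    return LinearSVC().fit(X, y)  # slide `features` over test tiles afterwards
```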

Sound of Vision – 3D Scene Reconstruction from Stereo Vision in an Electronic Travel Aid for the Visually Impaired

The paper presents preliminary results on the parametrization of a 3D scene for sonification purposes in an electronic travel aid (ETA) system being built within the European Union's H2020 Sound of Vision project. The ETA is based on the concept of sensory substitution, in which visual information is transformed into either acoustic or haptic stimuli. In this communication we concentrate on vision-to-audio conversion, i.e., employing stereo vision to reconstruct 3D scenes and build a spatial model of the environment for sonification. Two prerequisite approaches to the sonification are proposed. One involves the direct sonification of the so-called "U-disparity" representation of the depth map of the environment, while the other relies on processing the depth map to extract obstacles present in the environment and presenting them to the user as auditory icons reflecting the size and location of the sonified object.

Mateusz Owczarek, Piotr Skulimowski, Pawel Strumillo
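The "U-disparity" representation mentioned above is, in its usual formulation, a column-wise histogram of the disparity map: each image column contributes a histogram over disparity bins, and vertical obstacles appear as strong peaks. A minimal sketch of that computation, assuming a valid dense disparity map:

```python
import numpy as np

def u_disparity(disparity, levels=64):
    """Column-wise disparity histogram: result rows index disparity bins,
    columns index image columns; peaks mark vertical obstacles."""
    h, w = disparity.shape
    quant = np.clip((disparity * levels / disparity.max()).astype(int),
                    0, levels - 1)
    udisp = np.zeros((levels, w), dtype=np.int32)
    for col in range(w):
        udisp[:, col] = np.bincount(quant[:, col], minlength=levels)
    return udisp
```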

Experiments with a Public Transit Assistant for Blind Passengers

Public transportation is key to independence for many blind persons. Unfortunately, in spite of recent accessibility progress, use of public transportation remains challenging without sight. In this contribution, we describe a system that provides enhanced travel-related information access to a blind bus passenger. Users of this system can select a specific bus line and desired destination on a regular Android smartphone or tablet; are notified when the bus arrives; once on the bus, are informed of its progress along the route; and are given ample advance notice when the bus is approaching their destination. This system was tested with four blind participants in realistic conditions.

German Flores, Roberto Manduchi

Tactile Graphics and Models for Blind People and Recognition of Shapes by Touch

Frontmatter

Electromagnetic Microactuator-Array Based Virtual Tactile Display

This paper describes the development and evaluation of a novel tactile display assembled from a 4 by 5 array of electromagnetic, voice-coil type micro-actuators. Each actuator is separately controlled and operates as a vibrotactile actuator (tactor) at the optimal vibration frequency and amplitude for human tactile recognition. As a preprogrammed, meaningful sequence of micro-actuators is actuated, the user recognizes the vibrotactile pattern on his/her fingertip and identifies it as a single alphanumeric character. Human subject studies have been conducted in which the actuators vibrate vertically between their resting position and the surface of the fingertip in a predefined sequence, creating the tactile perception of continuous curves. The efficiency analysis of how these curves are identified as characteristic shapes by the subjects shows an average recognition performance of over 70 %.

Zoltan Szabo, Eniko T. Enikov
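The sequential actuation described above amounts to stepping one tactor at a time along a stroke so the fingertip perceives apparent motion. A minimal sketch under stated assumptions: `set_tactor` is a hypothetical driver callback, and the dwell time is an illustrative value, not the paper's measured optimum.

```python
import time

def play_pattern(stroke, set_tactor, dwell_s=0.12):
    """Actuate tactors one at a time along a stroke of (row, col) cells so
    the fingertip perceives a continuous curve through apparent motion."""
    for r, c in stroke:
        set_tactor(r, c, True)   # start vibrating this tactor
        time.sleep(dwell_s)      # assumed dwell time per tactor
        set_tactor(r, c, False)  # stop before moving on

# Digit "7" on a 4-row x 5-column array: top edge, then a diagonal sweep down.
seven = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 2), (3, 1)]
# play_pattern(seven, set_tactor=my_driver)  # my_driver: hypothetical callback
```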

Empowering Low-Vision Rehabilitation Professionals with “Do-It-Yourself” Methods

Teachers and educators working with visually impaired students need to create numerous adapted learning materials that represent maps, shapes, objects, concepts, etc. This process usually relies on tactile document makers and is time consuming and expensive. Recent "Do-It-Yourself" (DIY) techniques, including 3D printing and low-cost rapid prototyping, may make it possible to easily and quickly create learning materials that are versatile, interactive, and cheap. In this study, we first analyzed the needs of many professionals at an institute for visually impaired children. It appeared that many of the students also present associated behavioral disorders, which has consequences for the design of adapted materials. In a second step, we ran a focus group with design probes made of regular or 3D printed objects and low-cost microcontrollers. At the end of the focus group, we identified four specific scenarios with different teachers and students. We created four low-cost interactive prototypes and observed how they were used during learning sessions. In conclusion, DIY methods appear to be a valuable solution for enabling professionals to quickly design new adapted materials or modify existing ones. Furthermore, DIY methods provide a collaborative framework between teachers and visually impaired students, which has a positive impact on their motivation.

Stéphanie Giraud, Christophe Jouffrais

Expansion Characteristic of Tactile Symbols on Swell Paper

Effects of Heat Setting, Position and Area of Tactile Symbols

Swell paper is one method of making tactile graphics. Its expansion heights are affected by several factors. In this study, we varied three parameters: the heat setting, and the position and size of the tactile images. The expansion heights of the tactile images under the various parameter settings were measured using a 3D measurement system. As a result, we quantified the effects of these parameters on the expansion heights of tactile images.

Takahiro Hashimoto, Tetsuya Watanabe

Tactile Identification of Embossed Raised Lines and Raised Squares with Variable Dot Elevation by Persons Who Are Blind

We present a study on the identification accuracy of embossed tactile lines and squares at eight dot elevations and two dot densities. The correct and misclassified matched stimuli from ten congenitally blind participants are presented in confusion matrices for the raised-dot line and square test stimuli. Moreover, the overall mean response time of the identification task is provided. Participants identified the lower three dot elevations better for both lines and squares at 20 and 10 dpi, with one exception: for 20 dpi squares, the highest dot elevation was third in the order of recognition. Fitting a multilevel model to the data indicated a significant effect of stimulus type (raised-dot lines versus raised-dot squares), with the raised-dot squares associated with significantly higher correct responding.

Georgios Kouroupetroglou, Aineias Martos, Nikolaos Papandreou, Konstantinos Papadopoulos, Vassilios Argyropoulos, Georgios D. Sideridis

Early Stimulation with Tactile Devices of Visually Impaired Children

Vision plays an essential role in development, and in recent years our understanding of the close relationship between vision and other areas of development has increased considerably. Touch-system for Visually Impaired Children (TouchVIC) is a mobile application designed to be used as a support tool in the early stimulation of visually impaired children. It includes nine different kinds of activities for iPad, intended to stimulate cognitive, emotional, sensorial, and motor aspects, as well as an authoring tool that allows the customization and configuration of exercises in order to adapt them to the child's interests, needs, and abilities at all times. It also offers options for creating customized agendas, in which activities and evaluation sessions can be sequenced and planned. Another outstanding feature of TouchVIC is that it is one of the first apps of its kind that is truly inclusive and accessible for professionals or family members who are visually impaired.

María Visitación Hurtado Torres, María Luisa Rodríguez Almendros, María José Rodríguez Fórtiz, Carlos Rodríguez Domínguez, María Bermúdez-Edo

Experimenting with Tactile Sense and Kinesthetic Sense Assisting System for Blind Education

In most cases, multimedia-based communication is inaccessible to the visually impaired. Thus, persons without eyesight are eager for methods that give them access to technological progress. We consider the most important key to inclusive education to be providing, in real time, the materials that a teacher shows during a lesson. In this study, we present a tactile and kinesthetic sense assisting system that provides figure and graphical information without any assistant. This system enables more effective teaching in an inclusive education setting.

Junji Onishi, Tadahiro Sakai, Masatsugu Sakajiri, Akihiro Ogata, Takahiro Miura, Takuya Handa, Nobuyuki Hiruma, Toshihiro Shimizu, Tsukasa Ono

Locating Widgets in Different Tactile Information Visualizations

Large tactile displays demand novel presentation and interaction strategies. In this paper, different tactile view types and orientation tools are evaluated with 13 blind users. The study has shown that the different view types are usable for different tasks. Orientation is best maintained in view types with Braille output, but these are often not sufficient for graphical tasks. The use of planar orientation tools, such as the structure region or minimap, needs to be trained to efficiently support two-dimensional tactual exploration.

Denise Prescher, Gerhard Weber

A Concept for Re-useable Interactive Tactile Reliefs

We introduce a concept for a relief printer, a novel production method for tactile reliefs that can reproduce bas-reliefs with several centimeters of height difference. In contrast to available methods, this printer will have a much shorter preparation time and neither consumes material nor produces waste, since it is based on a re-usable medium suitable for temporary printouts. Second, we sketch a concept for the autonomous, interactive exploration of tactile reliefs, in the form of a gesture-controlled audio guide based on recent depth cameras. The combination of both approaches in particular promises rapid tactile access to 2.5D spatial information in home or education settings, to online resources, or as a kiosk installation in museums.

Andreas Reichinger, Anton Fuhrmann, Stefan Maierhofer, Werner Purgathofer

Three-Dimensional Models of Earth for Tactile Learning

Three-dimensional (3D) tactile models of Earth were constructed for the visually impaired. We utilized exact topography data obtained from planetary exploration; therefore, the 3D models of Earth produced by additive manufacturing possess the exact shape of the relief on their spherical surfaces. Several improvements were made to the models to suit tactile learning. Experimental results showed that the Earth models developed in this study by additive manufacturing were useful for tactile learning of the globe by the visually impaired.

Yoshinori Teshima, Yasunari Watanabe, Yohsuke Hosoya, Kazuma Sakai, Tsukasa Nakano, Akiko Tanaka, Toshiaki Aomatsu, Tatsuyoshi Tanji, Kenji Yamazawa, Yuji Ikegami, Mamoru Fujiyoshi, Susumu Oouchi, Takeshi Kaneko

Tactile Maps and Map Data for Orientation and Mobility

Frontmatter

Augmented Reality Tactile Map with Hand Gesture Recognition

Paper tactile maps are regarded as a very useful tool for pre-journey learning by visually impaired people. To address the perceptual difficulties and the limited amount of content of paper tactile maps, we propose an augmented reality (AR) tactile map system. With our AR tactile map, the physical tactile map can be augmented with audio and visual feedback that enlarges/enhances the focused area and reads out the POR/POI according to the user's input. As the interface for the AR tactile map, we adopt an intuitive user interface with hand gesture recognition using an RGB-D camera. We implemented a prototype according to requirements determined through discussions that included visually impaired people.

Ryosuke Ichikari, Tenshi Yanagimachi, Takeshi Kurata

Blind Friendly Maps

Tactile Maps for the Blind as a Part of the Public Map Portal (Mapy.cz)

Blind people can now use the maps located at Mapy.cz, thanks to the long-standing joint efforts of the ELSA Center at the Czech Technical University in Prague, the Teiresias Center at Masaryk University, and the company Seznam.cz. Conventional map underlays are automatically adjusted so that they can be read by touch after being printed on microcapsule paper, which opens a whole new perspective on the use of tactile maps. Users may select any area of their choice within the Czech Republic (only within its boundaries, for the time being), and the production of tactile maps, including the preparation of the map underlays, takes no more than a few minutes.

Petr Červenka, Karel Břinda, Michaela Hanousková, Petr Hofman, Radek Seifert

BlindWeb Maps – An Interactive Web Service for the Selection and Generation of Personalized Audio-Tactile Maps

Tactile maps may contribute to the orientation of blind people or alternatively be used for navigation. In the past, the generation of these maps was a manual task, which considerably limited their availability. Nowadays, similar to visual maps, tactile maps can also be generated semi-automatically by tools and web services. Existing approaches enable users to generate maps by entering a specific address or point of interest, which can in principle be done by a blind user. However, these approaches then show an image of the map on the user's display, which cannot be read by screen readers. Consequently, the blind user does not know what is on the map before it is printed. Ideally, the map selection process should give the user more information and freedom to select the desired excerpt. This paper introduces a novel web service for blind people to interactively select and automatically generate tactile maps. It adapts the interaction concept for map selection to the requirements of blind users whilst supporting multiple printing technologies. An integrated audio review of the map's contents provides early feedback on whether the currently selected map extract corresponds to the desired information. Changes can be initiated before the map is printed, which, especially for 3D printing, saves much time. The user is able to select the map features to be included in the tactile map. Furthermore, the map rendering can be adapted to different zoom levels and supports multiple printing technologies. Finally, an evaluation with blind users was used to refine our approach.

Timo Götzelmann, Laura Eichler

CapMaps

Capacitive Sensing 3D Printed Audio-Tactile Maps

Tactile maps can be useful tools for blind people in navigation and orientation tasks. Apart from static maps, there are techniques to augment tactile maps with audio content, which can be used to interact with the map content, offer extra information, and reduce the tactile complexity of a map. Studies show that audio-tactile maps can be more efficient and satisfying for the user than purely tactile maps without audio feedback. A major challenge for audio-tactile maps is linking tactile elements with audio content and interactivity. This paper introduces a novel approach to link 3D printed tactile maps with mobile devices, such as smartphones and tablets, in a flexible way to enable interactivity and audio support. Because conductive filament is integrated into the printed maps, the approach fits seamlessly into the 3D printing process and allows the tactile map to be recognized automatically by a single press on its corner. Additionally, the placement of the tactile map on the mobile device is flexible and detected automatically, which eases the use of these maps. The practicability of this approach is shown by a dedicated feasibility study.

Timo Götzelmann

Empirical Study on Quality and Effectiveness of Tactile Maps Using HaptOSM System

This paper covers an empirical study on the quality and effectiveness of tactile maps using the HaptOSM system. HaptOSM is a combination of specialised hardware and software using OpenStreetMap data to create individual tactile maps for blind and visually impaired people. The almost entirely automated manufacturing process makes a single copy per map possible. The study tests the overall quality of HaptOSM tactile maps and compares the suitability of writing film and Braille paper against each other.

Daniel Hänßgen, Nils Waldt, Gerhard Weber

Specification of Symbols Used in Audio-Tactile Maps for Individuals with Blindness

The implementation of multisensory environments in the field of map construction for individuals with visual impairments can be a challenging area for both users and designers of orientation and mobility aids. Audio-tactile maps can convey a large amount of spatial information represented by audio symbols, tactile symbols, combined audio-tactile symbols, and Braille labels. With regard to audio-tactile maps, an important question needs to be carefully examined: which information should be presented in haptic mode, and which in audio or audio-haptic mode? In practice, this means that a reasoned process of defining the appropriate symbols for audio-tactile maps should be implemented. The fundamental aim of the project "ATMAPS - Specification of symbols used on audio-tactile maps for individuals with blindness", presented in this paper, is the specification of the symbols to be used in audio-tactile maps for individuals with blindness.

Konstantinos Papadopoulos, Konstantinos Charitakis, Eleni Koustriava, Lefkothea Kartasidou, Efstratios Stylianidis, Georgios Kouroupetroglou, Suad Sakalli Gumus, Karin Müller, Engin Yilmaz

User Requirements Regarding Information Included in Audio-Tactile Maps for Individuals with Blindness

The aim of this study is to investigate the requirements of young adults with blindness regarding the information to be included/mapped in two different types of audio-tactile mobility maps: (a) audio-tactile maps of indoor spaces, and (b) audio-tactile maps of campuses. Forty young adults with blindness (aged 18 to 30 years) took part in the research. Participants came from four countries: 14 from Greece, 2 from Cyprus, 18 from Turkey, and 6 from Germany. The researchers developed two lists of information to be included in the two types of audio-tactile maps (indoor and campus), respectively. Participants were asked to evaluate the information regarding: (a) its significance for safety, location of services, wayfinding, and orientation during movement, and (b) the frequency with which they encounter the information (within their surroundings and the environments they move in). The first list, related to maps of indoor places, consisted of 136 items of information, and the second list, related to campus maps, consisted of 213 items. The result of the study is the definition of the most important information that should be included in each of the two types of audio-tactile maps. The findings will thus be particularly important for designers of orientation and mobility (O&M) aids for individuals with blindness. Moreover, they can be useful for O&M specialists, rehabilitation specialists, and teachers who design and construct O&M aids for their students with blindness.

Konstantinos Papadopoulos, Konstantinos Charitakis, Lefkothea Kartasidou, Georgios Kouroupetroglou, Suad Sakalli Gumus, Efstratios Stylianidis, Rainer Stiefelhagen, Karin Müller, Engin Yilmaz, Gerhard Jaworek, Christos Polimeras, Utku Sayin, Nikolaos Oikonomidis, Nikolaos Lithoxopoulos

Mobility Support for Blind and Partially Sighted People

Frontmatter

Obstacle Detection and Avoidance for the Visually Impaired in Indoors Environments Using Google’s Project Tango Device

A depth-data-based obstacle detection and avoidance application that assists VI users in navigating independently in previously unmapped indoor environments is presented. The application is being developed for the recently introduced Google Project Tango Tablet Development Kit, which is equipped with a powerful processor (NVIDIA Tegra K1 with 192 CUDA cores) as well as various sensors that allow it to track its motion and orientation in 3D space in real time. Depth data for the area in front of the user, obtained using the tablet's built-in infrared-based depth sensor, is analyzed to detect obstacles, and audio-based navigation instructions are provided accordingly. A visual display option is also offered for users with low vision. The aim is to develop a real-time, affordable, aesthetically acceptable, mobile, stand-alone assistive application on a cutting-edge device, adopting a user-centered approach, which allows VI users to micro-navigate autonomously in possibly unfamiliar indoor surroundings.

Rabia Jafri, Marwa Mahmoud Khan
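As a rough sketch of how depth data can drive the kind of audio obstacle warnings described above — not the authors' algorithm — one can split each depth frame into left/centre/right thirds and report the nearest valid reading in each; the 1.5 m threshold is an assumed value:

```python
import numpy as np

SAFE_DISTANCE_M = 1.5  # assumed alert threshold

def nearest_obstacles(depth_m):
    """Split the frame into left/centre/right thirds and report the nearest
    valid depth in each, so audio feedback can say where the obstacle is.
    depth_m: 2D array of metric depths; 0 marks 'no reading'."""
    h, w = depth_m.shape
    valid = np.where(depth_m > 0, depth_m, np.inf)
    thirds = [valid[:, :w // 3], valid[:, w // 3:2 * w // 3], valid[:, 2 * w // 3:]]
    labels = ["left", "centre", "right"]
    dists = [float(t.min()) for t in thirds]
    return [(lab, d) for lab, d in zip(labels, dists) if d < SAFE_DISTANCE_M]
```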

System Supporting Independent Walking of the Visually Impaired

This paper proposes an integrated navigation system that supports the independent walking of the visually impaired. It provides route guidance and adjustment, zebra-crossing detection and guidance, pedestrian traffic signal detection and discrimination, and localization of the entrance doors of destination buildings. The system was implemented on an Android smartphone. In experiments, our system's detection rate was about 72 % for zebra crossings and about 80 % for traffic signals.

Mitsuki Nishikiri, Takehiro Sakai, Hiroaki Kudo, Tetsuya Matsumoto, Yoshinori Takeuchi, Noboru Ohnishi

Path Planning for a Universal Indoor Navigation System

Much research on indoor navigation systems has been done in recent decades. Most of it considers either people without disabilities or a specific type of disability. In this paper, we propose a new model based on a universal design concept. Our approach employs a novel method for modeling indoor environments and introduces a new optimization criterion: minimizing the arduousness of the path. This criterion is based on the user's profile and the inherent characteristics of amenities that may affect the person's movement. The performance of the proposed methods was tested and validated in a university building through a smartphone application.

Elie Kahale, Pierre-Charles Hanse, Valéria Destin, Gérard Uzan, Jaime Lopez-Krahe

Supporting Pedestrians with Visual Impairment During Road Crossing: A Mobile Application for Traffic Lights Detection

Many traffic lights are still not equipped with acoustic signals. It is possible to recognize the traffic light color from a mobile device, but this requires a technique that is stable under different illumination conditions. This contribution presents TL-recognizer, an application that recognizes traffic lights from a mobile device camera. The proposed solution includes a robust setup for image capture as well as an image processing technique. Experimental results give evidence that the proposed solution is practical.

Sergio Mascetti, Dragan Ahmetovic, Andrea Gerino, Cristian Bernareggi, Mario Busso, Alessandro Rizzi
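A naive version of the color-classification step behind a traffic light recognizer like the TL-recognizer described above can be written as HSV thresholding; the paper's contribution is precisely the robustness under varying illumination that this bare sketch (with assumed threshold values) lacks:

```python
import cv2

def classify_light(roi_bgr):
    """Classify a traffic-light region of interest as red/green by counting
    pixels inside naive HSV threshold ranges (assumed values)."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 on OpenCV's 0-179 hue scale, hence two ranges.
    red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 100, 100), (179, 255, 255))
    green = cv2.inRange(hsv, (45, 100, 100), (90, 255, 255))
    if max(int(red.sum()), int(green.sum())) == 0:
        return "unknown"
    return "red" if red.sum() > green.sum() else "green"
```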

Sound of Vision - Spatial Audio Output and Sonification Approaches

The paper summarizes a number of audio-related studies conducted by the Sound of Vision consortium, which focuses on the construction of a new prototype electronic travel aid for the blind. Different solutions for spatial audio were compared by testing sound localization accuracy in a number of setups, comparing plain stereo panning with generic and individual HRTFs, as well as testing different types of stereo headphones vs. custom-designed quadraphonic proximaural headphones. A number of proposed sonification approaches were tested by sighted and blind volunteers for accuracy and efficiency in representing simple virtual environments.

Michal Bujacz, Karol Kropidlowski, Gabriel Ivanica, Alin Moldoveanu, Charalampos Saitis, Adam Csapo, György Wersenyi, Simone Spagnol, Omar I. Johannesson, Runar Unnthorsson, Mikolai Rotnicki, Piotr Witek

The Use of Mobile Devices by Individuals with Special Needs as an Assistive Tool

Frontmatter

Parents’ and Teachers’ Perspectives on Using IPads with Students with Developmental Disabilities

Applications for Universal Design for Learning

Cumming, Strnadová, and Singh (2014) postulated that the Universal Design for Learning (UDL) framework promotes "access and inclusion through the development of flexible learning environments comprised of multiple means of representation, engagement and expression". UDL therefore provides a suitable theoretical framework for research examining the use of mobile technology by students with developmental disabilities. This paper details the authors' research with students, parents, and teachers and their experiences of using mobile technology (specifically iPads) within the UDL framework. It explores the ways in which iPads were used to support the learning of students with developmental disabilities, in both general and special education settings. Results, implications for practice, and future research are discussed.

Therese M. Cumming, Iva Strnadová

The iPad Helping Preschool Children with Disabilities in Inclusive Classrooms

A review of the literature has revealed that the learning of preschool children with disabilities can be enhanced through play that includes the use of digital technologies. The iPad offers the possibility of exploration in a new way. However, little information exists on how this technology can be utilized effectively. The focus of this study was the use of iPads by preschool children with disabilities participating in an inclusive classroom. The study looked at the learning the children demonstrated across curriculum areas over twenty-one weeks, the apps the children chose to use, and parent and teacher perceptions of the children's use of the iPad.

Linda Chmiliar

Easy Access to Social Media: Introducing the Mediata-App

The Mediata app is a mobile application providing easy access to online platforms and social media for persons with acquired brain injury. Special focus is put on communication with friends and family and the use of mainstream social networks and communication platforms. The main functionality of the application can be used without assistance. In this way Mediata can enable self-determined use of ICT and increase participation and independence of persons with acquired brain injury. This paper reports the findings from two user requirements studies and the resulting design and implementation of the app.

Christian Bühler, Susanne Dirks, Annika Nietzio

A Tool to Improve Visual Attention and the Acquisition of Meaning for Low-Functioning People

Students with autism spectrum disorder (ASD) have difficulties in social interaction and communication, social skills, and the acquisition of knowledge. Individuals affected by low-functioning autism are not able to manage the level of abstraction that the use of language requires. Moreover, in the most severe cases, they have problems recognizing representations of real-world objects. In order to intervene in these aspects, it is necessary to facilitate learning of the processes of meaning acquisition, associating meanings with signifiers at the visual and verbal levels. We have designed a computer-assisted tool called SIGUEME to enhance the development of the perceptive-visual and cognitive-visual processes. We performed a pilot study of the use of SIGUEME by 125 children from Spain, including an evaluation based on pre/post testing. This study suggests significant improvements in children's attention span.

María Luisa Rodríguez Almendros, Marcelino Cabrera Cuevas, Carlos Rodríguez Domínguez, Tomás Ruiz López, María Bermúdez-Edo, María José Rodríguez Fórtiz

Mobility Support for People with Motor and Cognitive Disabilities

Frontmatter

A Mobile Travel Companion Based on Open Accessibility Data

Nowadays, both the quantity and quality of online information about accessibility are improving thanks to the development of Open Data and the use of crowdsourcing. However, citizens with reduced mobility still often have to combine multiple sources of information to prepare their trips and can thus hardly do so on the move. The purpose of this paper is to address this problem by proposing a mobile travel companion. On the backend side, a number of available Open Data sources about public transportation (bus, train, parking) as well as the accessibility of public infrastructure are consolidated from both experts and a crowdsourcing initiative. On the user interface side, it demonstrates how to design an end-to-end view of accessibility information covering both the journey and the visited infrastructure. The interface organisation is compatible with mobile terminals and makes intelligent use of geolocalisation and proximity information. The whole concept is validated on a complete set of data from a major Belgian city.

Christophe Ponsard, Fabrice Estiévenart, Valery Ramon, Alexandre Rosati, Emilie Goffin, Vincent Snoeck, Stéphanie Hermans

Mobility Support for People with Dementia

Mobility support exists for a variety of target groups. Most of these approaches do not consider people with dementia, although mobility is crucial to preserving physical abilities and slowing down the course of the disease. This paper presents an approach that tackles the consequences of dementia on mobility at different levels. The preparation phase builds self-confidence. A smart-device-based solution informs the traveler and checks whether she is in a state of confusion. Stakeholder involvement, such as informal and formal caregivers, and an incremental safety net ensure a subtle and effective resolution of situations of confusion.

Reinhard Koutny, Klaus Miesenberger

Community Engagement Strategies for Crowdsourcing Accessibility Information

Paper, Wheelmap-Tags and Mapillary-Walks

Social innovations are increasingly being seen as a way of compensating for the insufficiencies of both state and market in creating inclusive and accessible environments. In this paper we explore crowdsourcing accessibility information as a form of social innovation, requiring adequate engagement strategies that fit the skills of the intended group of volunteers and ensure the needed levels of data accuracy and reliability. The tools used for crowdsourcing included printed maps, mobile apps for collective tagging, blogs for reflection, and visualizations of changing mapping statuses.

Christian Voigt, Susanne Dobner, Mireia Ferri, Stefan Hahmann, Karsten Gareis

Sharing Real-World Accessibility Conditions Using a Smartphone Application by a Volunteer Group

Although real-world accessibility improvements progress rapidly and affect the routes that people with mild or severe visual or physical impairments take to their destinations, up-to-date accessibility information is difficult to obtain quickly because of delays in opening information for public and local disclosure. It is therefore necessary to develop a comprehensive system that appropriately acquires and arranges scattered accessibility information and then presents it intuitively. However, such systems present volunteers with difficulties when gathering accessibility conditions and arranging them. In this work, our goal is to extract the elements that enable accessibility-sharing applications to collect real-world conditions efficiently. In particular, we developed a smartphone-based application for sharing accessibility conditions and carried out accessibility-information-sharing events in cooperation with a local volunteer group.

Takahiro Miura, Ken-ichiro Yabu, Takeshi Noro, Tomoko Segawa, Kei Kataoka, Akihito Nishimuta, Masaya Sanmonji, Atsushi Hiyama, Michitaka Hirose, Tohru Ifukube

Development and Evaluation of Navigation System with Voice and Vibration Output Specialized for Persons with Higher Brain Dysfunction

Higher brain dysfunction (HBD) is an umbrella term for the aftereffects of conditions such as traumatic brain injury, cerebrovascular disturbance, and encephalitis. Approximately 60 % of persons with HBD lose topographical orientation very easily, which prevents them from walking outdoors without a caregiver. Persons with HBD find existing smartphone navigation applications difficult to master even with extended periods of training; therefore, a smartphone application that persons with HBD can use to facilitate independent walking was developed. The new application is simple and easy to use, and routes can be specified by caregivers. The system outputs messages via dialogs and voice with vibration when attention is necessary, for example when the user has arrived at certain sub-goals or has gone off-route. Experiments were conducted with subjects without a disability and with HBD subjects to evaluate the effectiveness of the voice and vibration functions. The results showed that the subjects felt the system was effective and highly usable. However, the average result from the eye-mark recorder showed that the subjects' gaze was concentrated on the smartphone. A second trial conducted with HBD subjects revealed that the average time spent observing the device, the number of times it was observed, and the percentage of time spent doing so tended to decrease.

Akihiko Hanafusa, Tsuyoshi Nojiri, Tsuyoshi Nakayama

SIMON: Integration of ICT Solutions for Mobility and Parking

Mobility and parking in urban areas are often difficult for people with disabilities. Obstacles include a lack of accessible information on routes, transport alternatives, and parking availability, as well as fraud in the use of the specific services intended for these citizens. The SIMON project aims to improve this situation through the integration of different ICT solutions, including a new model of the European Parking Card for disabled people with contactless technologies to support unique user identification in existing parking areas whilst preserving privacy. SIMON has also developed solutions for mobility, including information, navigation, and access to restricted areas.

Alberto Ferreras, José Solaz, Eva María Muñoz, Manuel Serrano, Antonio Marqués, Amparo López, José Laparra

Towards e-Inclusion for People with Intellectual Disabilities

Frontmatter

Criteria of Barrier-Free Websites for the Vocational Participation of People with Cognitive Disabilities. An Expert Survey Within the Project “Online-Dabei”

The project “Online-Dabei” contributes to the refinement of user-oriented standards for barrier-free websites for people with cognitive disabilities. In this project, the user-oriented standards prescribed by the German federal ordinance on barrier-free information technology (BITV 2.0) are examined with respect to the needs of people with cognitive disabilities. Results of a pilot study indicate that information needs related to the transition from school to work are crucial for the vocational participation of adults with cognitive disabilities. Follow-up research will specify the requirements for web site design that meets these needs using expert interviews.

Elena Brinkmann, Lena Bergs, Marie Heide, Mathilde Niehaus

Easy Reader – or the Importance of Being Understood

With current advancements in technologies such as natural language processing engines and image recognition software, it seems possible to develop a tool that automatically translates content that is too difficult for an individual with cognitive disabilities to understand into an easier-to-understand alternative format. In this paper we describe the idea of creating a flexible, extensible framework that would help people with cognitive disabilities better understand and navigate web content.

Peter Heumader, Cordula Edler, Klaus Miesenberger, Andrea Petz

Potentials of Digital Technology for Participation of Special Needs Children in Kindergarten

This paper presents results from an ethnographic action research study of digital media usage in kindergarten. A case study conducted with a 5-year-old child with cerebral palsy points out the beneficial effects of technological innovations for kindergarten children with special needs. These include not only the intended learning about cause and effect or visual focusing, but also a rise in social interaction among children, extended concentration, reduced boredom, and a decline in unsocial behavior. The paper concludes that efforts to raise media literacy among kindergarten educators are needed to exploit digital media's potential.

Isabel Zorn, Jennifer Justino, Alexandra Schneider, Jennifer Schönenberg

Reordering Symbols: A Step Towards a Symbol to Arabic Text Translator

Graphic symbols can be used as an alternative means of communication. Translating a message composed of symbols into a fluent sentence will enable symbol users to be understood by those who may not be familiar with the use of symbols. Symbol messages may not match the target language in terms of order and syntax. This paper describes an attempt to reorder symbols, based on their labels, to match the target language, namely Modern Standard Arabic. An initial experiment was conducted using an SMT decoder with two n-gram models to reorder words in general; we then discuss its application to symbols. The output was evaluated using BLEU, an automatic evaluation metric used in machine translation. The average score of the output improved over that of the input. Further improvements are suggested and will be carried out in future experiments.

Lama Alzaben, Mike Wald, E. A. Draffan
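BLEU, the metric named in this abstract, scores n-gram overlap between a hypothesis and one or more references. A minimal sketch of the before/after comparison using NLTK, with hypothetical English placeholder tokens standing in for the Arabic symbol labels:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "boy", "eats", "an", "apple"]]  # hypothetical gold order
before = ["boy", "the", "apple", "an", "eats"]       # raw symbol-label order
after = ["the", "boy", "eats", "an", "apple"]        # reordered decoder output

smooth = SmoothingFunction().method1  # short sentences need smoothing
print("input BLEU: ", sentence_bleu(reference, before, smoothing_function=smooth))
print("output BLEU:", sentence_bleu(reference, after, smoothing_function=smooth))
```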

SAMi: An Accessible Web Application Solution for Video Search for People with Intellectual Disabilities

In this paper an accessible web application called SAMi, which uses icons instead of text to perform YouTube video searches, is presented. With this iconic-interaction web application, we aimed to advance universal access on the Web by presenting an alternative, text-free way of searching; to provide a starting point for the definition of an accessible interaction metaphor based on universal design iconography for digital environments; and, ultimately, to contribute to the democratization of Web access for all users, regardless of their degree of literacy. The main results obtained in the user test evaluation were first-rate performance, high satisfaction, and total autonomy in the users' interaction with SAMi.

Tânia Rocha, Hugo Paredes, João Barroso, Maximino Bessa

Target Group Questionnaire in the “ISG for Competence” Project

This paper introduces the "Intelligent Serious Games for Social and Cognitive Competence" project. The aim of these games is to foster creativity in youth with disabilities. The development of interactive mobile games and 3D simulations supports the social integration and personal development of children and youth with disabilities. The project aims to improve the quality and efficiency of education and training. To enhance creativity and innovation, the project uses serious games and 3D simulations; in this way, teaching and learning become interesting, playful, attractive, and efficient.

Szilvia Paxian, Veronika Szücs, Shervin Shirmohammadi, Boris Aberšek, Andrean Lazarov, Karel Van Isacker, Cecilia Sik-Lanyi

The Development and Evaluation of an Assistance System for Manual Order Picking - Called Pick-by-Projection - with Employees with Cognitive Disabilities

The present paper focuses on research into technical support by assistance systems in order picking for people with cognitive disabilities. One of its goals is to present the prototype of an assistance system for manual order picking (called pick-by-projection), which is the result of an interdisciplinary and user-centered process with and for people with cognitive disabilities. Additionally, this paper presents the results of a first evaluation with 24 employees with cognitive disabilities, who tested pick-by-projection in comparison with three current state-of-the-art methods.

Andreas Baechler, Liane Baechler, Sven Autenrieth, Peter Kurtz, Georg Kruell, Thomas Hoerz, Thomas Heidenreich

The Use and Impact of an Assistance System for Supporting Participation in Employment for Individuals with Cognitive Disabilities

The UN Convention on the Rights of Persons with Disabilities implies an increase in participation in employment for individuals with cognitive disabilities. In this process, assistive technology plays an important role. This paper shows how a technical assistance system providing cognitive support can promote participation in the field of employment.

Liane Baechler, Andreas Baechler, Markus Funk, Sven Autenrieth, Georg Kruell, Thomas Hoerz, Thomas Heidenreich

AT and Inclusion of People with Autism or Dyslexia

Frontmatter

Internal Validity in Experiments for Typefaces for People with Dyslexia

A Literature Review

In recent years, designers have claimed to create typefaces that help people with dyslexia, but what evidence supports these claims? We look at studies involving these fonts to see the evidence for or against them. The studies try to be scientific, but lack internal validity; i.e., they do not eliminate the possibility that something else could explain the result. We provide a short summary of the studies and why they do not provide internal validity.

Trenton Schulz

Characterization of Programmers with Dyslexia

Computer programmers with dyslexia can be found in a range of academic and professional settings. For a computer programmer, dyslexia may degrade the expected results during collaborative software development. These programmers may perform better using visual programming languages. However, we need to understand what programmers with dyslexia experience in order to come up with possible solutions. We have conducted an analysis of the existing literature and a survey on dyslexia and programming. This paper reports preliminary results based on the data gathered so far and the key characteristics and needs of this group, with the aim of defining the profile of computer programmers with dyslexia.

José L. Fuertes, Luis F. González, Loïc Martínez

What Technology for Autism Needs to be Invented? Idea Generation from the Autism Community via the ASCmeI.T. App

In autism and technology research, technologies are often developed by researchers targeting specific social and communication difficulties experienced by individuals with autism. In some technology-based projects, children and adults with autism as well as parents, carers, teachers, and other professionals, are involved as users, informers, and (more rarely) as co-designers. However, much less is known about the views of the autism community about the needs they identify as areas that could be addressed through innovative technological solutions. This paper describes the ASCmeI.T. project which encourages members of the autism community to download a free app to answer the question: If there was one new technology to help people with autism, what would it be? This project provides a model of e-participation in which people from the autism community are involved from the start so that new developments in digital technologies can be better matched to support the needs of users.

Sarah Parsons, Nicola Yuill, Judith Good, Mark Brosnan, Lisa Austin, Clarence Singleton, Benoît Bossavit, Barnabear

Using Mind Mapping Software to Initiate Writing and Organizing Ideas for Students with SLD and ADHD

The article summarizes research conducted on the planning functions of postsecondary students with Specific Learning Disabilities (SLD) and/or Attention Deficit Hyperactive Disorder (ADHD). It provides an overview of the students' difficulties in initiating writing tasks due to their disabilities. A model of the planning functions of students with SLD provides insight into the contribution of motivating factors to initiating four planning functions. The review also presents the advantages of using mind mapping software to initiate writing tasks and organize ideas for this population. The relevant academic literature shows that the use of mind mapping software may assist students with SLD and/or ADHD in initiating writing tasks, overcoming their difficulties by better organizing information, and developing learning and cognitive skills.

Betty Shrieber

Data Quality as a Bottleneck in Developing a Social-Serious-Game-Based Multi-modal System for Early Screening for ‘High Functioning’ Cases of Autism Spectrum Condition

Our aim is to explore raw data quality in the first evaluation of the first fully playable prototype of a social-serious-game-based, multi-modal, interactive software system for screening for high functioning cases of autism spectrum condition at kindergarten age. Data were collected from 10 high functioning children with autism spectrum condition and 10 typically developing children. Mouse and eye-tracking data, and data from automated emotional facial expression recognition were analyzed quantitatively. Results show a sub-optimal level of raw data quality and suggest that it is a bottleneck in developing screening/diagnostic/assessment tools based on multi-mode behavioral data.

Miklos Gyori, Zsófia Borsos, Krisztina Stefanik, Judit Csákvári

Interpersonal Distance and Face-to-face Behavior During Therapeutic Activities for Children with ASD

This study proposed a quantitative estimation method for interpersonal distance by using a prototype measurement system. With the aid of motion capture technology and marker caps, we estimated the body position and orientation of children with autism spectrum disorders (ASD) and their therapists. A prototype measurement system was introduced in practicing therapy rooms and captured behavior during ongoing therapy for children with ASD. This study confirmed that approaching behavior and, to a lesser extent, interpersonal distance can be effectively estimated using the proposed motion capture system. Additional system improvements are required to capture face-to-face behavior.

Airi Tsuji, Soichiro Matsuda, Kenji Suzuki

AT and Inclusion of Deaf and Hard of Hearing People

Frontmatter

Support System for Lecture Captioning Using Keyword Detection by Automatic Speech Recognition

We propose a support system for lecture captioning. The system detects the keywords of a lecture and presents them to captionists. The captionists can understand what the instructor said even when they cannot make out the keywords, and can input keywords rapidly by pressing the corresponding function key. The system detects the keywords by automatic speech recognition (ASR). To improve the keyword detection rate, we adapt the ASR language model using web documents. We collected 2,700 web documents, comprising 1.2 million words and 5,800 sentences. We conducted an experiment on detecting the keywords of a real lecture and showed that the system achieves a higher F-measure (0.957) than a base language model (0.871).

Naofumi Ikeda, Yoshinori Takeuchi, Tetsuya Matsumoto, Hiroaki Kudo, Noboru Ohnishi
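The F-measure quoted above is the harmonic mean of precision and recall over detected keywords. A minimal sketch of that computation; the example token sets are illustrative:

```python
def f_measure(detected, reference):
    """F1 of detected keywords against a reference keyword set."""
    detected, reference = set(detected), set(reference)
    tp = len(detected & reference)  # true positives
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(reference) if reference else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# e.g. f_measure({"phoneme", "syntax"}, {"phoneme", "syntax", "corpus"}) == 0.8
```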

Compensating Cocktail Party Noise with Binaural Spatial Segregation on a Novel Device Targeting Partial Hearing Loss

The ability to focus on a single conversation in the middle of a crowded environment is usually referred to as the cocktail party effect. This skill exploits binaural cues and the spectral features of a target speaker. Unfortunately, traditional acoustic prostheses tend to modify these cues in ways that the brain cannot recover. Social isolation is an inevitable consequence. In this work we tested the Glassense, an intelligent pair of glasses. Binaural input from microphone arrays is processed to spatially segregate the soundscape surrounding the listener, so that frontal speech sources are preserved while competing sources from the sides and the back are attenuated, acting as an "acoustic lens". We report an increase in speech intelligibility of about 4 dB, measured as reception threshold, under severely noisy conditions. Our device can be a complementary input to existing acoustic prostheses, aimed at increasing the spatial awareness of persons affected by partial hearing loss.

Luca Giuliani, Sara Sansalone, Stefania Repetto, Federico Traverso, Luca Brayda
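The frontal-preserving spatial segregation described above is, in its simplest textbook form, a delay-and-sum beamformer over the microphone array. The sketch below shows only that baseline technique, not the Glassense processing itself, with the array geometry passed in as an assumed `mic_x` vector:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(signals, mic_x, fs, steer_deg=0.0):
    """Steer a linear mic array towards steer_deg: delay each channel so a
    source at that angle adds coherently while lateral sources do not.
    signals: (n_mics, n_samples) array; mic_x: mic positions along the
    array in meters; fs: sample rate in Hz. Edge wrap-around from np.roll
    is ignored in this sketch."""
    delays = np.asarray(mic_x) * np.sin(np.deg2rad(steer_deg)) / SPEED_OF_SOUND
    shifts = np.round(delays * fs).astype(int)
    out = np.zeros(signals.shape[1])
    for channel, s in zip(signals, shifts):
        out += np.roll(channel, -s)
    return out / len(signals)
```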

CoUnSiL: Collaborative Universe for Remote Interpreting of Sign Language in Higher Education

In this paper, we report on CoUnSiL (Collaborative Universe for Sign Language), our ongoing project on a videoconferencing environment for the remote interpreting of sign language. Our work is motivated by the lack of qualified interpreters capable of interpreting highly specialized courses at universities. We present a tool that ensures low latency and a sustainable frame rate for multiple video and audio streams. We describe the user interface of the CoUnSiL application, followed by the underlying technologies. Finally, we present three evaluations with users.

Vít Rusňák, Pavel Troubil, Svatoslav Ondra, Tomáš Sklenák, Desana Daxnerová, Eva Hladká, Pavel Kajaba, Jaromír Kala, Matej Minárik, Peter Novák, Christoph Damm

Classifiers in Arab Gloss Annotation System for Arabic Sign Language

Deaf people number about 70 million worldwide, 17 million of whom live in the Arab world. This community therefore requires ever more attention from researchers, and specifically from SLMT (Sign Language Machine Translation) researchers, so that its members can exercise their natural right to communicate with other people. In this context, the LaTICE research laboratory of the University of Tunis launched, many years ago, the WebSign project [1], which aims to automatically translate written text into sign language whatever the input language (English, French, Arabic, etc.). WebSign is a web application based on avatar technology (animation in a virtual world). The input of the system is a text in natural language; the output is a real-time, online interpretation in sign language. This interpretation is constructed using a dictionary of words and signs, which can be built incrementally by users who propose signs corresponding to words [2]. Our work, as part of this project, aims to develop a translation module from Arabic text to sign language to be integrated into WebSign. This module offers Arab deaf and hearing people a tool that facilitates their communication. Anyone can use this tool to translate Arabic written text into Arabic Sign Language (ArSL). At this level, it is very useful to define a transcription system for Arabic Sign Language based on Arabic gloss. This intermediate annotation system is a textual representation of sign language that covers the different parameters of a sign with a simplified representation to avoid the complexity of understanding [3].

Nadia Aouiti, Mohamed Jemni
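
The paper's gloss format is not reproduced in the abstract; a hypothetical sketch of a gloss record covering the usual sign parameters (all field names are assumptions, not the project's actual schema) could look like this:

```python
from dataclasses import dataclass

@dataclass
class GlossEntry:
    """Hypothetical gloss record covering the main sign parameters."""
    gloss: str             # word being signed, in conventional capitals
    handshape: str         # dominant-hand configuration
    location: str          # place of articulation
    movement: str          # movement type
    orientation: str = ""  # palm orientation
    non_manual: str = ""   # facial expression, mouthing, ...

sentence = [
    GlossEntry("BOOK", handshape="flat", location="chest", movement="open"),
    GlossEntry("READ", handshape="V", location="neutral", movement="arc"),
]
```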

Image-Based Approach for Generating Facial Expressions

In this paper we present a new approach to automatically generating realistic facial expressions using image-based techniques. The proposed approach combines a deformation method with the Phong illumination model. We use real images of a real person to generate 2D animations, and we created an algorithm that changes the color of a part of the face in order to add realism to the deformation methods. The purpose is to propose methods that improve the process of creating realistic facial animations and to facilitate communication between deaf and hearing-impaired people.

Ibtissem Talbi, Oussama El Ghoul, Mohamed Jemni
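
For reference, the Phong model the abstract invokes computes per-point intensity from ambient, diffuse and specular terms; a minimal sketch (coefficients and light parameters are illustrative) is:

```python
import numpy as np

def phong(normal, light_dir, view_dir, ka, kd, ks, shininess,
          ambient=1.0, diffuse=1.0, specular=1.0):
    """Classic Phong intensity at one surface point."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    r = 2.0 * np.dot(n, l) * n - l          # reflected light vector
    i = ka * ambient                        # ambient term
    i += kd * max(np.dot(n, l), 0.0) * diffuse            # diffuse term
    i += ks * max(np.dot(r, v), 0.0) ** shininess * specular  # specular
    return i
```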

Developing eLecture Materials for Hearing Impaired Students and Researchers: Designing Multiple-Video Programs and Usability Assessment

eLecture materials for deaf and hard-of-hearing scholars with sign language (SL) interpretation inevitably include multiple-video content. A usability assessment of such a program was conducted, contrasting presentations with three and with six media views. The preference of Deaf researchers for SL interpretation over subtitles was confirmed, and the need for different arrangements depending on the needs of users was identified. A prototype system was developed based on the results.

Ritsuko Kikusawa, Motoko Okumoto, Takuya Kubo, Laura Rodrigo

Effect of Vibration on Listening Sound for a Person with Hearing Loss

We examined the effect of tactile stimuli on rhythm discrimination and hearing impression using ABX methods, with a semantic differential scale composed of 10 impression word pairs. Three experimental conditions were used: Audio (A), Audio+Vibration (AV), and Vibration (V). In the rhythm discrimination test, the hearing-loss group performed better in (AV) than in (A), whereas for the group without hearing loss (AV) did not differ significantly from (A). In the impression evaluation, we focused on the “Enjoyable - Not fun” pair to assess the effect of tactile stimuli; for this pair, both the hearing-loss group and the group without hearing loss rated (AV) significantly differently from (A).

Junichi Kanebako, Toshimasa Yamanaka, Miki Namatame

Designing a Collaborative Interaction Experience for a Puppet Show System for Hearing-Impaired Children

In this study we developed a puppet show system for hearing-impaired children. It is difficult for hearing-impaired children to experience a puppet show, in part because the performance is a collaborative interaction experience: such experiences encourage immersive viewing and an empathetic understanding of the characters. This paper aims to design a collaborative interaction function for hearing-impaired children that lets the audience work with the characters, using body motion, to resolve issues in the story. The results of the evaluation experiment show that the collaborative interaction function generally supported an immersive puppet show experience for the audience.

Ryohei Egusa, Tsugunosuke Sakai, Haruya Tamaki, Fusako Kusunoki, Miki Namatame, Hiroshi Mizoguchi, Shigenori Inagaki

SingleScreenFocus for Deaf and Hard of Hearing Students

Deaf and hard of hearing (DHH) students who use sign language interpreters have to watch both the slides and the interpreter simultaneously. It is also difficult for them to scan, pick and watch new information, as the teacher, interpreter and slides tend to be spatially distributed around the room, and they tire more quickly than their hearing peers because they must track the movement of the teacher and interpreter. We developed and evaluated a system that addresses these challenges of multiple, moving, distributed visuals (teacher, slides and interpreter). Our SingleScreenFocus system has two parts: a viewing system, and a tracking and recording system. The tracking and recording system uses an iPad mounted on a tracking device to automatically track and record the teacher, and a second, similar unit to track and record the interpreter. The recordings are automatically streamed to the student’s large viewing screen. Our evaluation indicated that deaf and hard of hearing students prefer this system over a regular view in large classrooms because it reduced their need to split attention between visuals and to search for details within them.

Raja S. Kushalnagar, Poorna Kushalnagar, Fadi Haddad

A Web Application for Geolocalized Signs in Synthesized Swiss German Sign Language

In this paper, we report on the development of a web application that displays Swiss German Sign Language (DSGS) signs for places with train stations in Switzerland in synthesized form, i.e., by means of a signing avatar. Ours is the first platform to make DSGS place name signs accessible in geolocalized form, i.e., by linking them to a map, and to use synthesized signing. The latter mode of display is advantageous over videos of human signers, since place name signs for any sign language are subject to language change. Our web application targets both deaf and hearing DSGS users. The underlying programming code is freely available. The application can be extended to display any kind of geolocalized data in any sign language.

Anna Jancso, Xi Rao, Johannes Graën, Sarah Ebling
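
The abstract links synthesized signs to a map but does not show the data format; one plausible sketch, assuming GeoJSON as the interchange format (the field names and URL are hypothetical), is:

```python
import json

# One hypothetical geolocalized sign entry (coordinates: Zurich main station).
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [8.5402, 47.3782]},  # lon, lat
    "properties": {
        "place": "Zürich",
        "sign_animation": "https://example.org/signs/zuerich.json",  # avatar data
    },
}
collection = {"type": "FeatureCollection", "features": [feature]}
print(json.dumps(collection, ensure_ascii=False, indent=2))
```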

Accessible Computer Input

Frontmatter

Evaluation of a Mobile Head-Tracker Interface for Accessibility

FaceMe is an accessible, vision-based head-tracking interface for users who cannot use standard input methods on mobile devices. We present two user studies that evaluate FaceMe as an alternative to touch input: the first reports performance and satisfaction results for twelve able-bodied participants; the second is a case study with four motor-impaired participants with multiple sclerosis. The operational details of the software are also described.

Maria Francesca Roig-Maimó, Cristina Manresa-Yee, Javier Varona, I. Scott MacKenzie
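
FaceMe's transfer function is not given in the abstract; a minimal relative-mapping sketch from a tracked facial feature to a cursor position (the gain and feature choice are assumptions) might be:

```python
def head_to_cursor(nose_x, nose_y, rest_x, rest_y,
                   gain=8.0, screen_w=1080, screen_h=1920):
    """Map a tracked nose position (camera frame) to a clamped cursor position.

    (rest_x, rest_y) is the neutral head pose; displacement from it is
    amplified by `gain` around the screen center.
    """
    cx = screen_w / 2 + gain * (nose_x - rest_x)
    cy = screen_h / 2 + gain * (nose_y - rest_y)
    return (min(max(cx, 0), screen_w - 1), min(max(cy, 0), screen_h - 1))
```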

Replacement of the Standard Computer Keyboard and Mouse by Eye Blinks

In this work, a system is presented that replaces the standard computer keyboard and mouse with a headband carrying piezoelectric sensors. The novel system allows the user to enter text and select on-screen objects (mouse function) using eye blinks alone. It has been tested by one disabled and five able-bodied volunteers, who together achieved an average typing speed of 9.1 characters per minute (CPM).

Muhammad Bilal Saif, Torsten Felzer
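
The blink-detection algorithm is not described in the abstract; a simple thresholding sketch over the piezo signal, with an assumed refractory period to avoid double triggers, could be:

```python
import numpy as np

def detect_blinks(signal, fs, threshold, refractory=0.3):
    """Return sample indices of blink events in a piezo headband signal.

    A blink is counted when |signal| crosses `threshold`, with at least
    `refractory` seconds between events. The published system's actual
    detection method may differ.
    """
    events, last = [], -np.inf
    for i, v in enumerate(np.abs(signal)):
        if v > threshold and (i - last) / fs > refractory:
            events.append(i)
            last = i
    return events
```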

SWIFT: A Short Word Solution for Fast Typing

In this paper, we study one specific problem of text input techniques based on prediction and deduction lists: the short-word problem. While prediction is fast and easily made effective for long words (e.g. more than 4 characters), short words take longer to type with prediction, because the time spent browsing a list-based interaction slows the user down. The present study compares two approaches: selecting from a prediction list of short words versus tactile exploration (Apple’s native VoiceOver). The results of our comparative study reveal that our technique reduces overall typing time and the error rate by 38 % compared to tactile exploration.

Philippe Roussille, Mathieu Raynal
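
The abstract does not detail how the short-word list is built; a minimal frequency-ranked stand-in (the lexicon and ranking are assumptions) is:

```python
def short_word_list(prefix, lexicon, max_len=4, k=5):
    """Return the k most frequent short words starting with `prefix`.

    `lexicon` maps words to frequencies; a simplified stand-in for the
    paper's prediction list, whose ranking model is not described.
    """
    candidates = [(w, f) for w, f in lexicon.items()
                  if len(w) <= max_len and w.startswith(prefix)]
    return [w for w, _ in sorted(candidates, key=lambda wf: -wf[1])[:k]]

print(short_word_list("t", {"the": 100, "to": 90, "that": 40, "table": 30}))
# -> ['the', 'to', 'that']   ("table" is excluded as a long word)
```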

An Empirical Evaluation of MoonTouch: A Soft Keyboard for Visually Impaired People

This article presents MoonTouch, a new text entry method for visually impaired people. The method uses an enhanced version of the Moon system to help visually impaired people enter text on their touchscreen devices using simple gestures. Clinical tests with a population of visually impaired participants showed that MoonTouch was well received by the users and presented a good learning curve.

Saber Heni, Wajih Abdallah, Dominique Archambault, Gérard Uzan, Mohamed Salim Bouhlel

Zoning-Based Gesture Recognition to Enable a Mobile Lorm Trainer

In this work, a mobile learning tool for the Lorm alphabet is developed. A person who is deaf-blind lorms by finger-spelling on another person’s palm and fingers. We aim to provide an easy-to-use, anywhere-available Lorm trainer for caregivers, companions, and the general public. A robust gesture recognizer for touch-sensitive mobile devices has been developed, based on zoning techniques and the matching of symbol sequences. Tests with three users from the target group were conducted, and a qualitative evaluation by three experts was obtained. Overall, our development received positive feedback, and broad demand for the application was communicated. It promises not only to support students of Lorm in their training, but also to widen the use of Lorm and thereby diminish the social isolation of deaf-blind people.

Michael Schmidt, Cathleen Bank, Gerhard Weber
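
A minimal sketch of the two ingredients the abstract names, zoning and symbol-sequence matching: touch points are quantized into grid zones, and the resulting zone sequence is matched against stored letter templates by edit distance. The grid size and template format are assumptions.

```python
def to_zone(x, y, width, height, cols=3, rows=3):
    """Quantize a touch point into one of cols*rows zone symbols."""
    col = min(int(x / width * cols), cols - 1)
    row = min(int(y / height * rows), rows - 1)
    return row * cols + col

def edit_distance(a, b):
    """Levenshtein distance between two zone-symbol sequences."""
    d = [[i + j if 0 in (i, j) else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def classify(trace, templates):
    """Match a zone sequence against stored templates (e.g. Lorm letters)."""
    return min(templates, key=lambda letter: edit_distance(trace, templates[letter]))
```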

HandiMathKey: Mathematical Keyboard for Disabled Person

Typing mathematical formulas with the usual applications is a tedious task for all of us, and it is especially tiring for people with motor impairments. This paper describes the user-centred methodology used to design the HandiMathKey virtual keyboard, which makes mathematical formulas easier to type. We then present a case study comparing entry times for typing mathematical formulas with HandiMathKey and with Microsoft Word. The study shows that HandiMathKey is easier to use and more efficient.

Elodie Bertrand, Damien Sauzin, Frédéric Vella, Nathalie Dubus, Nadine Vigouroux

A Study of an Intention Communication Assisting System Using Eye Movement

In this paper, we propose a new intention communication assisting system that uses eye movement and solves the problems associated with conventional eye-gaze input. Hands-free input methods that use the behavior of the eyes, including blinking and line of sight, have been used to assist the intention communication of people with severe physical disabilities. In particular, line-of-sight input devices based on eye gaze are used extensively because of their intuitive operation, and they can be used by almost any patient, except those with weak eye movements. However, the eye-gaze method has disadvantages: a certain dwell time is required to confirm each eye-gaze selection, and fixation targets must be displayed during input. To solve these problems, we propose a new line-of-sight input method, the eye glance input method, which enables input in four directions by detecting reciprocating eye movements (eye glances) along the oblique directions. With the proposed method, rapid environmental control is possible using simple measurements. We developed an evaluation system based on electrooculography (EOG) and experimentally evaluated the input accuracy with 10 subjects. The resulting average accuracy of approximately 84.82 % confirms the effectiveness of the proposed method. We also examined the application of the method to actual intention communication assisting systems.

Shogo Matsuno, Yuta Ito, Naoaki Itakura, Tota Mizuno, Kazuyuki Mito
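
A simplified stand-in for the detection step: an eye glance is a quick oblique excursion and return, so the signed peak deflections of the horizontal and vertical EOG channels yield one of four diagonal directions. Thresholds and channel conventions are assumptions, not the paper's parameters.

```python
import numpy as np

def glance_direction(h, v, threshold):
    """Classify an eye glance from horizontal/vertical EOG deflections.

    h, v: one windowed segment per channel, baseline-corrected.
    Returns one of four diagonal directions, or None if the deflection
    is too small to count as a glance.
    """
    hi, vi = np.argmax(np.abs(h)), np.argmax(np.abs(v))
    if abs(h[hi]) < threshold or abs(v[vi]) < threshold:
        return None  # no glance detected
    horiz = "right" if h[hi] > 0 else "left"
    vert = "up" if v[vi] > 0 else "down"
    return f"{vert}-{horiz}"   # e.g. "up-right"
```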

A Review of Computer-Based Gesture Interaction Methods for Supporting Disabled People with Special Needs

Gesture interaction is an emerging field in computer science and engineering, as it allows humans to communicate interactively with machines via numerical linear algebra and other mathematical techniques. In this paper, we discuss various modern state-of-the-art techniques, including the author’s own recent work, for achieving robust, interactive gesture recognition. The paper is divided into three main parts. First, we introduce hand and body gesture recognition for general purposes using computer vision technology, including a fast learning mechanism based on an accurate six-degrees-of-freedom pose tracker, a real-time extended distance transform for the hand model, and a robust integration of support vector machines and superpixels. Second, we review recent gesture interaction methods aimed specifically at helping disabled people with special needs, based on human-computer interaction and sensor technology: the combinatorial approach recognizer (CAR), the hand skeleton recognizer (HSR) and the Viewpoint Feature Histogram (VFH). Third, we discuss the advantages and disadvantages of the aforementioned gesture interaction methods. Understanding these state-of-the-art approaches to computer-based gesture interaction should make interaction with modern recognition technology more convenient, practical and easy for persons with disabilities.

Chutisant Kerdvibulvech

Optimizing Vocabulary Modeling for Dysarthric Speech Recognition

Imperfect articulation in dysarthric speech degrades the performance of speech recognition. In this paper, the effect of the articulatory class of phonemes on dysarthric speech recognition results is analyzed using generalized linear mixed models (GLMMs). The model whose features are categorized according to the manner of articulation and the place of the tongue is selected as the best by this analysis. A recognition accuracy score for each word is then predicted from its pronunciation using the GLMM. A vocabulary optimized by selecting the words with the maximum scores yields a 16.4 % relative error reduction in dysarthric speech recognition.

Minsoo Na, Minhwa Chung
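
A simplified stand-in for the selection step: given per-feature effects on the log-odds of correct recognition (here hypothetical constants standing in for the fitted GLMM coefficients), score each candidate word and keep, for each concept, the synonym with the best predicted accuracy.

```python
import numpy as np

# Hypothetical per-feature effects (log-odds of correct recognition),
# standing in for the paper's fitted GLMM coefficients.
COEF = {"plosive": -0.6, "fricative": -0.9, "nasal": 0.2,
        "front": 0.1, "back": -0.3}
INTERCEPT = 1.5

def word_score(features):
    """Predicted recognition accuracy for a word's phoneme features."""
    logit = INTERCEPT + sum(COEF[f] for f in features)
    return 1.0 / (1.0 + np.exp(-logit))

def optimize_vocabulary(synonyms):
    """Pick, for each concept, the synonym with the best predicted score.

    synonyms: {concept: {word: [feature, ...]}}
    """
    return {concept: max(words, key=lambda w: word_score(words[w]))
            for concept, words in synonyms.items()}
```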

Comparison of Two Methods to Control the Mouse Using a Keypad

This paper presents a user study comparing two methods for keyboard-driven mouse replacement: CKM, an active Conventional Keyboard Mouse, and DualMouse, an innovative keyboard technique allowing stepwise, recursive target acquisition. Both strategies are implemented in the pointing component of OnScreenDualScribe, a comprehensive assistive software system that turns a compact keypad into a universal input device. The study involves eight non-disabled participants and a single user with Friedreich ataxia. The results reveal that CKM yields about 60 % higher throughput than DualMouse; however, the DualMouse technique is preferable for certain specific tasks. Our intention with this research is to gain new insights into OnScreenDualScribe and to inspire future developers of mouse-replacement interfaces for persons with physical disabilities.

Torsten Felzer, Ian Scott MacKenzie, John Magee
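
A sketch of the stepwise, recursive acquisition idea behind DualMouse: each keypad press shrinks the active region to one sub-cell until it is small enough to click. The 3x3 cell geometry is an assumption, not the published layout.

```python
def refine(region, choice, cols=3, rows=3):
    """One recursive step: shrink `region` to the chosen sub-cell.

    region: (x, y, w, h); choice: index 0..cols*rows-1 selected via keypad.
    """
    x, y, w, h = region
    cw, ch = w / cols, h / rows
    return (x + (choice % cols) * cw, y + (choice // cols) * ch, cw, ch)

# Example: three keypad selections on a 1920x1080 screen.
region = (0, 0, 1920, 1080)
for key in (4, 0, 8):          # center, then top-left, then bottom-right
    region = refine(region, key)
print(region)                  # final region is small enough to click
```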

AT and Rehabilitation for People with Motor and Mobility Disabilities

Frontmatter

Kalman-Based Approach to Bladder Volume Estimation for People with Neurogenic Dysfunction of the Urinary Bladder

People with neurogenic dysfunction of the urinary bladder often require daily catheterization because of their impairment. This issue is particularly critical for those who lack the urinary stimulus, since they cannot tell whether the bladder is full. From the user’s point of view, the absence of a conscious urinary stimulus can cause refluxes, damaging the patient’s health and psychological status. Many such patients therefore require professional nursing, increasing staff workload and overall medical costs; moreover, catheterization applied daily over a long period can itself cause urinary tract infections. The authors propose a non-invasive bladder monitoring system based on real-time bioimpedance measurement. A Kalman filter estimates the bladder volume, compensating for the intrinsic uncertainty of the model and removing movement artifacts with the help of an accelerometer that monitors the patient’s activity. Theoretical analysis, in-system measurements and experiments prove the effectiveness of the proposed solution.

Alessandro Palla, Claudio Crema, Luca Fanucci, Paolo Bellagente
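
A minimal one-dimensional Kalman filter sketch for this setting; the noise values are assumptions, and in the paper's scheme the measurement noise would be raised whenever the accelerometer reports movement, so artifact-laden samples are trusted less.

```python
import numpy as np

def kalman_volume(z, q=1.0, r=25.0, v0=50.0, p0=100.0):
    """Track bladder volume from bioimpedance-derived readings.

    z:  sequence of volume estimates derived from bioimpedance [ml]
    q:  process noise variance (slow, steady filling)  -- assumed
    r:  measurement noise variance (artifacts, model error) -- assumed
    """
    v, p, out = v0, p0, []
    for zi in z:
        p += q                      # predict: volume drifts slowly
        k = p / (p + r)             # Kalman gain
        v += k * (zi - v)           # update with the new measurement
        p *= (1 - k)
        out.append(v)
    return np.array(out)
```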

Sound Feedback Assessment for Upper Limb Rehabilitation Using a Multimodal Guidance System

This paper describes the implementation of a Multimodal Guidance System (MGS) for upper limb rehabilitation through vision, haptics and sound. The system consists of a haptic device that physically renders virtual paths of 2D shapes through a point-based approach, while sound technology provides audio feedback about the patient’s actions during a manual task, for example when starting and/or finishing a sketch, or through different sounds related to the hand’s velocity while sketching. The goal of this sonification approach is to strengthen the patient’s understanding of the virtual shape used in the rehabilitation process, and to inform the patient about attributes that could otherwise remain unnoticed. Our results provide conclusive evidence that using sound as additional feedback increases accuracy in the task operations.

Mario Covarrubias Rodriguez, Mauro Rossini, Giandomenico Caruso, Gianluca Samali, Chiara Giovanzana, Franco Molteni, Monica Bordegoni
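
One plausible sketch of the velocity sonification the abstract mentions, assuming a simple linear mapping from hand speed to tone frequency (the ranges are illustrative; the MGS mapping itself is not specified):

```python
def velocity_to_pitch(speed, v_min=0.0, v_max=0.5,
                      f_min=220.0, f_max=880.0):
    """Map hand speed [m/s] to a tone frequency [Hz], clamped to range."""
    s = min(max(speed, v_min), v_max)
    return f_min + (s - v_min) / (v_max - v_min) * (f_max - f_min)
```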

Personal Mobility Vehicle for Assisting Short-Range Transportation

A personal mobility vehicle (PMV) for people with limited mobility has been developed. The PMV is propelled by kicking off the ground with a foot, while an electric motor assists the gliding. It aims to support short-distance transportation in urban areas, e.g. moving from home to a train station. A folding mechanism makes it possible to carry the PMV on public transport and will help to extend the area of the user’s activities. This paper presents an overview of the developed PMV, a simulation model, and its validation against the results of preliminary experiments.

Yoshiyuki Takahashi

Multimodal Sequential Modeling and Recognition of Human Activities

Video-based recognition of activities of daily living (ADLs) is used in ambient assisted living systems to support the independent living of older people. In this work, we propose a new multimodal ADL recognition method that models the correlation between motion and object information. We encode motion using dense interest point trajectories, which are robust to occlusion and speed variability. We formulate the learning problem using a two-layer recognition model combining an SVM with a hidden conditional random field (HCRF), which is particularly well suited to multimodal sequence recognition: this hierarchical classifier optimally combines the discriminative power of the SVM with the HCRF’s modeling of long-range feature dependencies.

Mouna Selmi, Mounîm A. El-Yacoubi

Android Games for Developing Fine Coordination of Movement Skills

This paper introduces two serious games, “Cars Racing” and “Labyrinth”. These games are planned for use and testing within the “Intelligent Serious Games for Social and Cognitive Competence” project. Their aim is to foster creativity in youth with disabilities.

Tibor Guzsvinecz, Veronika Szücs, Szilvia Paxian, Cecilia Sik Lanyi

HCI, AT and ICT for Blind and Partially Sighted People

Frontmatter

Extending the Technology Enhanced Accessible Interaction Framework Method for Thai Visually Impaired People

This paper focuses on extending the Technology Enhanced Accessible Interaction Framework Method to visual impairment, based on interviews with people with visual impairment, so that developers can create accessible technology solutions that support people with visual impairment in interacting with people, technologies and objects.

Kewalin Angkananon, Mike Wald

Types of Problems Elicited by Verbal Protocols for Blind and Sighted Participants

Verbal protocols are often used in user-based studies of interactive technologies. This study investigated whether different types of problems are revealed by concurrent and retrospective verbal protocols (CVP and RVP) for blind and sighted participants. Eight blind and eight sighted participants undertook both CVP and RVP on four websites. Overall, interactivity problems were significantly more frequent in comparison to content or information architecture problems. In addition, RVP revealed significantly more interactivity problems than CVP for both user groups. Finally, blind participants encountered significantly more interactivity problems than sighted participants. The findings have implications for which protocol is appropriate, depending on the purpose of a particular study and the user groups involved.

Andreas Savva, Helen Petrie, Christopher Power

Multimodal Attention Stimulator

A multimodal attention stimulator was proposed and tested for improving auditory and visual attention, including in pupils with developmental dyslexia. The results of the experiments showed that the designed stimulator can be used to improve comprehension during reading tasks; the changes in visual attention observed in the reading test results translate into overall reading performance.

Andrzej Czyzewski, Bozena Kostek, Lukasz Kosikowski

Assessing Braille Input Efficiency on Mobile Devices

Our team conducted research on how well today’s Braille input methods suit the needs of blind smartphone users. Hungarian blind volunteers (all active Braille users) were invited to participate. The research consisted of a survey on the participants’ relation to Braille and a series of input tests based on short Hungarian and multilingual texts, in both grade 1 and grade 2 Braille, using different devices and methods. The results showed that experienced Braille users can achieve remarkably high speed and accuracy, and that the use of contracted Braille further increases input efficiency. This paper also discusses the characteristics of the typos occurring during Braille input on mobile devices and their manual or automated correction. Adding adequate automated correction mechanisms optimized for Braille typos may increase input speed further, nearing or even surpassing the speed of sighted people using ordinary on-screen input methods.

Norbert Márkus, Szabolcs Malik, András Arató
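
For context, touchscreen Braille input typically maps a multi-touch chord to a dot pattern and then to a character; a minimal sketch using the standard 6-dot patterns for the first decade of letters (the chord representation is an assumption):

```python
# Standard 6-dot Braille patterns for a-j; the remaining letters repeat
# these shapes with dots 3 and/or 6 added.
BRAILLE = {
    frozenset({1}): "a",          frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",       frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",       frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g", frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",       frozenset({2, 4, 5}): "j",
}

def decode(taps):
    """Translate a multi-touch chord (set of dot numbers 1-6) to a letter."""
    return BRAILLE.get(frozenset(taps), "?")

print(decode({1, 2, 5}))  # -> "h"
```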

Smart Glasses for the Visually Impaired People

People with visual impairment face various problems in their daily life, as modern assistive devices often do not meet consumer requirements in terms of price and level of assistance. This paper presents a new design of assistive smart glasses for visually impaired students. The objective is to assist in multiple daily tasks by exploiting the wearable design format. As a proof of concept, this paper presents one example application: text recognition that can help with reading from hardcopy materials. The build cost is kept low by using a Raspberry Pi 2 single-board computer as the processing core and the Raspberry Pi camera for image capture. Experimental results demonstrate that the prototype works as intended.

Esra Ali Hassan, Tong Boon Tang
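
The abstract does not name the OCR engine the prototype uses; a minimal illustrative sketch with Tesseract (via the pytesseract wrapper) is:

```python
# pip install pytesseract pillow; requires the tesseract binary installed.
from PIL import Image
import pytesseract

def recognize_text(image_path):
    """Return the text found in one captured frame."""
    return pytesseract.image_to_string(Image.open(image_path)).strip()

# On the glasses, the frame would come from the Raspberry Pi camera,
# and the resulting string would be handed to a text-to-speech engine.
```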

EasyTrans: Accessible Translation System for Blind Translators

This paper presents the design and implementation of EasyTrans, an accessible translation web application for blind translators (BTs). EasyTrans runs entirely on a web server and utilizes many web services, allowing BT users to perform their translation tasks online and relieving them from installing any software. EasyTrans has a simple and intuitive user interface with several dictionaries to support BTs in their translation tasks. A usability evaluation of EasyTrans showed that BTs were satisfied with its performance; they also provided further suggestions for enhancement.

Dina Al-Bassam, Hessah Alotaibi, Samira Alotaibi, Hend S. Al-Khalifa

An Accessible Environment to Integrate Blind Participants into Brainstorming Sessions

User Studies

This paper presents user studies of a system that supports blind people in taking part in co-located brainstorming meetings. To support blind people, the visual information exchange has to be made accessible to them. This exchange takes place in two ways: (a) through artefacts that hold and share visual information (e.g. text on blackboards, the content of mind-map nodes), and (b) through non-verbal communication (e.g. nodding to agree with someone’s arguments, or pointing to highlight an important artefact). The presented prototype uses a Leap Motion to detect pointing gestures as a representative example of non-verbal communication, while a mind-map is used for the artefact layer. A so-called “blind user interface” serializes the star structure of this mind-map and allows the blind user to access it through a regular screen reader.

Stephan Pölzer, Andreas Kunz, Ali Alavi, Klaus Miesenberger
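
A minimal sketch of the serialization step the abstract describes: flattening a star/tree-structured mind-map into one line per node so a regular screen reader can step through it linearly. The node schema is an assumption.

```python
def serialize(node, depth=0):
    """Flatten a mind-map into indented lines for a screen reader.

    node: {"text": str, "children": [node, ...]} -- an assumed schema.
    """
    yield f"{'  ' * depth}{node['text']}"
    for child in node.get("children", []):
        yield from serialize(child, depth + 1)

mindmap = {"text": "Brainstorming topic", "children": [
    {"text": "Idea A", "children": [{"text": "Detail A1"}]},
    {"text": "Idea B"},
]}
print("\n".join(serialize(mindmap)))
```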

Elements of Adaptation in Ambient User Interfaces

In the “Design4All” project, a hardware and software architecture is under development for implementing adaptable and adaptive applications that support all people in leading an independent life at home. This paper discusses the problems of interacting with applications implemented on the Android platform, which was chosen for the interaction experiments, with emphasis on two main aspects: (i) the use of the accessibility facilities available in the most commonly used operating systems (mainstreaming), and (ii) the portability of solutions across different platforms.

Laura Burzagli, Fabio Gori, Paolo Baronti, Marco Billi, Pier Luigi Emiliani

Backmatter
