
2012 | Book

Computers Helping People with Special Needs

13th International Conference, ICCHP 2012, Linz, Austria, July 11-13, 2012, Proceedings, Part II

Edited by: Klaus Miesenberger, Arthur Karshmer, Petr Penaz, Wolfgang Zagler

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 7382 and 7383 constitutes the refereed proceedings of the 13th International Conference on Computers Helping People with Special Needs, ICCHP 2012, held in Linz, Austria, in July 2012. The 147 revised full papers and 42 short papers were carefully reviewed and selected from 364 submissions. The papers included in the second volume are organized in the following topical sections: portable and mobile systems in assistive technology; assistive technology, HCI and rehabilitation; sign 2.0: ICT for sign language users: information sharing, interoperability, user-centered design and collaboration; computer-assisted augmentative and alternative communication; easy to Web between science of education, information design and speech technology; smart and assistive environments: ambient assisted living; text entry for accessible computing; tactile graphics and models for blind people and recognition of shapes by touch; mobility for blind and partially sighted people; and human-computer interaction for blind and partially sighted people.

Table of contents

Frontmatter

Portable and Mobile Systems in Assistive Technology

A Multimodal Approach to Accessible Web Content on Smartphones

Mainstream smartphones can now be used to implement efficient speech-based and multimodal interfaces. The current status and continued development of mobile technologies open up possibilities of interface design for smartphones that were unattainable only a few years ago. Better and more intuitive multimodal interfaces for smartphones can provide access to information and services on the Internet through mobile devices, thus enabling users with different abilities to access this information at any place and at any time. In this paper we present our current work in the area of multimodal interfaces on smartphones. We have implemented a multimodal framework and used it as the foundation for development of a prototype, which has been used in a user test. There are two main contributions: 1) how we have implemented W3C’s multimodal interaction framework on smartphones running the Android OS, and 2) the results from user tests and interviews with blind and visually impaired users.

Lars Emil Knudsen, Harald Holone
Mobile Vision as Assistive Technology for the Blind: An Experimental Study

Mobile computer vision is often advocated as a promising technology to support blind people in their daily activities. However, there is as yet very little experience with mobile vision systems operated by blind users. This contribution provides an experimental analysis of a sign-based wayfinding system that uses a camera cell phone to detect specific color markers. The results of our experiments may be used to inform the design of technology that facilitates environment exploration without sight.

Roberto Manduchi
Camera-Based Signage Detection and Recognition for Blind Persons

Signage plays an important role in wayfinding and navigation to assist blind people in accessing unfamiliar environments. In this paper, we present a novel camera-based approach to automatically detect and recognize restroom signage in surrounding environments. Our method first extracts the attended areas which may contain signage, based on shape detection. Then, the Scale-Invariant Feature Transform (SIFT) is applied to extract local features in the detected attended areas. Finally, signage is detected and recognized as the regions with SIFT matching scores larger than a threshold. The proposed method can handle detection of multiple signs. Experimental results on our collected restroom signage dataset demonstrate the effectiveness and efficiency of the proposed method.

Shuihua Wang, Yingli Tian
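
As an editorial illustration of the matching step this abstract describes, here is a minimal Python sketch using OpenCV's SIFT implementation. The shape-based attended-area detector is omitted, and the function names and the match-count threshold are assumptions, not the authors' code.

```python
# Minimal sketch of SIFT-based signage matching with OpenCV (assumed
# workflow; the paper's attended-area shape detector is not reproduced).
import cv2

def match_signage(scene_path, template_path, min_matches=20):
    """Return True if the signage template appears in the scene region."""
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(template, None)
    kp2, des2 = sift.detectAndCompute(scene, None)
    if des1 is None or des2 is None:
        return False

    # Lowe's ratio test keeps only distinctive correspondences.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    # The abstract thresholds a matching score; here the "score" is simply
    # the number of good matches (an assumption for illustration).
    return len(good) >= min_matches
```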
The Crosswatch Traffic Intersection Analyzer: A Roadmap for the Future

The “Crosswatch” project is a smartphone-based system developed by the authors for providing guidance to blind and visually impaired pedestrians at traffic intersections. Building on past work on Crosswatch functionality to help the user achieve proper alignment with the crosswalk and read the status of Walk lights to know when it is time to cross, we outline the direction Crosswatch should take to help realize its potential for becoming a practical system: namely, augmenting computer vision with other information sources, including geographic information systems (GIS) and sensor data, to provide a much larger range of information about traffic intersections to the pedestrian.

James M. Coughlan, Huiying Shen
GPS and Inertial Measurement Unit (IMU) as a Navigation System for the Visually Impaired

Current GPS (SiRF 3) devices do not give the right heading when their speed is less than 10 km/h. The heading is also less reliable when the GPS is used in big cities, where the receiver is surrounded by buildings. Another important problem is that a change of orientation of the visually impaired user takes a long time to be detected by the GPS, because the GPS must reach a certain speed to obtain the new heading. This can take from 2 to 15 seconds depending on GPS signal conditions. To avoid these problems, we have proposed the use of a GPS coupled to an IMU (inertial measurement unit). The IMU has a 3-axis compass, a one-axis gyroscope and a 3-axis accelerometer. With this system, we can update the heading information every second. The user interface runs on a smartphone, which gives the heading and the distance to the destination. In this paper, we also describe the advantages of using the heading and distance to the final destination, updated every second, to navigate in cities.

Jesus Zegarra, René Farcy
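
A minimal sketch of how a compass and a gyroscope can be fused to update a heading every second, in the spirit of the GPS+IMU approach above. The complementary-filter gain and all names are illustrative assumptions rather than the authors' implementation.

```python
# Complementary filter: the gyroscope yaw rate is smooth but drifts, the
# compass is noisy but absolute; blending the two gives a stable heading.
def fuse_heading(prev_heading_deg, gyro_rate_dps, compass_deg,
                 dt=1.0, alpha=0.98):
    """Return a new heading estimate in degrees, updated every dt seconds."""
    gyro_heading = prev_heading_deg + gyro_rate_dps * dt

    # Take the shortest angular difference so 359 deg -> 1 deg does not jump.
    diff = (compass_deg - gyro_heading + 180.0) % 360.0 - 180.0
    return (gyro_heading + (1.0 - alpha) * diff) % 360.0

# Example: heading updated once per second from hypothetical IMU samples.
heading = 90.0
for gyro_rate, compass in [(2.0, 93.1), (1.5, 94.8), (0.0, 95.2)]:
    heading = fuse_heading(heading, gyro_rate, compass)
```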
Visual Nouns for Indoor/Outdoor Navigation

We propose a local orientation and navigation framework based on visual features that provide location recognition, context augmentation, and viewer localization information to a human user. Mosaics are used to map local areas to ease user navigation through streets and hallways, by providing a wider field of view (FOV) and the inclusion of more decisive features. Within the mosaics, we extract “visual noun” features. We consider three types of visual noun features: signage, visual text, and visual icons, which we propose as a low-cost method for augmenting environments.

Edgardo Molina, Zhigang Zhu, Yingli Tian
Towards a Real-Time System for Finding and Reading Signs for Visually Impaired Users

Printed text is a ubiquitous form of information that is inaccessible to many blind and visually impaired people unless it is represented in a non-visual form such as Braille. OCR (optical character recognition) systems have been used by blind and visually impaired persons for some time to read documents such as books and bills; recently this technology has been packaged in portable devices such as the smartphone-based kReader Mobile (from KNFB Reading Technology, Inc.), which allows the user to photograph a document such as a restaurant menu and hear the text read aloud. However, while this kind of OCR system is useful for reading documents at close range (though it may still require the user to take a few photographs, waiting a few seconds each time to hear the results, before getting one that is correctly centered), it is not intended for signs. (Indeed, the KNFB manual, see knfbreader.com/upgrades_mobile.php , lists “posted signs such as signs on transit vehicles and signs in shop windows” in the “What the Reader Cannot Do” subsection.) Signs provide valuable location-specific information that is useful for wayfinding, but they are usually viewed from a distance and are difficult or impossible to find without adequate vision and rapid feedback.

We describe a prototype smartphone system that finds printed text in cluttered scenes, segments out the text from video images acquired by the smartphone for processing by OCR, and reads aloud the text recognized by OCR using TTS (text-to-speech). Our system detects and reads aloud text from video images, and thereby provides real-time feedback (in contrast with systems such as the kReader Mobile) that helps the user find text with minimal prior knowledge about its location. We have designed a novel audio-tactile user interface that helps the user hold the smartphone level and assists him/her with locating any text of interest and approaching it, if necessary, for a clearer image. Preliminary experiments with two blind users demonstrate the feasibility of the approach, which represents the first real-time sign reading system we are aware of that has been expressly designed for blind and visually impaired users.

Huiying Shen, James M. Coughlan
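
A rough sketch of the detect-recognize-speak loop the abstract outlines, using the off-the-shelf pytesseract and pyttsx3 packages as desktop stand-ins for the authors' on-device pipeline (an assumption; the original uses its own real-time text detector and runs on a smartphone).

```python
# Video frame -> OCR -> speech, looped for continuous real-time feedback.
import cv2
import pytesseract
import pyttsx3

engine = pyttsx3.init()          # text-to-speech engine
capture = cv2.VideoCapture(0)    # video frames, as in the described system

while True:                      # stop with Ctrl+C in this simple sketch
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray).strip()
    if text:
        engine.say(text)         # read the recognized text aloud
        engine.runAndWait()

capture.release()
```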
User Requirements for Camera-Based Mobile Applications on Touch Screen Devices for Blind People

This paper presents user requirements for camera-based mobile applications on touch screen devices for blind people. We conducted usability testing of a color reading application on Android OS. In the testing, participants were asked to evaluate three different types of interfaces of the application in order to identify user requirements and preferences. The results of the usability testing showed that (1) users preferred a short depth of menu hierarchy, (2) users needed both manual and automatic camera shooting modes, although they preferred manual to automatic mode, (3) the initial audio help was more useful for users than in-time help, (4) users wanted the OS-supported screen reader function to be turned off during color reading, and (5) users required tactile feedback to identify the touch screen boundary.

Yoonjung Choi, Ki-Hyung Hong
A Route Planner Interpretation Service for Hard of Hearing People

The advancement of technology over the past fifteen years has opened many new doors to make our daily life easier. Nowadays, smartphones provide many services such as everywhere access to social networks, video communication through 3G networks and GPS (global positioning system) services. For instance, using GPS technology and Google Maps services, a user can plan a route for traveling by foot, car, bike or public transport. Google Maps is based on KML, which contains textual information describing street and place names, and this is not accessible to persons with special needs such as hard of hearing people. Hearing-impaired persons have very specific needs related to the process of learning and understanding any written language; consequently, this service is not accessible to them. In this paper we propose a new approach that makes KML information accessible on Android mobile devices. We rely on cloud computing and virtual agent technology subtitled with SignWriting to automatically interpret textual information on the map according to the user’s current position.

Mehrez Boulares, Mohamed Jemni
Translating Floor Plans into Directions

Project Mobility supports blind and low-vision people in exploring and wayfinding indoors. Facility operators are enabled to annotate floor plans to provide accessible content. An accessible smartphone app has been developed for presenting spatial information and directions on the go, based on the user’s position. This paper describes some of the main goals and features of the system and the results of first user tests we conducted at a large airport.

Martin Spindler, Michael Weber, Denise Prescher, Mei Miao, Gerhard Weber, Georgios Ioannidis
Harnessing Wireless Technologies for Campus Navigation by Blind Students and Visitors

Navigating around a university campus can be difficult for visitors and incoming students/staff, and is a particular challenge for vision-impaired students and staff. University College Cork (UCC), like most other universities and similar institutions worldwide, relies mainly on signposts and maps (available from the college website) to direct students and visitors around campus. However, these are not appropriate for vision-impaired users. UCC’s Disability Support Service provides mobility training to enable blind and vision-impaired students and staff to safely and independently navigate around the campus. This training is time-consuming for all parties and is costly to provide. It is also route-specific: for example, if a blind student who has already received mobility training is required to attend lectures in a building they have not previously visited, they may require further training on the new route. It is not feasible to provide this kind of training for blind/visually-impaired visitors. A potential solution to these problems is to provide navigation data using wireless and mobile technology. Ideally this should be done using technologies that are (or will shortly be) widely supported on smartphones, thus ensuring that the system is accessible to one-time visitors as well as regular users.

A study was conducted in order to identify user-requirements. It was concluded that there is no off-the-shelf system that fully meets UCC’s requirements. Most of the candidates fall short either in terms of the accuracy or reliability of the localization information provided, ability to operate both indoors and outdoors, or in the nature of the feedback provided. In the light of these findings, a prototype system has been developed for use on the UCC campus. This paper describes the development of the system and ongoing user-testing to assess the viability of the interface for use by vision-impaired people.

Tracey J. Mehigan, Ian Pitt
Eyesight Sharing in Blind Grocery Shopping: Remote P2P Caregiving through Cloud Computing

Product recognition continues to be a major access barrier for visually impaired (VI) and blind individuals in modern supermarkets. R&D approaches to this problem in the assistive technology (AT) literature vary from automated vision-based solutions to crowdsourcing applications where VI clients send image identification requests to web services. The former struggle with run-time failures and scalability while the latter must cope with concerns about trust, privacy, and quality of service. In this paper, we investigate a mobile cloud computing framework for remote caregiving that may help VI and blind clients with product recognition in supermarkets. This framework emphasizes remote teleassistance and assumes that clients work with dedicated caregivers (helpers). Clients tap on their smartphones’ touchscreens to send images of products they examine to the cloud, where the SURF algorithm matches incoming images against its image database. Images along with the names of the top 5 matches are sent to remote sighted helpers via push notification services. A helper confirms the product’s name, if it is in the top 5 matches, or speaks or types the product’s name, if it is not. Basic quality of service is ensured through human eyesight sharing even when image matching does not work well. We implemented this framework in a module called EyeShare on two Android 2.3.3/2.3.6 smartphones. EyeShare was tested in three experiments with one blindfolded subject: one lab study and two experiments in Fresh Market, a supermarket in Logan, Utah. The results of our experiments show that the proposed framework may be used as a product identification solution in supermarkets.

Vladimir Kulyukin, Tanwir Zaman, Abhishek Andhavarapu, Aliasgar Kutiyanawala
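
A sketch of the top-5 ranking step in the EyeShare workflow described above. ORB is used here as a freely available stand-in for SURF, and the database layout and names are illustrative assumptions.

```python
# Rank database products by feature-match count against the query image.
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def top_matches(query_img, database, k=5):
    """database: list of (product_name, descriptors). Returns best k names."""
    _, q_des = orb.detectAndCompute(query_img, None)
    scores = []
    for name, des in database:
        if q_des is None or des is None:
            continue
        matches = bf.match(q_des, des)
        scores.append((len(matches), name))   # more matches = better
    return [name for _, name in sorted(scores, reverse=True)[:k]]
```

The top-k list, rather than a single best match, is what makes the human-in-the-loop confirmation step cheap: the helper usually only has to pick a name rather than type one.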
Assessment Test Framework for Collecting and Evaluating Fall-Related Data Using Mobile Devices

With an increasing population of older people, the number of falls and fall-related injuries is on the rise. This will cause changes for future health care systems, and fall prevention and fall detection will pose a major challenge. Given the multimodal character of fall-related parameters, the development of adequate strategies for fall prevention and detection is very complex. Therefore, it is necessary to collect and analyze fall-related data.

This paper describes the development of a test framework to perform a variety of assessment tests to collect fall-related data. The aim of the framework is to easily set up assessment tests and analyze the data regarding fall-related behaviors. It offers an open interface to support a variety of devices. The framework consists of a Web service, a relational database and a Web-based backend. In order to test the framework, a mobile device client recording accelerometer and gyroscope sensor data is implemented on the iOS platform. The evaluation, which includes three mobility assessment tests, demonstrates the sensor accuracy for movement analysis for further feature extraction.

Stefan Almer, Josef Kolbitsch, Johannes Oberzaucher, Martin Ebner
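
A sketch of what the framework's open device interface might look like from the client side: a mobile client posting accelerometer/gyroscope samples to the Web service. The endpoint URL and payload fields are hypothetical, since the abstract does not specify the API (and the original client runs on iOS).

```python
# Hypothetical client upload of motion-sensor samples as JSON over HTTP.
import json
import time
import urllib.request

def post_samples(samples, url="http://example.org/assessment/upload"):
    """samples: list of dicts, e.g. {"ax": .., "ay": .., "az": .., "gx": ..}"""
    payload = json.dumps({
        "device": "demo-client",        # assumed field names
        "timestamp": time.time(),
        "samples": samples,
    }).encode("utf-8")
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status              # e.g. 200 on success
```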
NAVCOM – WLAN Communication between Public Transport Vehicles and Smart Phones to Support Visually Impaired and Blind People

Visually impaired and blind people want to move and travel on their own, but they depend on public transport systems, which is sometimes challenging. Typical problems are finding the right vehicle, signalling the wish to enter or leave the vehicle, and getting information about upcoming stations. To solve these problems, very specialized equipment has been developed in the past. In this paper we show a solution based on standard WLAN components and a standard smartphone that might solve these problems and, hopefully, raise the quality of life of people with special needs.

Werner Bischof, Elmar Krajnc, Markus Dornhofer, Michael Ulm
Mobile-Type Remote Captioning System for Deaf or Hard-of-Hearing People and the Experience of Remote Supports after the Great East Japan Earthquake

The Mobile-type Remote Captioning System we propose realizes advanced support for sightseeing tours with a spoken guide and for practical field trips outside of class. Our system utilizes the mobile phone network provided by Japanese mobile phone carriers, with monthly flat-rate voice call and data transfer services. By using these services, deaf or hard-of-hearing students can use real-time captioning while walking. On March 11, 2011, the Great East Japan Earthquake shook Japan. After the quake, there was a great lack of volunteer students (captionists) inside the affected areas. Universities outside the affected areas provided remote support to cover the volunteer work. The system reported in this paper was used to realize such remote support.

Shigeki Miyoshi, Sumihiro Kawano, Mayumi Shirasawa, Kyoko Isoda, Michiko Hasuike, Masayuki Kobayashi, Midori Umehara
Handheld “App” Offering Visual Support to Students with Autism Spectrum Disorders (ASDs)

iPrompts® is a software application for handheld devices that provides visual support to individuals with Autism Spectrum Disorders (ASDs). Caregivers use the application to create and present visual schedules, visual countdown timers, and visual choices, to help individuals with ASDs stay organized, understand upcoming events, and identify preferences. The developer of the application, HandHold Adaptive, LLC, initially introduced iPrompts on the iPhone and iPod Touch in May 2009. The research team from the Center of Excellence on Autism Spectrum Disorders at Southern Connecticut State University conducted a study of iPrompts in 2010, investigating its use by educators working with students with ASDs. Among other findings, educators indicated a desire to present visual supports on a larger, “tablet”-sized display screen, leading the developer to produce an iPad-specific product, iPrompts® XL. Described in this paper are the research effort on iPrompts and the subsequent development effort on iPrompts XL.

Bogdan Zamfir, Robert Tedesco, Brian Reichow
Cloud-Based Assistive Speech-Transcription Services

Real-time speech transcription is a service with a potentially tremendous positive impact on the quality of life of the hearing-impaired. Recent advances in the technologies of mobile networks, cloud services, speech transcription and mobile clients allowed us to build eScribe, a ubiquitously available, cloud-based speech-transcription service. We present the deployed system, evaluate the applicability of automated speech recognition using real measurements, and outline a vision of a future enhanced platform crowdsourcing human transcribers in social networks.

Zdenek Bumbalek, Jan Zelenka, Lukas Kencl
Developing a Voice User Interface with Improved Usability for People with Dysarthria

This paper describes the development of a voice user interface (VUI) for Korean users with dysarthria. The development process, from target application decisions to prototype system evaluation, focuses on improving the usability of the interface by reflecting user needs. The first step of development is to decide the target VUI application and its functions. 25 dysarthric participants (5 middle school students and 20 adults) were asked to list the devices they want to use with a VUI and the purposes they would use VUI devices for. From this user study, SMS sending, web searching and voice dialing on mobile phones and tablet PCs were chosen as the target application and its functions. The second step is to design the system of the target application in order to improve usability. 120 people with dysarthria were asked to state the main problems of currently available VUI devices, and it was found that speech recognition failure (88%) is the main problem. This result indicates that a high speech recognition rate will improve usability. Therefore, to improve the recognition rate, a system based on isolated word recognition, with a customizable command list and a built-in word prediction function, is designed for the target VUI devices. The final step is to develop and evaluate a prototype system. In this study, a prototype is developed for Apple iOS and Android platform devices, and the system design is then modified based on the evaluation results of 5 dysarthric evaluators.

Yumi Hwang, Daejin Shin, Chang-Yeal Yang, Seung-Yeun Lee, Jin Kim, Byunggoo Kong, Jio Chung, Sunhee Kim, Minhwa Chung
Wearable Range-Vibrotactile Field: Design and Evaluation

Touch is one of the most natural methods of navigation available to the blind. In this paper, we propose a method to enhance a person’s use of touch by placing range sensors coupled with vibrators on their body. This would allow them to feel objects and obstacles in close proximity without having to physically touch them. In order to make effective use of this vibrotactile approach, it is necessary to discern the perceptual abilities of a person wearing small vibrators on different parts of the body. To do this, we designed a shirt with small vibrators placed on the wrists, elbows, and shoulders, and ran an efficient staircase PEST algorithm to determine sensitivities on those parts of the body.

Frank G. Palmer, Zhigang Zhu, Tony Ro
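
For readers unfamiliar with staircase procedures, here is a bare-bones adaptive staircase of the family PEST belongs to: the stimulus intensity homes in on the detection threshold, halving the step size at each response reversal. The actual PEST rules are more elaborate, so this is an illustration of the idea only, with all names assumed.

```python
# Simple up-down staircase: descend while the vibration is felt, ascend
# while it is not, and halve the step on each reversal until it is small.
def staircase(respond, level=1.0, step=0.5, min_step=0.05):
    """respond(level) -> True if the subject felt the vibration at level."""
    last = None
    while step >= min_step:
        felt = respond(level)
        if last is not None and felt != last:
            step /= 2.0                       # reversal: refine the step
        level += -step if felt else step      # move toward the threshold
        last = felt
    return level                              # estimated sensitivity threshold
```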
System Supporting Speech Perception in Special Educational Needs Schoolchildren

A system supporting speech perception during classes is presented in this paper. The system combines a portable device that enables real-time speech stretching with a workstation designed to perform hearing tests. The system was designed to help children suffering from Central Auditory Processing Disorders.

Adam Kupryjanow, Piotr Suchomski, Piotr Odya, Andrzej Czyzewski
Designing a Mobile Application to Record ABA Data

Applied Behavior Analysis (ABA) is a scientific method for modelling human behavior, successfully applied in the context of educating autistic subjects. ABA’s scientific approach relies on recording measurable data derived from the execution of structured programs. In this paper we describe an application designed to support the work of ABA tutors with autistic subjects. Specifically, we describe an Android application for gathering data from ABA sessions with a patient and sharing information among his/her ABA team. Tablets allow mobility and ease of interaction, enabling efficient data collection and processing, and automating tasks previously carried out by recording notes on paper. However, reduced screen size poses challenges for user interface design.

Silvia Artoni, Maria Claudia Buzzi, Marina Buzzi, Claudia Fenili, Barbara Leporini, Simona Mencarini, Caterina Senette

Assistive Technology, HCI and Rehabilitation

Creating Personas with Disabilities

Personas can help raise awareness among stakeholders about users’ needs. While personas are made-up people, they are based on facts gathered from user research. Personas can also be used to raise awareness of universal design and the accessibility needs of people with disabilities. We review the current state of the art of personas and some research and industry projects that use them. We outline techniques that can be used to create personas with disabilities, including advice on how to get more information about assistive technology and how to better include people with disabilities in the persona creation process. We also describe our use of personas with disabilities in several projects and discuss how it has helped to find accessibility issues.

Trenton Schulz, Kristin Skeide Fuglerud
Eye Controlled Human Computer Interaction for Severely Motor Disabled Children
Two Clinical Case Studies

This paper presents two case studies of children with severe motor disabilities. After years of no effective feedback from them, an interdisciplinary approach was explored with the use of an eye controlled computer. A multidisciplinary team in a clinical environment included a specialist in physical and rehabilitation medicine, an occupational therapist, a speech therapist and an engineer. Several applications were tested to establish feedback from the users, using the only movement they were capable of: eye movement. Results have shown significant improvement in interaction and communication for both users. Some differences were present, possibly due to the age difference. Preparation of content for augmentative and alternative communication is in progress for both users. We realized that awareness of the existing advanced assistive technology (AT) is crucial for a more independent and higher-quality life, from parents and caregivers to all AT professionals working in clinical environments.

Mojca Debeljak, Julija Ocepek, Anton Zupan
Gravity Controls for Windows

This paper presents the concept and a prototype of “Gravity Controls”. “Gravity Controls” makes standard Graphical User Interface (GUI) controls “magnetic” to help overcome the impact of motor problems such as tremor, which leads to unsmooth movements and unstable positioning of the cursor (e.g. for clicking or mouse-over events). “Gravity Controls” complements and enhances standard or Assistive Technology (AT) based interaction with the GUI by supporting the process of reaching a control and better keeping the position for interaction.

Peter Heumader, Klaus Miesenberger, Gerhard Nussbaum
Addressing Accessibility Challenges of People with Motor Disabilities by Means of AsTeRICS: A Step by Step Definition of Technical Requirements

The need for Assistive Technologies in Europe is leading to the development of projects whose aim is to research and develop technical solutions for people with long-term motor disabilities. The Assistive Technology Rapid Integration & Construction Set (AsTeRICS) project, funded by the 7th Framework Programme of the EU (Grant Agreement 247730), aims to develop an integrated system supporting multiple devices to help people with upper limb impairments. To this end, AsTeRICS follows User Centred Design methods to gather user requirements and develop solutions in an iterative way. This paper reports on requirements prioritization procedures, described in order to illustrate the transformation of user requirements into technical requirements for system development.

Alvaro García-Soler, Unai Diaz-Orueta, Roland Ossmann, Gerhard Nussbaum, Christoph Veigl, Chris Weiss, Karol Pecyna
Indoor and Outdoor Mobility for an Intelligent Autonomous Wheelchair

A smart wheelchair was developed to provide users with increased independence and flexibility in their lives. The wheelchair can be operated in a fully autonomous mode or a hybrid brain-controlled mode while the continuously running autonomous mode may override the user-generated motion command to avoid potential dangers. The wheelchair’s indoor mobility has been demonstrated by operating it in a dynamically occupied hallway, where the smart wheelchair intelligently interacted with pedestrians. An extended operation of the wheelchair for outdoor environments was also explored. Terrain recognition based on visual image processes and multi-layer neural learning network was demonstrated. A mounted Laser Range Finder (LRF) was used to determine terrain drop-offs and steps and to detect stationary and moving obstacles for autonomous path planning. Real-time imaging of the outdoor scenes using the oscillating LRF was attempted; however, the overhead in generating a three-dimensional point cloud exceeded the onboard computer capability.

C. T. Lin, Craig Euler, Po-Jen Wang, Ara Mekhtarian
Comparing the Accuracy of a P300 Speller for People with Major Physical Disability

A Brain-Computer Interface (BCI) can provide an additional option for a person to express himself/herself if he/she suffers from a disorder like amyotrophic lateral sclerosis (ALS), brainstem stroke, brain or spinal cord injury, or other diseases affecting the motor pathway. For a P300-based BCI, a matrix of randomly flashing characters is presented to the participant. To spell a character, the person has to attend to it and count how many times the character flashes. The aim of this study was to compare the performance achieved by subjects suffering major motor impairments with that of healthy subjects. The overall accuracy of the persons with motor impairments reached 70.1%, in comparison to 91% obtained for the group of healthy subjects. Looking at single subjects, one interesting example shows that under certain circumstances, when the patient finds it difficult to concentrate on one character for a long period of time, reducing the number of flashes can increase the accuracy. Furthermore, the influence of several tuning parameters is discussed, as it turns out that for some participants adaptations are required to achieve valuable spelling results. Finally, exclusion criteria for people who are not able to use the device are defined.

Alexander Lechner, Rupert Ortner, Fabio Aloise, Robert Prückl, Francesca Schettini, Veronika Putz, Josef Scharinger, Eloy Opisso, Ursula Costa, Josep Medina, Christoph Guger
Application of Robot Suit HAL to Gait Rehabilitation of Stroke Patients: A Case Study

We have developed the Robot Suit HAL (Hybrid Assistive Limb) to actively support and enhance human motor functions. The HAL provides physical support according to the wearer’s motion intention. In this paper, we present a case study of the application of the HAL to gait rehabilitation of a stroke patient. We applied the HAL to a male patient who suffered a stroke due to cerebral infarction three years previously. The patient was given walking training with the HAL twice a week for eight weeks. We evaluated his walking speed (10 m walking test) and balance ability (using a functional balance scale) before and after the 8-week rehabilitation with the HAL. The results show an improvement in the gait and balance ability of a patient with chronic paralysis after gait training with the HAL, which is a voluntarily controlled rehabilitation device.

Kanako Yamawaki, Ryohei Ariyasu, Shigeki Kubota, Hiroaki Kawamoto, Yoshio Nakata, Kiyotaka Kamibayashi, Yoshiyuki Sankai, Kiyoshi Eguchi, Naoyuki Ochiai

Sign 2.0: ICT for Sign Language Users: Information Sharing, Interoperability, User-Centered Design and Collaboration

Sign 2.0: ICT for Sign Language Users: Information Sharing, Interoperability, User-Centered Design and Collaboration
Introduction to the Special Thematic Session

Deaf people have always been early adopters of everything ICT has to offer. Many barriers remain, however, that make it difficult for Deaf sign language users to use their preferred, and for some only accessible, language when and where they want. In this session, some of the current R&D efforts for sign language users are presented, with the objective of promoting information sharing and collaboration, so that recent threats can be dealt with productively and converted into opportunities.

Liesbeth Pyfers
Toward Developing a Very Big Sign Language Parallel Corpus

The community of researchers in the field of sign language faces a serious problem: the absence of a large parallel corpus for sign languages. The ASLG-PC12 project, conducted in our laboratory, proposes a rule-based approach for building a big parallel corpus between English written texts and American Sign Language gloss. In this paper, we present a new algorithm to transform part-of-speech-tagged English sentences into ASL gloss. The project was started at the beginning of 2011, and today it offers a corpus containing more than one hundred million pairs of sentences between English and ASL gloss. It is available online for free in order to support the development and design of new algorithms and theories for sign language processing, for instance in statistical machine translation and related fields. We present, in particular, the tasks for generating ASL sentences from the Project Gutenberg corpus, which contains only English written texts.

Achraf Othman, Zouhour Tmar, Mohamed Jemni
Czech Sign Language – Czech Dictionary and Thesaurus On-Line

The paper deals with a monolingual (explanatory) and bilingual dictionary of the spoken and sign language, which in each of the languages provides grammatical, stylistic and semantic characteristics, contextual quotes, information about hyponyms and hypernyms, transcription, and audio/video recording (front-facing and sideview captures). The dictionary also serves as a basic didactic aid for teaching deaf and hearing users and their specialized work with academic texts at Masaryk University (MU) and for this reason it also, besides the basic vocabulary, includes specialized terminology of study programs provided at MU in the Czech sign language. Another aim of this dictionary is to build a centralized on-line dictionary of newly created terminology and the existing vocabulary.

Jan Fikejs, Tomáš Sklenák
The Dicta-Sign Wiki: Enabling Web Communication for the Deaf

The paper provides a report on the user-centred showcase prototypes of the DICTA-SIGN project (http://www.dictasign.eu/), an FP7-ICT project which ended in January 2012. DICTA-SIGN researched ways to enable communication between Deaf individuals through the development of human-computer interfaces (HCI) for Deaf users, by means of Sign Language. Emphasis is placed on the Sign-Wiki prototype that demonstrates the potential of sign languages to participate in contemporary Web 2.0 applications where user contributions are editable by an entire community and sign language users can benefit from collaborative editing facilities.

Eleni Efthimiou, Stavroula-Evita Fotinea, Thomas Hanke, John Glauert, Richard Bowden, Annelies Braffort, Christophe Collet, Petros Maragos, François Lefebvre-Albaret
Sign Language Multimedia Based Interaction for Aurally Handicapped People

People with hearing disabilities still do not have satisfactory access to Internet services. Since sign language is the mother tongue of deaf people, and 80% of this social group cannot successfully understand written content, different ways of using sign language to deliver information via the Internet should be considered. In this paper, we provide a technical overview of solutions to this problem that we have designed and tested in recent years, along with evaluation results and users’ experience reports. The solutions discussed prioritize sign language on the Internet for the deaf and hard of hearing, using a multimodal approach to delivering information that includes video, audio and captions.

Matjaž Debevc, Ines Kožuh, Primož Kosec, Milan Rotovnik, Andreas Holzinger
Meeting Support System for the Person with Hearing Impairment Using Tablet Devices and Speech Recognition

In this paper, we propose a support system for a hearing-impaired person who attends a small meeting in which the other members are hearing people. In such a case, it is difficult for him/her to follow the discussion. To solve this problem, the system is designed to show what members are saying in real time. The system consists of tablet devices and a PC acting as a server. The PC runs speech recognition software and distributes the recognized results to the tablets. The main feature of this system is a method to correct initial speech recognition results that are not perfectly recognized. Corrections are handwritten on the tablet devices by the meeting members themselves, not by support staff. Every meeting member can correct every recognized result at any time. By this means, the system has the potential to be a low-cost hearing aid, because it does not require extra support staff.

Makoto Kobayashi, Hiroki Minagawa, Tomoyuki Nishioka, Shigeki Miyoshi
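
A minimal sketch of the server role described above: the PC pushes each recognized utterance to all connected tablet clients. The socket protocol, port and names are assumptions for illustration; the real system also distributes the handwritten corrections back to all participants.

```python
# PC-side broadcast of speech recognition results to tablet clients.
import socket
import threading

clients = []

def accept_clients(server):
    """Keep accepting tablet connections in the background."""
    while True:
        conn, _ = server.accept()
        clients.append(conn)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5000))   # assumed port
server.listen()
threading.Thread(target=accept_clients, args=(server,), daemon=True).start()

def broadcast(recognized_text):
    """Send one recognized utterance to every connected tablet."""
    for conn in list(clients):
        try:
            conn.sendall((recognized_text + "\n").encode("utf-8"))
        except OSError:
            clients.remove(conn)  # drop tablets that disconnected
```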
Dubbing of Videos for Deaf People – A Sign Language Approach

Deaf people have their own language and use sign language to communicate. Movies are dubbed into many different languages so that almost everyone is able to understand them, but sign language is always missing. This project makes a first step towards closing the gap by developing a “how to produce sign language based synchronization” guide for movies and a video player that plays and shows two different movies at once. Methodical steps include modelling of the sign language movie; conversion between spoken language, noise, music and sign language; development of a video player; a system architecture for the distribution of the sign language movie; and qualitative and quantitative examination of the approaches with an expert group.

Franz Niederl, Petra Bußwald, Georg Tschare, Jürgen Hackl, Josef Philipp
Towards a 3D Signing Avatar from SignWriting Notation

Many transcription systems, like SignWriting, have been suggested in recent decades to describe sign language in a written form. However, these systems have some limitations: they are not easily understood and adopted by members of the deaf community, who usually use video and avatar-based systems to access information. In this context, we present in this paper a new tool for automatically generating 3D animation sequences from SW notation. The SW notation is provided as input in an XML-based format called SWML (SignWriting Markup Language). This tool aims to improve the reading and writing capabilities of deaf persons who have no special training in reading and writing SL.

Yosra Bouzid, Maher Jbali, Oussama El Ghoul, Mohamed Jemni
Sign Language Computer-Aided Education: Exploiting GSL Resources and Technologies for Web Deaf Communication

The paper discusses the potential of exploiting monolingual or multilingual sign language (SL) resources in combination with recently developed Web technologies in order to answer the need for the creation of SL educational content. The reported use case comprises tools and methodologies for creating educational content for the teaching of Greek Sign Language (GSL) by exploiting resources originally created and annotated to support sign recognition and sign synthesis technologies in the framework of the FP7 DICTA-SIGN project, along with a Wiki-like environment that makes possible the creation, modification and presentation of SL content.

Stavroula-Evita Fotinea, Eleni Efthimiou, Athanasia-Lida Dimou
SignMedia
Interactive English Learning Resource for Deaf Sign Language Users Working in the Media Industry

An increasing number of deaf graduates and professionals enter media-related careers. In the media industry it is common practice to communicate in written English. Since English discourse can prove a barrier to sign language users, the interactive learning resource SignMedia teaches written English through national sign languages. Learners immerse themselves in a virtual media environment where they perform tasks taken from various stages of the production process of a TV series to reinforce their English skills at intermediate level. By offering an accessible English for Specific Purposes (ESP) course for the media industry, the SignMedia learning tool supports the career progression of deaf media professionals.

Luzia Gansinger
SignAssess – Online Sign Language Training Assignments via the Browser, Desktop and Mobile

SignAssess is a web-based e-learning resource for online sign language training assignments, simultaneously accessible to desktop and mobile applications. SignAssess was developed to meet the sign language training industry’s need for a standards-based e-learning online video assignment solution that is compatible with Course Management Systems and does not rely on local user media recording or storage resources; instead, it includes browser-based media recording and remote storage of content streamed to users on demand.

Christopher John

Computer-Assisted Augmentative and Alternative Communication (CA-AAC)

Towards General Cross-Platform CCF Based Multi-modal Language Support

The AEGIS project aims to contribute a framework for, and building blocks for, an infrastructure for “open accessibility everywhere”. One of many objectives has been to research, prototype and test freely available software services for inclusive graphical symbol support as part of mainstream ICT environments. Based on the Concept Coding Framework (CCF) technology, a “CCF-SymbolServer” has been developed. It can be installed locally on any of the major desktop platforms (GNU/Linux, MacOS X and Windows) to provide its multilingual and multi-modal representation services, or online to support many kinds of web services and networked mobile systems. The three current AEGIS applications will be presented: 1) CCF-SymbolWriter, an extension for symbol support in LibreOffice/OpenOffice Writer, 2) the new CCF supported version of Special Access to Windows (SAW6), 3) CCF-SymbolDroid, an AAC app for Android mobile devices. User evaluations and future perspectives will be discussed.

Mats Lundälv, Sandra Derbring
Developing an Augmentative Mobile Communication System

The widespread use of smartphones and the inclusion of new technologies such as Near Field Communication (NFC) in mobile devices offer a chance to turn classic Augmentative and Alternative Communication (AAC) boards into high-tech AAC systems at lower cost. This paper presents the development of an augmentative communication system based on Android mobile devices with NFC technology, named BOARD (Book Of Activities Regardless of Disabilities), which not only enables direct communication with voice synthesis and through SMS, but also expands the functionality of AAC systems by allowing control of the smartphone and home appliances, all in a simple way, just by bringing the phone next to a pictogram.

Juan Bautista Montalvá Colomer, María Fernanda Cabrera-Umpiérrez, Silvia de los Ríos Pérez, Miguel Páramo del Castrillo, María Teresa Arredondo Waldmeyer
The Korean Web-Based AAC Board Making System

The purpose of this study was to develop a Korean web-based customized AAC board making system that is easily accessible and compatible across devices in Korean cultural/linguistic contexts. Potential users of this system are individuals with communication disorders and their parents/teachers. Board-making users can create customized symbol boards using either built-in or customized symbols. AAC users can access their own AAC pages, generated by a personalized AAC page application, from any device as long as they can access a web browser. We expect this system to enable Korean AAC users to generate customized AAC boards on the web and to use the boards in meaningful environments to meet their unique communication needs.

Saerom Choi, Heeyeon Lee, Ki-Hyung Hong
SymbolChat: Picture-Based Communication Platform for Users with Intellectual Disabilities

We introduce a multimodal picture-based communication platform for users with intellectual disabilities, and results from our user evaluation carried out with representatives of the target user group and their assistants. Our current prototype is based on touchscreen input and symbol and text-to-speech output, but also supports mouse and keyboard interaction. The prototype was evaluated in a field study with the help of nine users with varying degrees of intellectual and motor disabilities. Based on our findings, the picture-based approach and our application, SymbolChat, show great potential in providing a tool for users with intellectual disabilities to communicate with other people over the Internet, even without prior knowledge of symbols. The findings highlighted a number of potential improvements to the system, including providing even more input methods for users with physical disabilities, and functionality to support the development of younger users who are still learning vocabulary and developing their abilities.

Tuuli Keskinen, Tomi Heimonen, Markku Turunen, Juha-Pekka Rajaniemi, Sami Kauppinen
Developing AAC Message Generating Training System Based on Core Vocabulary Approach

An alphabet-based message generating method is one of the essential features of an augmentative and alternative communication (AAC) system, although it is not the most efficient method. For Mandarin Chinese AAC users, Chinese text entry is very important if they are expected to say what they want to say. However, a user is required to assemble specific keys to generate a Chinese character, and must learn a specific text entry method before being able to generate any character. This study aims to develop a web-based training system, comprising the core Chinese characters, to assist individuals with disabilities in learning to generate Mandarin Chinese messages more efficiently. The study conducted a usability evaluation to examine the system. In addition, children with learning disabilities and mental retardation were recruited to test the training system.

Ming-Chung Chen, Cheng-Chien Chen, Chien-Chuan Ko, Hwa-Pey Wang, Shao-Wun Chen
New Features in the VoxAid Communication Aid for Speech Impaired People

For speech-impaired persons, even daily communication may cause problems: in many common situations where speech ability would be necessary, they are not able to manage. An application that uses Text-To-Speech (TTS) conversion is usable not only in the daily routine, but also in the treatment of speech-impaired persons as a therapeutic application. The VoxAid framework from BME-TMIT gives solutions for these scenarios. This paper introduces the latest improvements of the VoxAid framework, including user tests and evaluation.

Bálint Tóth, Péter Nagy, Géza Németh
AAC Vocabulary Standardisation and Harmonisation
The CCF and BCI Experiences

The Concept Coding Framework (CCF) effort, started in the European WWAAC project and now continued in the European AEGIS project, as well as the current vocabulary efforts within BCI (Blissymbolics Communication International), highlight that issues of AAC vocabulary content, management and interoperability are central. This paper outlines some stages of this work so far, including the important role of the Authorised Blissymbol Vocabulary (BCI-AV) and its relation to resources like the Princeton WordNet lexical database and the ARASAAC symbol library. The work initiated to link Blissymbolics and other AAC symbol vocabularies, as well as the CCF concept ontologies, to the ISO Concept Database (ISO/CDB) and the work of ISO Technical Committee 37 (ISO TC 37), will be discussed. In this context the long-term ambition to establish an ISO standardised Unicode font for Blissymbolics will also be brought to the fore. We’ll stress the importance of clarified and, when possible, harmonised licensing conditions.

Mats Lundälv, Sandra Derbring
Speaking and Understanding Morse Language, Speech Technology and Autism

The language nature of Morse code is discussed, showing similarities to and differences from spoken language. The radio amateur club working at the Laboratory of Speech Technology for Rehabilitation was used to educate and investigate the behavior of a stuttering autistic boy. The perception accuracy of Morse codes (phonemes) was measured while changing the speed of the phonemes. A hypothesis is described that the language elements have to be fixed at different speeds for quick recognition. Experiments with a non-speaking autistic girl using a tablet PC are also described.

András Arató, Norbert Markus, Zoltan Juhasz
Reverse-Engineering Scanning Keyboards

Scanning or soft keyboards are alternatives to physical computer keyboards that allow users with motor disabilities to compose text and control the computer using a small number of input actions. In this paper, we present the reverse Huffman algorithm (RHA), a novel information-theoretic method that extracts a representative latent probability distribution from a given scanning keyboard design. By calculating the Jensen-Shannon divergence (JSD) between the extracted probability distribution and the probability distribution that represents the body of text that will be composed with the scanning keyboard, the efficiency of the design can be predicted and designs can be compared with each other. Thus, RHA provides a novel a priori context-aware method for reverse-engineering scanning keyboards.

Foad Hamidi, Melanie Baljko
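
A small worked example of the JSD comparison the abstract describes, using two hypothetical four-symbol distributions: a lower divergence between a keyboard's extracted latent distribution and the target text's symbol distribution predicts a more efficient design. The distributions below are assumptions for illustration, not data from the paper.

```python
# Jensen-Shannon divergence (base-2, so the result lies in [0, 1]).
import numpy as np

def jsd(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()       # normalize to probabilities
    m = 0.5 * (p + q)
    def kl(a, b):                          # Kullback-Leibler divergence
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

keyboard_dist = [0.40, 0.30, 0.20, 0.10]   # as if extracted by RHA (hypothetical)
text_dist     = [0.35, 0.35, 0.20, 0.10]   # symbol frequencies of target text
print(jsd(keyboard_dist, text_dist))        # smaller = better-suited design
```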
A Communication System on Smart Phones and Tablets for Non-verbal Children with Autism

We designed, developed and evaluated an Augmentative and Alternative Communication (AAC) system, AutVisComm, for children with autism that can run on smart phones and tablets. An iterative design and development process was followed, where the prototypes were developed in close collaboration with the user group, and the usability testing was gradually expanded to larger groups. In the last evaluation stage described here, twenty-four children with autism used AutVisComm to learn to request a desired object. We measured their learning rates and correlated them with their behavior traits (as observed by their teachers) like joint attention, symbolic processing and imitation. We found that their ability for symbolic processing did not correlate with the learning rate, but their ability for joint attention did. This suggests that this system (and this class of AACs) helps to compensate for a lack of symbolic processing, but not for a lack of joint-attention mechanisms.

Harini Sampath, Bipin Indurkhya, Jayanthi Sivaswamy
Assessment of Biosignals for Managing a Virtual Keyboard

In this paper we propose an assessment of biosignals for handling an application based on a virtual keyboard and automatic scanning. The aim of this work is to measure the effect of using such an application, through different interfaces based on electromyography and electrooculography, on cardiac and electrodermal activities. Five people without disabilities were tested. Each subject wrote the same text twice, using an electromyography interface in the first test and electrooculography in the second. Each test was divided into four parts: instruction, initial relaxation, writing and final relaxation. The results of the tests show important differences in the electrocardiogram and electrodermal activity among the parts of the tests.

Manuel Merino, Isabel Gómez, Alberto J. Molina, Kevin Guzman
Applying the Principles of Experience-Dependent Neural Plasticity: Building up Language Abilities with ELA®-Computerized Language Modules

In this paper, a computerized language therapy program that aims at supplying the required dose of practice for persons with aphasia (PWAs) will be presented, namely the ELA®-Language Modules. The rationale and underlying principles for each linguistic level and the linguistic structure of the language tasks for the word, sentence and text level and for dialogues will be explained, as well as how the components of the ELA®-Language Modules adhere to the principles of experience-dependent neural plasticity. First pilot applications of the ELA®-Language Modules with PWAs are discussed in terms of the principles of experience-dependent neural plasticity and usability.

Jacqueline Stark, Christiane Pons, Ronald Bruckner, Beate Fessl, Rebecca Janker, Verena Leitner, Karin Mittermann, Michaela Rausch
Assistive Technology: Writing Tool to Support Students with Learning Disabilities

Previous studies show that assistive technology has a significant impact on helping students with disabilities achieve their academic goals. Assistive technology comprises hardware, devices and software that help students with disabilities by giving them the same access to perform certain tasks that would otherwise have been challenging. Selecting an appropriate AT tool for a student requires that parents, educators, and other professionals take a comprehensive view, carefully analyzing the interaction between the student, the technology, the tasks to be performed, and the settings where it will be used. Therefore, this study was conducted in order to confirm the effective use of assistive technology, namely Thai Word Search. The results reflected an improvement in student achievement and suggest that assistive technology made a substantial contribution for the students in this study when used to support writing.

Onintra Poobrasert, Alongkorn Wongteeratana
Communication Access for a Student with Multiple Disabilities: An Interdisciplinary Collaborative Approach

This case study highlights the challenges and outcomes of implementing assistive technology for a 17-year-old school student with a profound hearing loss and significant physical disabilities. It demonstrates the importance of a collaborative team approach and the benefits for the student of using assistive technology with regard to the development of self-determination and social relationships. This article is of benefit to inter-professional teams working in special education, particularly with students with multiple disabilities.

Frances Layman, Cathryn Crowle, John Ravenscroft

Easy to Web between Science of Education, Information Design and (Speech) Technology

Multimedia Advocacy
A New Way of Self Expression and Communication for People with Intellectual Disabilities

The paper describes the early stages of one strand of an international project entitled Web 2.0 for People with Intellectual Disabilities (W2ID). The project team reports on a project pilot involving five countries and 400 learners with Intellectual Disabilities (ID) (aged 13 to adulthood), together with their teachers and supporters, developing rich media web content using the Multimedia Self Advocacy approach and the specially designed ‘Klik in’ platform.

The ‘Klik in’ Web 2.0 platform was designed to enable people with ID to express their views and preferences using pictures, videos, sounds and text and to share these with their peers and supporters. Easy-to-use learning materials and a standardised pedagogic approach were also developed to assist learners and supporters throughout the project. The project is being monitored and evaluated using mostly quantitative instruments, although some qualitative data is also being collected and will inform the final findings. The early results indicate that learners with ID are motivated to work with rich media content and the Web 2.0 ‘Klik in’ platform and are able to exercise their right to self advocacy.

Gosia Kwiatkowska, Thomas Tröbinger, Karl Bäck, Peter Williams
How Long Is a Short Sentence? – A Linguistic Approach to Definition and Validation of Rules for Easy-to-Read Material

This paper presents a new approach to the empirical validation and verification of guidelines for easy-to-read material. The goal of our approach is twofold. On the one hand, the linguistic analysis investigates whether the well-known rules are really applied consistently throughout published easy-to-read material. The findings from this study can help define new rules and refine existing ones.

On the other hand, we show how the software developed for the linguistic analysis can also be used as a tool to support authors in the production of easy-to-read material. The tool applies the rules to a new text and highlights any passages that do not meet them, so that the author can go back and improve the text.

Annika Nietzio, Birgit Scheer, Christian Bühler
CAPKOM – Innovative Graphical User Interface Supporting People with Cognitive Disabilities

Most research activities on web accessibility focus on people with physical or sensory disabilities, while potential users with cognitive disabilities still lack adequate solutions to overcome the barriers resulting from their disability. The innovative graphical user interface to be developed within the CAPKOM project intends to change this. In a novel approach, this user interface shall be instantly adaptable to the very different demands of people with cognitive disabilities. Iterative user tests will feed results into practical software development, first exemplified by a community art portal for people with cognitive disabilities.

Andrea Petz, Nicoleta Radu, Markus Lassnig

Smart and Assistive Environments: Ambient Assisted Living (AAL)

A Real-Time Sound Recognition System in an Assisted Environment

This article focuses on the development of a real-time detection and classification system for environmental sounds in a typical home for persons with disabilities. Based on the extraction of acoustic features (Mel Frequency Cepstral Coefficients, Zero Crossing Rate, Roll Off Point and Spectral Centroid) and using a probabilistic classifier (Gaussian Mixture Model), preliminary results show an accuracy rate greater than 93% in the detection task and 98% in the classification task.
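
A minimal sketch of this pipeline, assuming librosa for feature extraction and scikit-learn for the Gaussian Mixture Model; these libraries are stand-ins chosen for illustration, not the authors' implementation:

    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def extract_features(path, sr=16000):
        # The acoustic features named in the abstract, stacked per frame.
        y, _ = librosa.load(path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # MFCC
        zcr = librosa.feature.zero_crossing_rate(y)               # Zero Crossing Rate
        rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)    # Roll Off Point
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # Spectral Centroid
        return np.vstack([mfcc, zcr, rolloff, centroid]).T        # frames x features

    def train(class_clips, n_components=8):
        # One GMM per sound class, fit on all frames of that class's clips.
        return {label: GaussianMixture(n_components).fit(
                    np.vstack([extract_features(p) for p in paths]))
                for label, paths in class_clips.items()}

    def classify(models, path):
        # Assign a clip to the class whose GMM gives the highest log-likelihood.
        X = extract_features(path)
        return max(models, key=lambda label: models[label].score(X))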

Héctor Lozano, Inmaculada Hernáez, Javier Camarena, Ibai Díez, Eva Navas
Gestures Used by Intelligent Wheelchair Users

This paper is concerned with the modality of gestures in communication between an intelligent wheelchair and a human user. Gestures can enable and facilitate human-robot interaction (HRI) and go beyond familiar pointing gestures, considering also context-related, subtle, implicit gestural and vocal instructions that can enable a service. Some findings of a user study related to gestures are presented in this paper; the study took place at the Bremen Ambient Assisted Living Lab, a 60 m² apartment suitable for the elderly and people with physical or cognitive impairments.

Dimitra Anastasiou, Christoph Stahl
Augmented Reality Based Environment Design Support System for Home Renovation

To improve the living environment of elderly persons, home renovation is performed. In Japan, part of the cost of home renovation is covered by long-term care insurance, yet dozens of problems related to renovation construction are reported, caused by a lack of communication and of construction knowledge. We have developed an Augmented Reality environment design support system for home modifications, designed especially for persons who need long-term care. A preliminary experiment has been carried out and confirmed the functionality of the system.

Yoshiyuki Takahashi, Hiroko Mizumura
Fall Detection on Embedded Platform Using Kinect and Wireless Accelerometer

In this paper we demonstrate how to accomplish reliable fall detection on a low-cost embedded platform. The detection is achieved by a fuzzy inference system using a Kinect and a wearable motion-sensing device that consists of an accelerometer and a gyroscope. The foreground objects are detected using depth images obtained by the Kinect, which can extract such images even in a room that is too dark for the human eye. The system has been implemented on the PandaBoard ES and runs in real time. It permits unobtrusive fall detection and preserves the privacy of the user. The experimental results indicate high effectiveness of fall detection.
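
As a rough illustration of how the two sensing modalities can be fused, the sketch below uses a crisp rule in place of the paper's fuzzy inference system; the thresholds and function names are illustrative assumptions:

    import numpy as np

    G = 9.81  # gravity, m/s^2

    def impact_detected(acc_xyz, threshold=2.5 * G):
        # The wearable device flags a potential fall when the acceleration
        # magnitude exceeds an impact threshold (assumed value).
        return np.linalg.norm(acc_xyz) > threshold

    def near_floor(person_points, max_height=0.4):
        # Depth-based confirmation: the person segmented from the Kinect
        # depth image lies close to the floor plane (heights in metres).
        return person_points[:, 2].max() < max_height

    def fall_alarm(acc_xyz, person_points):
        # Both cues must agree before an alarm is raised.
        return impact_detected(acc_xyz) and near_floor(person_points)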

Michal Kepski, Bogdan Kwolek
Controlled Natural Language Sentence Building as a Model for Designing User Interfaces for Rule Editing in Assisted Living Systems – A User Study

As part of the web-based services developed within the WebDA project, the Action Planner was implemented to allow caregivers of people with dementia to support them in accomplishing activities of daily living and, among other things, to counteract restlessness. In order to define rules that include a description of situations indicating, e.g., restlessness, as well as an action that should be undertaken in such situations, a user interface was designed enabling caregivers to express these rules in a controlled natural language setting. Here, rule expressions were offered as preformulated natural sentences that could be manipulated by changing (pre)selected notions such as “daily” in pop-up menus embedded in the sentences. A user study was conducted with 24 test participants (12 < 65 years; 12 > 65 years), showing that this approach is perceived as intuitive and highly usable, also by test participants beyond 65 years of age.

Henrike Gappa, Gaby Nordbrock, Yehya Mohamad, Jaroslav Pullmann, Carlos A. Velasco
MonAMI Platform in Elderly Household Environment
Architecture, Installation, Implementation, Trials and Results

This paper describes how the ambient technology platform MonAMI and related ICT services were introduced in Slovakia. MonAMI is a European project focusing on ambient assisted living based on software, human-machine interfaces and hardware. The main aim was to increase autonomy, enhance ICT monitoring services for carers, and support the safety of vulnerable people living alone. A broader description of the architecture, devices, and the process of installation and implementation follows.

Dušan Šimšík, Alena Galajdová, Daniel Siman, Juraj Bujňák, Marianna Andrášová, Marek Novák

Text Entry for Accessible Computing

Modeling Text Input for Single-Switch Scanning

A method and algorithm for modeling single-switch scanning for text input is presented. The algorithm uses the layout of a scanning keyboard and a corpus in the form of a word-frequency list to generate codes representing the scan steps for entering words. Scan steps per character (SPC) is computed as a weighted average over the entire corpus. SPC is an absolute measure, thus facilitating comparisons of keyboards. It is revealed that SPC is sensitive to the corpus if a keyboard includes word prediction. A recommendation for other research using SPC is to disclose both the algorithm and the corpus.
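
The weighted average in the abstract can be written directly in code. The sketch below assumes a hypothetical scan_steps(word) helper standing in for the paper's keyboard-specific scan-code generator:

    def spc(corpus, scan_steps):
        # corpus: iterable of (word, frequency) pairs from a word-frequency list.
        # scan_steps(word): hypothetical helper returning the total scan steps
        # the keyboard needs to enter the word.
        total_steps = sum(freq * scan_steps(word) for word, freq in corpus)
        total_chars = sum(freq * len(word) for word, freq in corpus)
        return total_steps / total_chars  # scan steps per character

    # Toy example with a dummy scan-step function:
    corpus = [("the", 100), ("hello", 5)]
    print(spc(corpus, scan_steps=lambda w: 4 * len(w)))  # -> 4.0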

I. Scott MacKenzie
DualScribe: A Keyboard Replacement for Those with Friedreich’s Ataxia and Related Diseases

An alternative text composition method is introduced, comprising a small special-purpose keyboard as an input device and software to make text entry fast and easy. The work was inspired by an FA (Friedreich’s Ataxia) patient who asked us to develop a viable computer interaction solution – taking into account the specific symptoms induced by his disease. The outcome makes text entry easier than with the standard keyboard without being slower. It is likely that the system has general use for anyone with a similar condition, and also for able-bodied users looking for a small-size keyboard. We present a usability study with four participants showing the method’s effectiveness.

Torsten Felzer, I. Scott MacKenzie, Stephan Rinderknecht
Easier Mobile Phone Input Using the JusFone Keyboard

We present an alternative mobile phone keyboard for inputting text, the JusFone Keyboard. This keyboard allows people to enter characters by resting their finger on the desired key and rocking it to select a specific character. We ran user tests of the keyboard with 12 seniors, comparing it against a touchscreen keyboard, a phone with large buttons, and an on-screen PC keyboard. The users found several things to like about the JusFone Keyboard, including the comfort and size of the keys and having direct access to characters. Users also had several suggestions for improving the keyboard, such as making the text on the keys bigger and adjusting the spacing between keys. We also conducted a diary study of a user with reduced hand function who used the JusFone Keyboard on his PC. The results indicate that the keyboard may be of assistance to persons with reduced hand function.

Oystein Dale, Trenton Schulz
Automatic Assessment of Dysarthric Speech Intelligibility Based on Selected Phonetic Quality Features

This paper addresses the problem of assessing the speech intelligibility of patients with dysarthria, which is a motor speech disorder. Dysarthric speech exhibits spectral distortion caused by poor articulation. To characterize the distorted spectral information, several features related to phonetic quality are extracted. Then, we find the best feature set, one which not only produces a small prediction error but also keeps the features' mutual dependency low. Finally, the selected features are linearly combined using a multiple regression model. Evaluation of the proposed method on a database of 94 patients with dysarthria demonstrates its effectiveness in predicting subjectively rated scores.
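
The final step, linearly combining the selected features, is ordinary multiple regression; in generic notation (ours, not the paper's), with f_k the K selected phonetic-quality features, beta coefficients fit to the subjectively rated scores, and s-hat the predicted intelligibility score:

    \hat{s} = \beta_0 + \sum_{k=1}^{K} \beta_k f_k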

Myung Jong Kim, Hoirin Kim
Adaptation of AAC to the Context Communication: A Real Improvement for the User Illustration through the VITIPI Word Completion

This paper describes the performance of the VITIPI word completion system through a text input simulation. The aim of this simulation is to estimate the impact of the linguistic knowledge base size through two metrics: the Key-Stroke Ratio (KSR) and the KeyStrokes Per Character (KPC). Our study shows that the performance of a word completion system depends on the percentage of words not available in the lexicon and on the size of the lexicon.
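
For reference, the two metrics are commonly defined along the following lines, with n_k the number of keystrokes actually made and n_c the number of characters produced (the paper may use slight variants of these definitions):

    \mathrm{KSR} = 1 - \frac{n_k}{n_c}, \qquad \mathrm{KPC} = \frac{n_k}{n_c}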

Philippe Boissière, Nadine Vigouroux, Mustapha Mojahid, Frédéric Vella
Tackling the Acceptability of Freely Optimized Keyboard Layout

Reorganization of a keyboard layout based on linguistic characteristics would be an efficient way to improve text input speed. However, a new character layout imposes a learning period that often discourages users. The Quasi-QWERTY Keyboard aimed at easing acceptance of a new layout by limiting the changes, but this strategy compromises long-term performance. Instead, we propose a solution based on the multilayer interface paradigm. The Multilayer Keyboard enables users to progressively converge toward a freely optimized layout, transforming the learning period into a transition period. During this transition period, the user's performance never regresses and progressively improves.

Bruno Merlin, Mathieu Raynal, Heleno Fülber
Measuring Performance of a Predictive Keyboard Operated by Humming

A number of text entry methods use predictive completion based on a letter-level n-gram model. In this paper, we investigate the optimal length of the n-grams stored in such a model for a predictive keyboard operated by humming. In order to find the length, we analyze six different corpora, from which a model is built by counting the number of primitive operations needed to enter a text. Based on these operations, we provide a formula for estimating the words per minute (WPM) rate. The model and the analysis results are verified in an experiment with three experienced users of the keyboard.
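
A minimal sketch of the kind of letter-level n-gram predictor such keyboards build on (illustrative only; the paper's operation-counting model and WPM formula are not reproduced here):

    from collections import Counter, defaultdict

    def train_ngrams(text, n=3):
        # Count which letter follows each (n-1)-letter context.
        model = defaultdict(Counter)
        for i in range(len(text) - n + 1):
            context, nxt = text[i:i + n - 1], text[i + n - 1]
            model[context][nxt] += 1
        return model

    def predict(model, context, k=3):
        # Return the k most likely next letters for the given context.
        return [c for c, _ in model[context].most_common(k)]

    model = train_ngrams("the quick brown fox jumps over the lazy dog", n=3)
    print(predict(model, "th"))  # -> ['e']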

Ondřej Poláček, Adam J. Sporka, Zdeněk Míkovec
Dysarthric Speech Recognition Error Correction Using Weighted Finite State Transducers Based on Context–Dependent Pronunciation Variation

In this paper, we propose a dysarthric speech recognition error correction method based on weighted finite state transducers (WFSTs). First, the proposed method constructs a context-dependent (CD) confusion matrix by aligning a recognized word sequence with the corresponding reference sequence at the phoneme level. However, because the dysarthric speech database is too small to reflect all combinations of context-dependent phonemes, the CD confusion matrix can be underestimated. To mitigate this underestimation problem, the CD confusion matrix is interpolated with a context-independent (CI) confusion matrix. Finally, WFSTs based on the interpolated CD confusion matrix are built and integrated with dictionary and language model transducers in order to correct speech recognition errors. The effectiveness of the proposed method is demonstrated by performing speech recognition using the proposed error correction method incorporating the CD confusion matrix. The speech recognition experiment shows that the average word error rate (WER) of a speech recognition system employing the proposed error correction method with the CD confusion matrix is relatively reduced by 13.68% and 5.93%, compared to the baseline speech recognition system and the error correction method with the CI confusion matrix, respectively.
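
The CD/CI interpolation can be written generically as a convex combination; the notation and the simple linear weighting are our assumption, not necessarily the paper's exact scheme. With x the reference phoneme, y the recognized phoneme, c the phonetic context, and 0 ≤ λ ≤ 1:

    P(y \mid x, c) = \lambda \, P_{\mathrm{CD}}(y \mid x, c) + (1 - \lambda) \, P_{\mathrm{CI}}(y \mid x)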

Woo Kyeong Seong, Ji Hun Park, Hong Kook Kim
Text Entry Competency for Students with Learning Disabilities in Grade 5 to 6

This study aimed to understand the computer text entry skills of students with learning disabilities in grades 5 to 6. Thirty-five students with learning disabilities, who received special education services in a resource room at school, and 35 non-disabled students participated in our study. The “Mandarin Chinese Character Entry Training system (MCChEn system)” was used to measure the students' text entry skills, and SPSS 19.0 was used to compare the difference in text entry skills between children with and without learning disabilities. In addition, the correlations between the ability to recognize Chinese characters and text entry skills were explored. The results indicated that children with learning disabilities perform significantly more poorly than children without disabilities both in recognizing Chinese characters orally and in computer text entry. Chinese character recognition is an important factor affecting Chinese character entry skills in children with learning disabilities. The MCChEn system is able to discriminate the computer text entry skills of children with and without learning disabilities. The results of this study provide educators with important information about the text entry skills of children with learning disabilities, in order to develop further training programs.

Ting-Fang Wu, Ming-Chung Chen

Tactile Graphics and Models for Blind People and Recognition of Shapes by Touch

Vision SenS

The electronic prototype for vision developed in this paper intends to show that it is possible to build an inexpensive and functional device which partly compensates for the sense of sight in visually impaired individuals through sensory substitution, replacing some functions of the sense of sight with functions of the sense of touch. With the proposed prototype, blind users receive electrical signals at the tips of their fingers, generated from images of objects captured with a camera and processed on a laptop to extract visual information.

Berenice Machuca Bautista, José Alfredo Padilla Medina, Francisco Javier Sánchez Marín
Computer-Aided Design of Tactile Models
Taxonomy and Case Studies

Computer-aided tools offer great potential for the design and production of tactile models. While many publications focus on the design of essentially two-dimensional media like raised line drawings or the reproduction of three-dimensional objects, we intend to broaden this view by introducing a taxonomy that classifies the full range of conversion possibilities based on dimensionality. We present an overview of current methods, discuss specific advantages and difficulties, identify suitable programs and algorithms and discuss personal experiences from case studies performed in cooperation with two museums.

Andreas Reichinger, Moritz Neumüller, Florian Rist, Stefan Maierhofer, Werner Purgathofer
Three-Dimensional Model Fabricated by Layered Manufacturing for Visually Handicapped Persons to Trace Heart Shape

In this study, we fabricated three-dimensional models of the human heart by stereolithography and powder-layered manufacturing; using these models, visually handicapped persons could trace the shape of a heart by touching. Further, we assessed the level of understanding of the visually handicapped persons about the external structure of the heart and the position of blood vessels. Experimental results suggest that the heart shape models developed in this study by layered manufacturing were useful for teaching anatomy to visually handicapped persons.

Kenji Yamazawa, Yoshinori Teshima, Yasunari Watanabe, Yuji Ikegami, Mamoru Fujiyoshi, Susumu Oouchi, Takeshi Kaneko
Viable Haptic UML for Blind People

We investigate tactile representations and haptic interaction that may enable blind people to utilize UML diagrams by using an industry-standard editor. In this paper we present a new approach to presenting tactile UML diagrams that preserves spatial information on a touch-sensitive tactile display. Furthermore, we present the results of a fundamental evaluation showing that blind people retain orientation while exploring tactile diagrams, and identifying the problems associated with the use of ideographs. We compared our newly developed representation with the common method by which blind people utilize sequence diagrams: non-visually, through verbalization. We identify problems with both representations.

Claudia Loitsch, Gerhard Weber
Non-visual Presentation of Graphs Using the Novint Falcon

Several technological advances have contributed to providing non-visual access to information by individuals who have sight impairments. Screen readers and Braille displays, however, are not the means of choice for conveying pictorial data such as graphs, maps, and charts. This paper thus proposes the “Falcon Graph” interface which has been developed to enable visually impaired individuals to access computer-based visualisation techniques: mainly pie charts, bar charts, and line graphs. In addition to its interaction with Microsoft Excel, the interface uses the Novint Falcon as the main force feedback media to navigate the haptic virtual environment. Initial findings gathered from testing the interface are also presented.

Reham Alabbadi, Peter Blanchfield, Maria Petridou

Mobility for Blind and Partially Sighted People

Towards a Geographic Information System Facilitating Navigation of Visually Impaired Users

In this paper, we propose some adaptations to the Geographical Information System (GIS) components used in GPS-based navigation systems. In our design process, we adopted a user-centered design approach in collaboration with end users and Orientation and Mobility (O&M) instructors. A database scheme is presented to integrate the principal classes proposed by users and O&M instructors. In addition, some analytical tools are implemented and integrated in the GIS. This adapted GIS can improve the guidance process of existing and future EOAs. A first implementation of an adapted guidance process, allowing a better representation of the surroundings, is provided as an illustration of this adapted GIS. This work is part of the NAVIG system (Navigation Assisted by artificial VIsion and GNSS), an assistive device whose aim is to improve the quality of life of Visually Impaired (VI) persons via increased orientation and mobility capabilities.

Slim Kammoun, Marc J. -M. Macé, Bernard Oriola, Christophe Jouffrais
Combination of Map-Supported Particle Filters with Activity Recognition for Blind Navigation

By combining activity recognition with a map-supported particle filter, we were able to significantly improve the positioning of our navigation system for blind people. The activity recognition detects walking forward or backward, and ascending or descending stairs. This knowledge is combined with knowledge from the maps, i.e. the location of stairs. Different implementations of the particle filter were evaluated with regard to their ability to compensate for sensor drift.
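
A rough sketch of one filter step under this scheme; map_model.motion and map_model.consistency are hypothetical stand-ins for the activity-dependent motion model and the map check (e.g., "ascending stairs" is only plausible at stair locations):

    import random

    def particle_filter_step(particles, activity, map_model):
        # Predict: move each particle according to the detected activity.
        moved = [map_model.motion(p, activity) for p in particles]
        # Weight: particles inconsistent with the map get low weight.
        weights = [map_model.consistency(p, activity) for p in moved]
        total = sum(weights) or 1.0
        # Resample proportionally to the map-consistency weights.
        return random.choices(moved, weights=[w / total for w in weights],
                              k=len(moved))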

Bernhard Schmitz, Attila Györkös, Thomas Ertl
AccessibleMap
Web-Based City Maps for Blind and Visually Impaired

Today, cities can be discovered easily with the help of web-based maps. These maps help users discover streets, squares and districts by supporting orientation, mobility and a feeling of safety. Nevertheless, online maps still belong to those elements of the web that are hardly, or not at all, accessible to partially sighted and blind people. Therefore the main objective of the AccessibleMap project is to develop methods for designing web-based city maps in such a way that they can be better used by people with limited sight or blindness in several areas of daily life.

Klaus Höckner, Daniele Marano, Julia Neuschmid, Manfred Schrenk, Wolfgang Wasserburger
Design and User Satisfaction of Interactive Maps for Visually Impaired People

Multimodal interactive maps are a solution for presenting spatial information to visually impaired people. In this paper, we present an interactive multimodal map prototype that is based on a tactile paper map, a multi-touch screen and audio output. We first describe the different steps for designing an interactive map: drawing and printing the tactile paper map, choice of multi-touch technology, interaction technologies and the software architecture. Then we describe the method used to assess user satisfaction. We provide data showing that an interactive map – although based on a unique, elementary, double tap interaction – has been met with a high level of user satisfaction. Interestingly, satisfaction is independent of a user’s age, previous visual experience or Braille experience. This prototype will be used as a platform to design advanced interactions for spatial learning.

Anke Brock, Philippe Truillet, Bernard Oriola, Delphine Picard, Christophe Jouffrais
A Mobile Application Concept to Encourage Independent Mobility for Blind and Visually Impaired Students

This paper presents a user-centric development process for a mobile application for blind and visually impaired students. The development process brings together assistive technology experts, teachers and students from a school for the visually impaired to participate in the design of the mobile application. The data for the analysis is gathered from interviews and workshops with the target group. The main goal of the project is to examine how a mobile application can be used to encourage and motivate visually impaired students to move independently indoors and outdoors. The application allows the students to interact with their environment through sensor technology now standard on most smart and feature phones. We present the user-centric development process, report on findings from the initial user trials, and propose a framework for future phases of the project.

Jukka Liimatainen, Markku Häkkinen, Tuula Nousiainen, Marja Kankaanranta, Pekka Neittaanmäki
Do-It-Yourself Object Identification Using Augmented Reality for Visually Impaired People

In this paper, we present a Do-It-Yourself (DIY) application for helping Visually Impaired People (VIP) identify objects in their day-to-day interaction with the environment. The application uses the Layar™ Augmented Reality (AR) API to build a working prototype for identifying grocery items. The initial results of using the application show positive acceptance from the VIP community.

Atheer S. Al-Khalifa, Hend S. Al-Khalifa
An Assistive Vision System for the Blind That Helps Find Lost Things

We present a computer vision system that helps blind people find lost objects. To this end, we combine color- and SIFT-based object detection with sonification to guide the hand of the user towards potential target object locations. This way, we are able to guide the user’s attention and effectively reduce the space in the environment that needs to be explored. We verified the suitability of the proposed system in a user study.
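
As an illustration of the guidance idea, a hand-to-target offset can be mapped to stereo panning and pitch; the concrete mapping below is our assumption for the sketch, not the authors' sonification scheme:

    def sonify(hand_xy, target_xy, img_w=640, img_h=480):
        # Offsets between the detected hand and the target object,
        # in image coordinates.
        dx = target_xy[0] - hand_xy[0]
        dy = target_xy[1] - hand_xy[1]
        pan = max(-1.0, min(1.0, 2 * dx / img_w))  # -1 = pan left, +1 = pan right
        pitch_hz = 440.0 * 2 ** (-2 * dy / img_h)  # higher pitch = move hand up
        return pan, pitch_hz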

Boris Schauerte, Manel Martinez, Angela Constantinescu, Rainer Stiefelhagen
Designing a Virtual Environment to Evaluate Multimodal Sensors for Assisting the Visually Impaired

We describe how to design a virtual environment using Microsoft Robotics Developer Studio in order to evaluate multimodal sensors for assisting visually impaired people in daily tasks such as navigation and orientation. The work focuses on the design of the interfaces of sensors and stimulators in the virtual environment for future subject experimentation. We discuss which types of sensors we have simulated and define some non-classical interfaces for interacting with the environment and receiving feedback from it. We also present preliminary feasibility results from experiments with volunteer test subjects, concluding with a discussion of potential future directions.

Wai L. Khoo, Eric L. Seidel, Zhigang Zhu
A Segmentation-Based Stereovision Approach for Assisting Visually Impaired People

An accurate 3D map, automatically generated in real-time from a camera-based stereovision system, is able to assist blind or visually impaired people to obtain correct perception and recognition of the surrounding objects and environment so that they can move safely. In this paper, a segmentation-based stereovision approach is proposed to rapidly obtain accurate 3D estimations of man-made scenes, both indoor and outdoor, with largely textureless areas and sharp depth changes. The new approach takes advantage of the fact that many man-made objects in an urban environment consist of planar surfaces. The final outcome of the system is not just an array of individual 3D points. Instead, the 3D model is built in a geometric representation of plane parameters, with geometric relations among different planar surfaces. Based on this 3D model, algorithms can be developed for traversable path planning, obstacle detection and object recognition for assisting the blind in urban navigation.
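
The plane-parameter representation can be illustrated with a least-squares fit per segment; this is a sketch under the assumption that each segment's 3D points are already available from stereo matching:

    import numpy as np

    def fit_plane(points):
        # points: (N, 3) array of 3D points belonging to one image segment.
        # Fit z = a*x + b*y + c by least squares, so the segment is stored
        # as plane parameters rather than raw points.
        A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
        (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        return a, b, c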

Hao Tang, Zhigang Zhu
KinDectect: Kinect Detecting Objects

Detecting humans and objects in images has been a very challenging problem due to variation in illumination, pose, clothing, background and other complexities. Depth information is an important cue when humans recognize objects and other humans. In this work we utilize the depth information that a Kinect-class sensor (Xtion Pro Live) provides to detect humans and obstacles in real time for a blind or visually impaired user. The system runs in two modes. In the first mode, we focus on how to track and/or detect multiple humans and moving objects and transduce the information to the user. In the second mode, we present a novel approach to avoiding obstacles for the safe navigation of a blind or visually impaired user in an indoor environment. In addition, we present a user study with blindfolded users to measure the efficiency and robustness of our algorithms and approaches.

Atif Khan, Febin Moideen, Juan Lopez, Wai L. Khoo, Zhigang Zhu
A System Helping the Blind to Get Merchandise Information

We propose a system helping blind people obtain the best-before/use-by date of perishable foods. The system consists of a computer, a wireless camera and an earphone. It processes images captured by the user and extracts character regions in the image using a Support Vector Machine (SVM). The regions are then processed by Optical Character Recognition (OCR), and the system outputs the best-before/use-by date as synthesized speech.

Nobuhito Tanaka, Yasunori Doi, Tetsuya Matsumoto, Yoshinori Takeuchi, Hiroaki Kudo, Noboru Ohnishi

Human-Computer Interaction for Blind and Partially Sighted People

Accessibility for the Blind on an Open-Source Mobile Platform
MObile Slate Talker (MOST) for Android

As Android handsets keep flooding the shops in a wide range of prices and capabilities, many in the blind community turn their attention to this emerging alternative, especially because of the plethora of cheaper models offered. Earlier, accessibility experts only recommended Android phones sporting a built-in QWERTY keyboard, as touch-screen support had then been in an embryonic state. Since late 2011, with Android 4.x (ICS), this has changed. However, most handsets on the market today, especially the cheaper ones, ship with a pre-ICS Android version. This means that their visually impaired users will not be able to enjoy the latest accessibility innovations. Porting MObile Slate Talker to Android has been aimed at filling this accessibility gap with a low-cost solution, with regard to the special needs of our target audience: the elderly, persons with minimal tech skills, and active Braille users.

Norbert Markus, Szabolcs Malik, Zoltan Juhasz, András Arató
Accessibility of Android-Based Mobile Devices: A Prototype to Investigate Interaction with Blind Users

The study presented in this paper is part of mobile accessibility research with particular reference to the interaction with touch-screen based smartphones. Its aim was to gather information, tips and indications on interaction with a touch-screen by blind users. To this end we designed and developed a prototype for an Android-based platform. Four blind users (two inexperienced and two with experience of smartphones) were involved from the early phase of prototype design. The involvement of inexperienced users played a key role in understanding expectations of smart phones especially concerning touch-screen interaction. Skilled users provided useful suggestions on crucial aspects such as gestures and button position. Although the prototype developed is limited to only a few features for the Android operating system, the results obtained from blind user interaction can be generalized and applied to any mobile device based on a touch-screen. Thus, the results of this work could be useful to developers of mobile operating systems and applications based on a touch-screen, in addition to those working on designing and developing assistive technologies.

Sarah Chiti, Barbara Leporini
TypeInBraille: Quick Eyes-Free Typing on Smartphones

In recent years, smartphones (e.g., the Apple iPhone) have become more and more widespread among visually impaired people. Indeed, thanks to natively available screen readers (e.g., VoiceOver), visually impaired persons can access most smartphone functionalities and applications. Nonetheless, there are still some operations that require a long time or a high mental workload for a visually impaired person to complete. In particular, typing on the on-screen QWERTY keyboard turns out to be challenging in many typical contexts of use of mobile devices (e.g., while moving on a tramcar). In this paper we present the results of an experimental evaluation conducted with visually impaired people to compare the native iPhone on-screen QWERTY keyboard with TypeInBraille, a recently proposed typing technique based on Braille. The experimental evaluation, conducted in different contexts of use, highlights that TypeInBraille significantly improves typing efficiency and accuracy.

Sergio Mascetti, Cristian Bernareggi, Matteo Belotti
Real-Time Display Recognition System for Visually Impaired

Currently, electronic devices incorporating displays to present information to the user are ubiquitous, and visually impaired people may have problems using these devices. This article focuses on developing a real-time display detector and digital character recognition application using techniques based on the connected-components approach. The display zone detection accuracy rate is about 85%, and the recognition rate is greater than 88%. The system was implemented both on a desktop and on a cell phone.

Irati Rasines, Pedro Iriondo, Ibai Díez
A Non-visual Interface for Tasks Requiring Rapid Recognition and Response
An RC Helicopter Control System for Blind People

To implement a user interface for blind people, auditory and tactile outputs have mainly been used. However, an auditory interface is ineffective for tasks that require the rapid recognition that vision enables. Thus, this paper presents a method to achieve rapid recognition with a non-visual user interface. This user interface is implemented in a prototype, fully controllable RC helicopter system for blind people, using a braille display as a tactile output device. This paper also explains the system integration software, named brl-drone, and the hardware components of the system, including the AR.Drone. The AR.Drone is a remotely controlled helicopter; the system uses an auxiliary magnetic sensor and a game controller to solve the problems that arise when a braille display is used as a tactile indicating device.

Kazunori Minatani, Tetsuya Watanabe
Reaching to Sound Accuracy in the Peri-personal Space of Blind and Sighted Humans

With the aim of designing an assistive device for the Blind, we compared the ability of blind and sighted subjects to accurately locate several types of sounds generated in the peri-personal space. Despite a putative lack of calibration of their auditory system with vision, blind subjects performed with a similar accuracy as sighted subjects. The average error was sufficiently low (10° in azimuth and 10 cm in distance) to orient a user towards a specific goal or to guide a hand grasping movement to a nearby object. Repeated white noise bursts of short duration induced better performance than continuous sounds of similar total duration. These types of sound could be advantageously used in an assistive device. They would provide indications about direction to follow or position of surrounding objects, with limited masking of environmental sounds, which are of primary importance for the Blind.

Marc J. -M. Macé, Florian Dramas, Christophe Jouffrais
Hapto-acoustic Scene Representation

The use of the Phantom Omni force feedback device combined with sonification is evaluated in applications for visually impaired people such as medical engineering, numerical simulation, and architectural planning.

Sebastian Ritterbusch, Angela Constantinescu, Volker Koch
Efficient Access to PC Applications by Using a Braille Display with Active Tactile Control (ATC)

Braille displays provide tactile access to information shown on a screen. The invention of Active Tactile Control (ATC) allows the tactile reading position on a Braille display to be detected in real time. Based on ATC, new computer interactions have been implemented. Braille frames allow the simultaneous display of various independent sources of information on a Braille display and are used to improve access to complex applications. A task overview for handling multiple tasks, with direct access to the activated task triggered by the reading position, has been implemented. A tactile notification of a spelling mistake, triggered by the tactile reading position, assists blind users when editing text. A new rule set for blind users' PC interaction based on Active Tactile Control needs to be defined.

Siegfried Kipke
Applications of Optically Actuated Haptic Elements

Commercially available large-area dynamic tactile displays providing access to high-resolution graphics and Braille for blind people are still missing. This problem is not solved by currently available displays in the form of a Braille line. The objective of the project NOMS (Nano-Optical Mechanical Systems) is to solve this problem by using optically activated haptic actuators. These require no hard-to-assemble moving mechanical parts and have the potential for finer resolution. Recently developed carbon-nanotube-enriched photoactive polymers provide the starting technology for this purpose. We present the development of materials of this kind and their integration into tactile displays.

Branislav Mamojka, Peter Teplický
Trackable Interactive Multimodal Manipulatives: Towards a Tangible User Environment for the Blind

This paper presents the development of Trackable Interactive Multi-modal Manipulatives (TIMM). This system provides a multimodal tangible user environment (TUE), enabling people with visual impairments to create, modify and naturally interact with graphical representations on a multitouch surface. The system supports a novel notion of active position, proximity, stacking, and orientation tracking of manipulatives. The platform has been developed and it is undergoing formal evaluation.

Muhanad S. Manshad, Enrico Pontelli, Shakir J. Manshad
Introduction of New Body-Braille Devices and Applications

In this paper, two new Body-Braille devices are described. After the Body-Braille system and its current development status are explained, first, a new device for Braille-based real-time communication over the internet (via Skype) is introduced, and second, a new device for autonomous learning, which adopts wireless communication, is explained. The former has already been developed and is being used in field tests; the latter is currently under development.

Satoshi Ohtsuka, Nobuyuki Sasaki, Sadao Hasegawa, Tetsumi Harakawa
Backmatter
Metadata
Title
Computers Helping People with Special Needs
edited by
Klaus Miesenberger
Arthur Karshmer
Petr Penaz
Wolfgang Zagler
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-31534-3
Print ISBN
978-3-642-31533-6
DOI
https://doi.org/10.1007/978-3-642-31534-3
