
2018 | Book

Universal Access in Human-Computer Interaction. Methods, Technologies, and Users

12th International Conference, UAHCI 2018, Held as Part of HCI International 2018, Las Vegas, NV, USA, July 15-20, 2018, Proceedings, Part I

About this book

This two-volume set LNCS 10907 and 10908 constitutes the refereed proceedings of the 12th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2018, held as part of HCI International 2018 in Las Vegas, NV, USA, in July 2018. The total of 1170 papers and 195 posters included in the 30 HCII 2018 proceedings volumes was carefully reviewed and selected from 4373 submissions.
The 49 papers presented in this volume were organized in topical sections named: design for all, accessibility and usability; alternative I/O techniques, multimodality and adaptation; non-visual interaction; and designing for cognitive disabilities.

Table of Contents

Frontmatter

Design for All, Accessibility and Usability

Frontmatter
A Method for Analyzing Mobility Issues for People with Physical Disabilities in the Context of Developing Countries

In this paper, we propose a method, based on studies available in the literature and on the norms that regulate urban accessibility, to analyze the problems of urban mobility faced by people with physical disabilities in cities of developing countries. To perform this analysis, we carried out a series of activities through participatory workshops and an analysis of route services involving 29 people with physical disabilities or their companions. The results revealed some of the main accessibility problems found in cities and suggested new ways of tracing routes in map applications that take accessibility aspects into account.

Leticia Maria de Oliveira Camenar, Diego de Faria do Nascimento, Leonelo Dell Anhol Almeida
Mobile-PrivAccess: Method for Analyzing Accessibility in Mobile Applications from the Privacy Viewpoint Abiding by W3C

Although accessibility is a right ensured by legislation, it remains a challenge for People with Disability and Limitations (PwDaL), considering limitations such as those deriving from aging and low literacy. In Digital Systems (DS), restrictions on use by PwDaL are easily found, which further excludes them from a society that predominantly interacts through technology. Recognizing the importance of ensuring accessibility and privacy for PwDaL, we investigated the problems of accessibility and digital privacy in the context of mobile applications and propose the Mobile-PrivAccess method. This method applies the W3C guidelines to assess the accessibility of privacy resources in mobile applications without the participation of PwDaL; their absence spares unnecessary effort at the preliminary stages of application assessment. The method, based on an inspection technique, is a four-stage structured process with established goals and support artifacts for specific phases. Applying the method requires two professionals with knowledge of and/or experience in digital inclusion for PwDaL and/or in the W3C standards. To verify the viability of the proposed method, an experiment was performed in which the method was used to assess the accessibility of the Waze app [13] on three operating systems: Android, iOS and Windows Phone. The results demonstrated the viability of the method, indicating that Waze meets few success criteria and presents a number of accessibility barriers to different PwDaL profiles.

Rachel T. Chicanelli, Patricia C. de Souza, Luciana C. Lima de Faria Borges
A Taxonomy for Website Evaluation Tools Grounded on Semiotic Framework

Taxonomies are valuable for providing a standardized way of cataloging elements into categories. In the context of website evaluation tools, providing a structured way for researchers and practitioners to compare and analyze existing solutions is valuable for identifying gaps and trends and for supporting well-informed decisions during development cycles (from planning to deployment). This paper proposes a taxonomy for classifying website evaluation tools grounded on the Semiotic Framework, an artifact from Organizational Semiotics. The taxonomy is structured into 4 main dimensions (i.e., participant-evaluator interaction; effort; automation type; data source) and considers the interaction and effort involving UI evaluation stakeholders. With the proposed taxonomy, we expect to support consistent characterization of website evaluation tools.

Vagner Figueredo de Santana, Maria Cecília Calani Baranauskas
Copy Here, Paste There? On the Challenges of Scaling Inclusive Social Innovations

This article addresses the question of the conditions under which established social innovations aimed at improving social inclusion may be transferred from one specific environmental context to another. Through an in-depth case study on the PIKSL laboratories in Germany, the authors develop insights into the importance of innovation-friendly ecosystems as preconditions for successfully breaching and scaling social innovations. Previous work (cf. [1]) provides a generic understanding of such an ecosystem and proposes a ‘context understanding guide’, which is applied here to a specific social innovation initiative and its goal of scaling its new solution. On the basis of a working definition of inclusive social innovations and a critical reflection on scaling concepts, the authors draft a framework which is then applied to the PIKSL initiative. Subsequently, a set of questions is presented that inclusive social innovation initiatives can answer if they want to systematically plan a dissemination process for their ideas, theories and methodologies. The main outcome of this paper is an instruction on how to apply the context-understanding guide to the scaling process of inclusive social innovations.

Jennifer Eckhardt, Christoph Kaletka, Bastian Pelka
Universal Design of ICT for Emergency Management
A Systematic Literature Review and Research Agenda

The primary objectives of this article are to give a systematic overview of the current state of the emerging research field of Universal Design of Information and Communication Technology (ICT) for Emergency Management, and to highlight high-impact research opportunities to ensure that the increasing introduction of ICT in Emergency Management contributes to removing barriers instead of adding new ones, in particular for the elderly and people with disabilities. A systematic review of the literature on Universal Design, ICT and Emergency Management published between 2008 and 2018 was conducted using a predefined framework. The ultimate goal of this effort is to answer the following questions: (1) How strong is the coverage of research on Universal Design of ICT in Emergency Management across the different categories of Emergency Management ICT tools? (2) Which next steps in research on Universal Design of ICT in Emergency Management have the highest potential impact in terms of improved Emergency Management and reduced Disaster Risk? We identify a set of gaps in the literature, indicating that Universal Design is often not sufficiently taken into account in the development of technology supporting the different phases of the crisis management cycle. We also derive a research agenda based on areas missing from the literature, to serve future research on Universal Design and Emergency Management.

Terje Gjøsæter, Jaziar Radianti, Weiqin Chen
When Universal Access Does not Go to Plan: Lessons to Be Learned

While the theory of designing for Universal Access is increasingly well understood, there remain persistent issues in realising products and systems that meet the goal of being accessible and usable by the broadest possible set of users. Clearly, products or services designed without even considering the needs of the wider user base are implicitly going to struggle to be universally accessible. However, even products that have been designed in the knowledge that they are to be used by broad user bases frequently still struggle to achieve the ambition of being universally accessible. This paper examines a number of such products that did not achieve, at least initially, the desired level of universal accessibility. Principal recommendations from each case study are presented to provide a guide to common issues to be avoided.

Simeon Keates
Categorization Framework for Usability Issues of Smartwatches and Pedometers for the Older Adults

In recent years, various usability issues related to device characteristics of quantified-self wearables such as smartwatches and pedometers have been identified which appear likely to impact device adoption among older adults. However, an overall framework has not yet been developed to provide a comprehensive set of usability issues related to smartwatches and pedometers. This study used a two-stage research approach with 33 older participants, applying contextual action theory and usability evaluation methods both to determine perceived usability issues and to formulate a usability categorization framework based on the identified issues. Additionally, we prioritized the predominant usability issues of smartwatches and pedometers that warrant immediate attention from technology designers, the research community, and application developers. Results revealed predominant usability issues related to the following device characteristics: for smartwatches, the user interface (font size, interaction techniques such as notification and button location) and hardware (screen size); and for pedometers, the user interface (font size, interaction techniques such as notification, button location, and tap detection) and hardware (screen size).

Jayden Khakurel, Antti Knutas, Helinä Melkas, Birgit Penzenstadler, Bo Fu, Jari Porras
Towards a Framework for the Design of Quantitative Experiments: Human-Computer Interaction and Accessibility Research

Many students and researchers struggle with the design and analysis of empirical experiments. Such issues may be caused by a lack of knowledge about inferential statistics and suitable software tools. Often, students and researchers conduct experiments without having a complete plan for the entire lifecycle of the process. Difficulties associated with the statistical analysis are often ignored. Consequently, one may end up with data that cannot be easily analyzed. This paper discusses the concept sketch of a framework intended to help students and researchers design correct empirical experiments by making sound design decisions early in the research process. The framework consists of an IDE, i.e., an Integrated (statistical experiment) Development Environment. This IDE helps the user structure an experiment by giving continuous feedback that draws the experimenter’s attention towards potential problems. The output of the IDE is an experimental structure and data format that can be imported into common statistical packages such as JASP, in addition to guidance about which tests to use.

Frode Eika Sandnes, Evelyn Eika, Fausto Orsi Medola
A Strategy on Introducing Inclusive Design Philosophy to Non-design Background Undergraduates

This paper focuses on how to integrate design into crossover education, a much-debated topic in China’s education system. Chinese colleges and universities are making great efforts to set up crossover education, firstly because they consider it vitally important for students to broaden their horizons, and secondly because more and more projects require diverse professionals to cooperate, all of whom need to understand design thinking. The question that follows is how, unlike for students with a design-major background, design curricula can be transformed so that they are easier for students from other backgrounds to accept and assimilate, and how design thinking can be cultivated in crossover education; we argue that this is what educators most need to concentrate on. This paper therefore focuses on how to introduce inclusive design philosophy to non-design background undergraduates. It is part of the research project “Applied universities’ design education reform and practice based on the principle of inclusive design” supported by the Shanghai Education Science Research Program (Grant No. C17067) [1].

Shishun Wang, Ting Zhang, Guoying Lu, Yinyun Wu

Alternative I/O Techniques, Multimodality and Adaptation

Frontmatter
Stabilising Touch Interactions in Cockpits, Aerospace, and Vibrating Environments

Incorporating touch screen interaction into cockpit flight systems is increasingly gaining traction, given its several potential advantages for design as well as usability for pilots. However, perturbations to the user input are prevalent in such environments due to vibrations, turbulence and high accelerations. This poses particular challenges for interacting with displays in the cockpit, for example, accidental activation during turbulence or high levels of distraction from the primary task of airplane control to accomplish selection tasks. On the other hand, predictive displays have emerged as a solution to minimize the effort as well as the cognitive, visual and physical workload associated with using in-vehicle displays under perturbations induced by road and driving conditions. This technology employs gesture tracking in 3D, and potentially eye-gaze as well as other sensory data, to substantially facilitate the acquisition (pointing and selection) of an interface component by predicting the item the user intends to select on the display early in the movement towards the screen. A key aspect is utilising principled Bayesian modelling to incorporate and treat the present perturbations (an illustrative sketch of this kind of intent prediction follows the author listing); thus, it is a software-based solution that has shown promising results in automotive applications. This paper explores the potential of applying this technology to aerospace and vibrating environments in general and presents design recommendations for such an approach to enhance interaction accuracy as well as safety.

B. I. Ahmad, Patrick M. Langdon, S. J. Godsill
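The following minimal sketch illustrates the kind of Bayesian intent prediction described in the abstract above: it assumes the finger moves roughly straight towards the intended item with Gaussian heading noise, and scores each on-screen target by how well the observed trajectory points at it. The target names, noise level, and uniform prior are hypothetical; the paper's actual model, which also treats perturbations and 3D gesture tracking, is more sophisticated.

import numpy as np

def predict_intent(trajectory, targets, sigma_deg=15.0):
    """Posterior over on-screen targets given a partial pointing trajectory.

    trajectory: sequence of successive fingertip positions (x, y).
    targets:    dict name -> (x, y) screen position.
    sigma_deg:  assumed angular noise of the heading (hypothetical value).
    Returns a dict name -> posterior probability.
    """
    traj = np.asarray(trajectory, dtype=float)
    moves = np.diff(traj, axis=0)                 # observed movement vectors
    log_post = {name: 0.0 for name in targets}    # uniform prior (log space)
    sigma = np.radians(sigma_deg)
    for name, pos in targets.items():
        for start, move in zip(traj[:-1], moves):
            if np.linalg.norm(move) < 1e-6:
                continue                          # ignore stationary samples
            to_target = np.asarray(pos) - start
            # angle between the observed move and the direction to this target
            cosang = np.dot(move, to_target) / (
                np.linalg.norm(move) * np.linalg.norm(to_target) + 1e-9)
            ang = np.arccos(np.clip(cosang, -1.0, 1.0))
            # Gaussian likelihood of the heading error
            log_post[name] += -0.5 * (ang / sigma) ** 2
    m = max(log_post.values())
    unnorm = {k: np.exp(v - m) for k, v in log_post.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

# Hypothetical usage: three buttons, finger moving towards "radio".
targets = {"map": (0.1, 0.8), "radio": (0.9, 0.8), "climate": (0.5, 0.1)}
trajectory = [(0.5, 0.5), (0.6, 0.57), (0.7, 0.65), (0.78, 0.7)]
print(predict_intent(trajectory, targets))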
MyoSL: A Framework for Measuring Usability of Two-Arm Gestural Electromyography for Sign Language

Several Sign Language (SL) systems have been developed using various technologies: Kinect, armbands, and gloves. The majority of these studies never considered user experience as part of their approach. We therefore propose a new framework that improves usability by employing two-arm gestural electromyography instead of typical vision-based systems. Interactions can be considered seamless and natural in this way. In this preliminary study, we conducted focus group discussions and usability tests with signers. Based on the results of the usability tests, 90% of respondents found the armband comfortable. The respondents also stated that the armband was not intrusive when they tried to perform their sign gestures. At the same time, they found it aesthetically pleasing. Additionally, we produced an initial prototype from this experimental setup and tested it in several conversational scenarios. With this approach, we enable an agile framework that caters to the needs of the signer-user.

Jordan Aiko Deja, Patrick Arceo, Darren Goldwin David, Patrick Lawrence Gan, Ryan Christopher Roque
Evaluating Devices for Object Rotation in 3D

An experiment with 12 participants was conducted to compare the performance of a mouse, a mobile phone accelerometer, and a joystick in a 3D rotation task. The 3D rotation task was designed to measure throughput, the user performance metric specified in ISO 9241-9 (an illustrative sketch of the throughput computation follows the author listing). The mouse had a throughput and error rate of 4.09 bps and 0.88%, respectively, the mobile phone 2.05 bps and 3.46%, and the joystick 2.42 bps and 1.76%. The differences in throughput were significant between the mouse and both the mobile phone and joystick, but not between the mobile phone and joystick. There was a significant difference in error rate only between the mouse and mobile phone conditions. The mobile phone condition did not appear to conform to Fitts’ law, as the task index of difficulty had no apparent relationship with movement time. This was most likely caused by reaction time and homing time for that condition.

Sean DeLong, I. Scott MacKenzie
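As context for the throughput figures reported above, the sketch below shows one common way an ISO 9241-9 style throughput is computed from trial data (effective index of difficulty divided by mean movement time). The trial values are invented for illustration, and the exact adaptation of the metric to a 3D rotation task may differ from what the authors used.

import math
import statistics

def throughput(amplitude, endpoint_deviations, movement_times_s):
    """ISO 9241-9 style throughput for one condition.

    amplitude:            nominal movement amplitude A (same units as deviations).
    endpoint_deviations:  per-trial deviation of the selection endpoint from the
                          target centre, along the task axis.
    movement_times_s:     per-trial movement times in seconds.
    """
    # Effective width from the spread of endpoints (4.133 = 2 * 2.066).
    we = 4.133 * statistics.stdev(endpoint_deviations)
    ide = math.log2(amplitude / we + 1)       # effective index of difficulty (bits)
    mt = statistics.mean(movement_times_s)    # mean movement time (s)
    return ide / mt                           # throughput in bits per second

# Hypothetical trial data for one device condition.
deviations = [0.5, -0.3, 0.8, -0.6, 0.2, 0.4, -0.7, 0.1]
times_s = [0.92, 1.05, 0.88, 1.10, 0.97, 1.01, 0.95, 1.08]
print(f"throughput = {throughput(10.0, deviations, times_s):.2f} bps")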
Interaction Techniques to Promote Accessibility in Games for Touchscreen Mobile Devices: A Systematic Review

Games for touchscreen mobile devices have become a part of popular culture, reaching beyond the limits of entertainment. However, while touchscreen devices have become one of the most far-reaching gaming platforms, there are very few studies that consider accessibility issues for People with Disabilities (PwD). In this scenario, this work presents the results of a Systematic Review (SR) that allowed us to identify interaction techniques and strategies being applied on touchscreen devices to promote accessibility for PwD with motor-coordination impairments. From the results of the SR, we identified not only interaction techniques that promote accessibility, but also low-cost, short-development-time adjustment parameters that can improve the interaction of PwD with motor-coordination impairments in 3D virtual environments (VEs). We noticed that providing accessibility adjustments to meet different player profiles, considering their limitations in motor coordination, can make a difference in the player’s experience.

Eunice P. dos Santos Nunes, Vicente Antônio da Conceição Júnior, Luciana C. Lima de Faria Borges
A Collaborative Virtual Game to Support Activity and Social Engagement for Older Adults

Many older adults suffer from Alzheimer’s disease or other dementias and have affected cognitive abilities. In general, physical exercise, cognitive stimulation, and social engagement have been found to be beneficial for the physical and mental health of older adults with and without cognitive impairment. In an effort to address these needs, researchers have been developing human-machine interaction (HMI) systems to administer activity-oriented therapies. However, most of these systems, while promising, focus on one-on-one interaction with the computer and thus do not support social engagement involving multiple older adults. In this paper, we present the design and development of a motion-based collaborative virtual environment (CVE) application to support both activity and social engagement. The CVE task is based on a book-sorting activity and has embedded collaborative components to encourage human-human interaction (HHI). The system records quantitative data regarding users’ performance, interaction frequency, and social interaction. A preliminary user study was conducted to validate system usability and to test older adults’ tolerance and acceptance of the motion-based user interface (UI) as well as the CVE task. The results showed the usability of the motion-based UI and the system’s capability to assess HMI and HHI from the recorded quantitative data. The results from the post-test and the analysis of audio files indicated that the system might be potentially useful. Further user studies and data analysis need to be conducted to investigate the CVE system further.

Jing Fan, Linda Beuscher, Paul Newhouse, Lorraine C. Mion, Nilanjan Sarkar
Evaluation of an English Word Look-Up Tool for Web-Browsing with Sign Language Video for Deaf Readers

Research has shown that some people who are Deaf or Hard of Hearing (DHH) in the U.S. have lower levels of English language literacy than their hearing peers, which creates a barrier to accessing web content for these users. We have designed an interface to assist these users in reading English text on web pages; users can click on certain marked words to view an ASL sign video in a pop-up. A user study was conducted to evaluate this tool and compare it with web pages containing only text, as well as pages where users can click on words and see text definitions using the Google Dictionary plug-in for browsers. The study assessed participants’ subjective preference for these conditions and compared their performance in completing reading comprehension tasks with each of these tools. We found that participants preferred having support tools in their interface as opposed to none, but we did not measure a significant difference in their preferences between the two support tools provided. This paper presents the details of the design and development of the proposed tool, the design guidelines applied to the prototype, factors influencing the results, and directions for future work.

Dhananjai Hariharan, Sedeeq Al-khazraji, Matt Huenerfauth
Gesture-Based Vehicle Control in Partially and Highly Automated Driving for Impaired and Non-impaired Vehicle Operators: A Pilot Study

A concept for shared and cooperative guidance and control based on the H-Metaphor is developed, implemented and presented in this paper. In addition, a pilot study with a small user group conducted in a static driving simulator is discussed. The concept enables communication between an automated vehicle and the driver, who is requested to take over driving in a conditionally automated driving mode. The request is communicated to the driver by tactile feedback in a sidestick, which is used for control of the automated vehicle. Two different ways of issuing the take-over request are investigated and later compared in a survey on “Perceived Utility”, “Perceived Safety”, “User Satisfaction” and “Perceived Usability”. The study is a pilot study investigating interaction paradigms suitable for automated vehicles used by impaired people, whose vehicles are frequently operated by joysticks. The outcomes of the study are used as a basis for further research.

Ronald Meyer, Rudolf Graf von Spee, Eugen Altendorf, Frank O. Flemisch
Real-Time Implementation of Orientation Correction Algorithm for 3D Hand Motion Tracking Interface

This paper outlines the real-time implementation of an orientation correction algorithm using the gravity vector and the magnetic North vector for a miniature, commercial-grade Inertial Measurement Unit (IMU) to improve orientation tracking in a 3D hand motion tracking interface. The algorithm uses a sensor fusion approach to determine the correct orientation of human hand motion in a 3D environment. The bias offset error is the IMU’s systematic error and can cause a problem in orientation tracking called drift. The algorithm is able to determine the bias offset error and update the gyroscope reading to obtain unbiased angular velocity. Furthermore, the algorithm corrects the initial orientation estimate using additional reference sources, namely the gravity vector measured by the accelerometer and the magnetic North vector measured by the magnetometer, improving the estimated orientation (a simplified sketch of this kind of correction follows the author listing). The orientation correction algorithm is implemented in real time within Unity, along with position tracking through a system of infrared cameras. To validate the performance of the real-time implementation, the orientation estimated by the algorithm and the position obtained from the infrared cameras are applied to a 3D hand model. An experiment requiring the acquisition of cubic targets within a 3D environment using the 3D hand motion tracking interface was performed 30 times. Experimental results show that the algorithm can be implemented in real time and can eliminate drift in orientation tracking.

Nonnarit O-larnnithipong, Armando Barreto, Neeranut Ratchatanantakit, Sudarat Tangnimitchok, Francisco R. Ortega
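The sketch below gives a much-simplified, complementary-filter style illustration of the general idea described in the abstract: estimate the gyroscope bias offset at rest, integrate the bias-corrected angular velocity, and correct the estimate with the gravity vector from the accelerometer. The paper's actual algorithm also uses the magnetometer for heading and runs in real time within Unity, so this is only an assumption-laden approximation, not the authors' implementation.

import numpy as np

def estimate_bias(gyro_samples_rad_s):
    """Estimate the gyroscope bias offset from samples taken while the sensor is still."""
    return np.mean(np.asarray(gyro_samples_rad_s), axis=0)

def complementary_filter(gyro, accel, dt, bias, alpha=0.98, rp=(0.0, 0.0)):
    """One roll/pitch update using bias-corrected gyro rates and the gravity reference.

    gyro:  (gx, gy, gz) angular rates in rad/s.
    accel: (ax, ay, az) accelerometer reading (gravity dominant when still).
    dt:    sample period in seconds.
    bias:  gyro bias offset estimated at rest.
    rp:    previous (roll, pitch) estimate in radians.
    """
    gx, gy, _ = np.asarray(gyro) - bias            # remove bias offset (anti-drift)
    roll_g = rp[0] + gx * dt                       # integrate unbiased angular velocity
    pitch_g = rp[1] + gy * dt
    ax, ay, az = accel
    roll_a = np.arctan2(ay, az)                    # gravity-vector reference
    pitch_a = np.arctan2(-ax, np.hypot(ay, az))
    # Blend: trust the gyro short-term, the gravity reference long-term.
    roll = alpha * roll_g + (1 - alpha) * roll_a
    pitch = alpha * pitch_g + (1 - alpha) * pitch_a
    return roll, pitch

# Hypothetical usage with a stationary sensor: the estimate should stay near zero.
bias = estimate_bias([(0.01, -0.02, 0.005)] * 200)
rp = (0.0, 0.0)
for _ in range(100):
    rp = complementary_filter((0.01, -0.02, 0.005), (0.0, 0.0, 9.81), 0.01, bias, rp=rp)
print(rp)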
Haptic Information Access Using Touchscreen Devices: Design Guidelines for Accurate Perception of Angular Magnitude and Line Orientation

The overarching goal of our research program is to address the long-standing issue of non-visual graphical accessibility for blind and visually-impaired (BVI) people through the development of a robust, low-cost solution. This paper contributes to our research agenda aimed at studying key usability parameters governing accurate rendering and perception of haptically-accessed graphical materials via commercial touchscreen-based smart devices, such as smartphones and tablets. The current work builds on the findings of our earlier studies by empirically investigating the minimum angular magnitude that must be maintained for accurate detection and angular judgment of oriented vibrotactile lines. To assess the minimum perceivable angular magnitude (i.e., chord length) between oriented lines, a psychophysically-motivated usability experiment was conducted that compared accuracy in oriented line detection across four angles (2°, 5°, 9°, and 22°) and two radii (1-in. and 2-in.). Results revealed that a minimum 4 mm chord length (which corresponds to 5° at a 1-in. radius and 2° at a 2-in. radius) must be maintained between oriented lines to support accurate haptic perception via vibrotactile cuing. Findings provide foundational guidelines for converting and rendering oriented lines on touchscreen devices to support haptic information access based on vibrotactile stimuli.

Hari Prasath Palani, G. Bernard Giudice, Nicholas A. Giudice
Brain Controlled Interface Log Analysis in Real Time Strategy Game Matches

Emotions are an important aspect that affects human interaction with systems and applications. The correlation of emotional and affective state with game interaction data is a relevant issue, since it can explain player behavior and the outcome of a digital game match. In this work, we present an initial exploratory study analyzing interaction log data and its correlation with data from an off-the-shelf Brain Controlled Interface (BCI) that collected excitement in an RTS (Real-Time Strategy) game. Our results showed moderate correlations with players’ preferences and amount of interaction. Additionally, we found in the interaction and game logs that the choice of character significantly impacts the time spent in data-driven levels of excitement. We did not find statistically significant differences in excitement for other factors such as player ranking, game style, map, and opponent character.

Mauro C. Pichiliani
M2TA - Mobile Mouse Touchscreen Accessible for Users with Motor Disabilities

This paper addresses the accessibility challenges that people with motor impairments face in accessing the computer. Our focus is a new mouse design, since the traditional ergonomics of the mouse may hinder interaction with a computer and, consequently, with the Web. We introduce the design and development of a mobile application, M2TA, which transforms a touchscreen mobile device into a mouse controller. The mobile application provides more flexible and customizable interfaces, is portable, and is cheaper. Two users with motor limitations caused by cerebral palsy participated in the development process of M2TA. They freely used the mobile interfaces to interact with computer applications of their preference. We aimed to observe possible bugs and receive suggestions for improving M2TA, and we also collected their satisfaction with the use of the M2TA interfaces. Preliminary results are promising and indicate a good level of acceptance. Further studies are in progress to attest to the potential of M2TA, such as improving the quality of life of people with neuropsychomotor sequelae caused by Traumatic Brain Injury (TBI) and stroke.

Agebson Rocha Façanha, Maria da Conceição Carneiro Araújo, Windson Viana, Jaime Sánchez
Multi-switch Scanning Keyboards: A Theoretical Study of Simultaneous Parallel Scans with QWERTY Layout

Scanning keyboards can be useful aids for individuals with reduced motor function. However, scanning input techniques are known for being very slow to use because they require waiting for the right cell to be highlighted during each character input cycle. This study explores the idea of parallel scanning keyboards controlled with multiple switches and their theoretical effects on performance. The designs explored assume that the keyboard layouts are familiar to users and that the mappings between the switches and the keyboard are natural and direct. The results show that the theoretical performance increases linearly with the number of switches used (an illustrative model follows the author listing). Future work should perform user tests with parallel scans to assess the practicality of this approach.

Frode Eika Sandnes, Evelyn Eika, Fausto Orsi Medola
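A back-of-the-envelope model of why performance scales roughly linearly with the number of switches: if the layout is split into regions that are scanned simultaneously, the expected number of highlight steps per character shrinks roughly in proportion to the number of regions. The model below assumes uniform character probabilities, a linear scan within each region, and one step per cell; the paper's theoretical analysis may use a different layout and probability model.

QWERTY = "QWERTYUIOPASDFGHJKLZXCVBNM "  # simple layout incl. space (illustrative)

def expected_scan_steps(layout, num_switches):
    """Expected highlight steps per character when the layout is split into
    num_switches regions that are scanned simultaneously, one switch per region.
    Assumes uniform character probabilities and a linear scan within a region."""
    n = len(layout)
    region = -(-n // num_switches)              # ceil division: cells per region
    # A character at position i within its region needs i + 1 steps to be reached.
    steps = [(i % region) + 1 for i in range(n)]
    return sum(steps) / n

for k in (1, 2, 3, 4):
    print(f"{k} switch(es): {expected_scan_steps(QWERTY, k):.1f} expected steps/char")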
Towards Multi-modal Interaction with Interactive Paint

We present a Multi-Modal Interactive Paint application. Our work is intended to illustrate shortcomings in current multi-modal interaction and to present design strategies that address and alleviate these issues, in particular from an input perspective for use in a regular desktop environment. A series of challenges is listed, and each is addressed individually with its corresponding strategies in our discussion of design practices for multi-modality. We also identify areas that we will improve in future iterations of similar multi-modal interaction applications, based on the findings identified in this paper. These improvements should alleviate shortcomings in our current design and provide further opportunities to research multi-modal interaction.

Nicholas Torres, Francisco R. Ortega, Jonathan Bernal, Armando Barreto, Naphtali D. Rishe

Non-Visual Interaction

Frontmatter
Nateq Reading Arabic Text for Visually Impaired People

Nateq is a system developed to aid visually impaired people in their daily life tasks. Nateq allows blind users to read text written on papers and labels using their mobile phones. It reads text from two sources: the camera or the photo gallery. In camera mode, the system automatically captures the image once the object is sufficiently detected, with an option to capture the image of the object manually. To increase accuracy, a novel approach was implemented to ensure the correctness of the extracted text by adding rectangular boundary detection to the system. It helps the user avoid partially capturing the object, which may lead to extracting incomplete sentences. Testing with target users showed a high level of satisfaction with the improvement made in the field of assistive applications, with the overall process being faster in comparison to similar applications on the market.

Omaimah Bamasag, Muna Tayeb, Maha Alsaggaf, Fatimah Shams
Designing a 2 × 2 Spatial Vibrotactile Interface for Tactile Letter Reading on a Smartphone

In this paper, an eyes-free tactile reading system on a smartphone is proposed. The system adopts a 2 × 2 array of flat vibration motors attached to the back of a smartphone, and spatial tactile feedback is generated and applied to the palm while the user holds the device. Tactile reading of the 26 English letters was designed using spatial vibration codes (an illustrative sketch follows the author listing). The glyph shapes of the English letters and their stroke order were borrowed to minimize the tactile code learning curve for users. Numerous user experiments were conducted to tune important design parameters, such as the distance between motors and the vibration time. Results showed that a 3-cm distance between motors and a 200-ms vibration time are appropriate for designing an efficient system. The accuracy of tactile letter reading was 84.6%, the reading time was 976.9 ms per letter, and the system provides an efficient tactile reading technique for users in eyes-free interaction.

Shaowei Chu, Mei Peng
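To make the notion of "spatial vibration codes" concrete, the sketch below plays a letter as a timed sequence of the four motor positions. The two example codes, the inter-pulse gap, and the console "driver" are purely hypothetical stand-ins; only the 200 ms pulse duration comes from the abstract.

import time

# Motor indices on the back of the phone (2 x 2 array):
#   0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right
# Hypothetical spatial codes loosely following letter stroke order.
LETTER_CODES = {
    "L": [0, 2, 3],        # down the left side, then across the bottom
    "C": [1, 0, 2, 3],     # around three sides, open on the right
}

PULSE_S = 0.200            # 200 ms vibration time, as reported in the abstract
GAP_S = 0.100              # hypothetical inter-pulse gap

def buzz(motor_index, duration_s):
    """Stand-in for a real motor driver: just log the activation."""
    print(f"motor {motor_index} on for {int(duration_s * 1000)} ms")
    time.sleep(duration_s)

def play_letter(letter):
    """Render one letter as a sequence of spatial vibration pulses."""
    for motor in LETTER_CODES[letter.upper()]:
        buzz(motor, PULSE_S)
        time.sleep(GAP_S)

play_letter("L")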
LêRótulos: A Mobile Application Based on Text Recognition in Images to Assist Visually Impaired People

The autonomy of visually impaired people can be evaluated in day-to-day activities such as recognizing objects and identifying textual information, among others. This paper presents the OCR-based LêRótulos application, whose objective is to help visually impaired users identify textual information on objects captured by the camera of a smartphone. The design of the prototype followed guidelines and recommendations for usability and accessibility, aiming at greater user autonomy. An evaluation was carried out with specialists and end users in real situations of use. The results indicated that the application has good usability and meets accessibility criteria for blind and low-vision users, although some improvements were indicated. The paper presents related work, the LêRótulos design process, the results of the usability and accessibility assessments, and lessons learned for the development of assistive technology aimed at visually impaired users.

Juliana Damasio Oliveira, Olimar Teixeira Borges, Vanessa Stangherlin Machado Paixão-Cortes, Marcia de Borba Campos, Rafael Mendes Damasceno
Information Design on the Adaptation of Evaluation Processes’ Images to People with Visual Impairment

People with visual impairment have the right to access tests and evaluation processes of various kinds. Accordingly, adapted tests must be presented in a way that allows visually impaired candidates to demonstrate their knowledge in the same way as others. In this context, the importance of adapting images and complex information for an adequate comprehension of the questions is highlighted. The aim of the present paper is to explore the adaptation processes of tests for people with visual impairment, as well as the role of information design in the production of tactile images to assist the evaluation process. Through a literature review, this paper presents five examples of tests applied to visually impaired candidates, focusing on the way tactile images were presented and how the candidates participated in the process. As a result, the importance of the adapted image, as well as the need for evaluation processes that explore diversified means of comprehension, is verified.

Fernanda Domingues, Emilia Christie Picelli Sanches, Claudia Mara Scudelari de Macedo
Cognitive Impact Evaluation of Multimodal Interfaces for Blind People: Towards a Systematic Review

Visual disability has a major impact on people’s quality of life. Although there are many technologies to assist people who are blind, most of them do not necessarily guarantee the effectiveness of the intended use. We therefore conducted a systematic literature review concerning the cognitive impact evaluation of multimodal interfaces for blind people. We report in this paper the preliminary results of the systematic literature review, with the purpose of understanding how cognitive impact is currently evaluated when using multimodal interfaces for blind people. Among the twenty-five papers retrieved from the systematic review, we found a high diversity of experiments. Some of them do not present the data clearly and do not apply statistical methods to support the results. Beyond this, other points related to the experiments are analyzed. We conclude that there is a need to better plan and present data from experiments on technologies for the cognition of blind people. Moreover, as the next step in this research, we will investigate these preliminary results with a qualitative analysis.

Lana Mesquita, Jaime Sánchez, Rossana M. C. Andrade
Keyboard and Screen Reader Accessibility in Complex Interactive Science Simulations: Design Challenges and Elegant Solutions

Interactive science simulations are commonly used educational tools that, unfortunately, present many challenges for robust accessibility. The PhET Interactive Simulations project creates a suite of widely used HTML5 interactive science simulations and has been working to advance the accessibility of these simulations for users of alternative input devices (including keyboards) and screen reader software. To provide a highly interactive experience for students, science simulations are often designed to encourage interaction with real-world or otherwise physical objects, resulting in user interface elements that are either unrecognizable as native HTML elements or require fully custom implementations and interactions. Here, we highlight three examples of simulation design scenarios that presented challenges for keyboard and screen reader access. For each scenario, we describe our initial approach, the challenges encountered, and what we have found to be the most elegant solution to address these challenges to date. By sharing our approaches to design and implementation, we aim to contribute to the general knowledge base of effective strategies to support the advancement of accessibility for all educational interactives.

Emily B. Moore, Taliesin L. Smith, Jesse Greenberg
Fair Play: A Guidelines Proposal for the Development of Accessible Audiogames for Visually Impaired Users

The area of games, digital entertainment, and development of assistive technologies is constantly growing. However, there are still groups of users who face barriers to using games, such as visually impaired people. Audiogames, defined as games based on a sound interface, have been an initiative for the inclusion of this audience. Nevertheless, these are not always games with good accessibility. To address this issue, this study presents Fair Play, a set of 33 guidelines for audiogame design, aiming to promote good accessibility, gameplay, and usability in audiogames. Fair Play was proposed based on the results of a literature review. The guidelines were validated in 6 steps, detailed in this study, and are also available online for the use of the community.

Olimar Teixeira Borges, Juliana Damasio Oliveira, Marcia de Borba Campos, Sabrina Marczak
Comparison of Feedback Modes for the Visually Impaired: Vibration vs. Audio

Mobile computing has brought a shift from physical keyboards to touch screens. This has created challenges for the visually impaired. Touch screen devices place greater demands on the sense of touch for the visually impaired, thus requiring alternative ways to improve accessibility. In this paper, we examine the use of vibration and audio as alternative ways to provide location guidance when interacting with touch screen devices. The goal was to create an interface where users can press on the touch screen and take corrective actions based on non-visual feedback. With vibration feedback, different types of vibration indicated proximity to the goal; with audio feedback, different tones did so (an illustrative sketch of such a mapping follows the author listing). Based on the feedback, users were required to find a set of predetermined buttons. User performance in terms of speed and efficiency was gathered. It was determined that vibration feedback took on average 41.3% longer than audio in terms of time to reach the end goal. Vibration feedback was also less efficient than audio, taking on average 35.2% more taps to reach the end goal. Even though the empirical evidence favored audio, six out of 10 participants preferred vibration feedback due to its benefits and usability in real life.

Sibu Varghese Jacob, I. Scott MacKenzie
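The sketch below illustrates one possible way the two non-visual feedback modes could map distance-to-target onto feedback: a higher tone or a more distinct vibration pattern as the finger gets closer. The thresholds, tone range, and three vibration levels are assumptions for illustration; the study's actual mapping may differ.

def audio_feedback_hz(distance, max_distance, low_hz=220.0, high_hz=880.0):
    """Map distance to the target onto a tone frequency: nearer -> higher pitch."""
    closeness = 1.0 - min(distance / max_distance, 1.0)
    return low_hz + closeness * (high_hz - low_hz)

def vibration_feedback_level(distance, max_distance):
    """Map distance onto one of three discrete vibration patterns."""
    ratio = min(distance / max_distance, 1.0)
    if ratio < 0.2:
        return "short double pulse (very close)"
    if ratio < 0.6:
        return "single medium pulse (getting closer)"
    return "long slow pulse (far away)"

# Hypothetical press 30 px from a target, with a 400 px maximum distance.
print(round(audio_feedback_hz(30, 400)), "Hz")
print(vibration_feedback_level(30, 400))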
Ultrasonic Waves to Support Human Echolocation

In this paper, a new device and methods for obtaining an acoustic image of the environment are proposed. The device can be used as an electronic aid for people who are visually impaired or blind. The paper presents current methods of human echolocation and current research on electronic aids. It also describes the technical basics and implementation of the audible high-resolution ultrasonic sonar, followed by a first evaluation of the device. The paper concludes with a discussion and a comparison with classical methods of active human echolocation.

Florian von Zabiensky, Michael Kreutzer, Diethelm Bienhaus
Wayfinding Board Design for the Visually Impaired Based on Service Design Theory

Visually impaired people have difficulty finding their way in public places, and current wayfinding systems usually neglect their demands. Many studies have focused on designing wayfinding systems for visually impaired people, but current approaches have some disadvantages in practice. In this paper, a color-related and QR-code-enhanced wayfinding system is proposed to provide a wayfinding service for visually impaired people. The design of the proposed system is based on service design theory. The proposed system has the advantages of low cost, easy updating, and high effectiveness.

Wanru Wang, Xinxiong Liu

Designing for Cognitive Disabilities

Frontmatter
Design of an Assistive Avatar in Improving Eye Gaze Perception in Children with ASD During Virtual Interaction

Children diagnosed with autism spectrum disorder (ASD) usually experience impairment in social interaction and often display reduced gaze sharing when interacting with another person. The lack of gaze sharing or joint attention early in a child’s development may delay their ability to learn new things and share information with others. The presented study involved the design of a novel virtual reality (VR)-based training game with an avatar and eye tracker aimed at eventually addressing the joint attention impairment in children with ASD. The assistive avatar provides necessary cues and hints based on both the eye tracking data recorded by the VR system and the task performance of the participant. The system uses the task performance to adaptively change the difficulty level of the game. We believe that the training game will be able to improve participants’ gaze-following skills. A usability study was carried out to validate the system design. The results showed that the system was feasible and able to obtain the expected gaze performance from the participants. The details of the system architecture and the results of the system validation are presented in this paper.

Ashwaq Zaini Amat, Amy Swanson, Amy Weitlauf, Zachary Warren, Nilanjan Sarkar
ICT to Support Dental Care of Children with Autism: An Exploratory Study

The dental health of children with autism presents many challenges, since they usually perceive sensory experiences differently and have problems accepting unknown social contexts. In a dental care setting, there are many strong audio-visual stimuli that are not experienced in any other setting. This usually upsets a patient with autism, often forcing dentists to administer chemical sedation in order to carry out dental work. Recently, many technology-enhanced systems and apps have been proposed to help people with autism adapt to and cope with distressing situations. However, few studies have attempted to exploit ICT to simplify dental care for people with autism. This study explores the potential of personalized digital tools to help children with autism become familiar with dental care procedures and environments and to learn how to perform proper oral hygiene at home. To this aim, we carried out a 3-month exploratory study involving a multidisciplinary team of researchers, developers, dentists, psychologists, parents and ten children with autism observed under natural conditions during their first dental care cycle. The results appear to confirm the potential of technology for reducing anxiety in professional settings, increasing children’s wellbeing and safety. The main contribution of this paper is the detailed account of this exploratory study and the discussion of the results obtained. Moreover, we outline the user requirements of an accessible and customizable multimodal platform to help dentists and families facilitate ASD children’s dental care according to the methodology described here.

Mariasole Bondioli, Maria Claudia Buzzi, Marina Buzzi, Susanna Pelagatti, Caterina Senette
Design of an Interactive Gesture Measurement System for Down Syndrome People

Usability and accessibility are perhaps the most important issues when using software. In our research, we propose measuring the types of movements that may affect the interactions of people with Down syndrome when using touch gestures on devices, body gestures in console games, and eye gestures with glasses or other devices. In this research work, we present our approach and a process description for the acquisition of empirical data whereby we shall observe the differences among interaction gestures performed on different devices by people with trisomy 21.

Marta del Rio Guerra, Jorge Martin Gutierrez, Luis Aceves
Assistive Technologies for People with Cognitive Impairments – Which Factors Influence Technology Acceptance?

While convincing and empirically well-tested models for the acceptance of technical systems exist in the general field of acceptance research, only a few studies have been carried out on the acceptance of assistive software systems. Appropriate acceptance models play an important role, especially for user-centered and participative software development and quality assurance. In this article, the most important models from general acceptance research are briefly introduced. Based on the results of an acceptance study of an app for independent media access for users with cognitive impairments, a proposal for an acceptance model of assistive technology was developed, in which personal and environmental factors are considered more strongly than in the classical acceptance models.

Susanne Dirks, Christian Bühler
Designing Wearable Immersive “Social Stories” for Persons with Neurodevelopmental Disorder

“Social stories” are used in educational interventions for subjects with Neurodevelopmental Disorder (NDD) to help them gain an accurate understanding of social situations, develop autonomy and learn appropriate behavior. Traditionally, a Social Story is a short narrative that uses paper sheets, animations, or videos to describe a social situation of everyday life (e.g., “going to school”, “visiting a museum”, “shopping at the supermarket”). In our research, we exploit Wearable Immersive Virtual Reality (WIVR) technology to create a novel form of social story called the Wearable Immersive Social Story (WISS). The paper describes the design process, performed in collaboration with NDD experts, leading to the definition of WISS. We also describe an authoring tool that enables therapists to develop WISSes and to personalize them for the specific needs of each person with NDD.

Franca Garzotto, Mirko Gelsomini, Vito Matarazzo, Nicolo’ Messina, Daniele Occhiuto
An AAC System Designed for Improving Behaviors and Attitudes in Communication Between Children with CCN and Their Peers

Visual aids are widely used in augmentative and alternative communication (AAC) for individuals with pervasive developmental and intellectual disabilities. To satisfy their complex communication needs, a variety of AAC systems have been developed as mobile applications (apps). The effectiveness of these apps mainly relies on the abilities of communication peers. Persuasive technology is aimed at changing behaviors and attitudes. In order to increase the frequency of presenting visual aids with verbal messages, we applied persuasive principles in designing the mobile AAC app named “STalk2.” The app is capable of recognizing voice and presenting visual aids stored in a local database and/or retrieved by image search on the web; it also monitors communication activities. In this study, we examined the effects of using STalk2 on the behaviors and attitudes of five children with CCN and eleven of their peers. Special attention was paid to analyzing questionnaires, diaries, and video recordings obtained from peers. The results suggest that persuasive technology in AAC systems may be effective in improving communication behaviors and attitudes.

Tetsuya Hirotomi
Teaching Concepts with Wearable Technology: Learning Internal Body Organs

In this study, a wearable smart cloth was designed and developed for children with intellectual disabilities (IDs) to help them learn the names and positions of internal body organs. To this end, five plush organs (heart, lungs, stomach, liver and intestines) that can interact with the smart cloth were designed. Additionally, an application that provides animated characters, feedback, visual cues, and sounds, and that interacts with the smart cloth by controlling its sensors, was developed and utilized during the implementation. Participants of the study were four students from a private special education school in Turkey. As a research methodology, a single-subject research method was employed, and the data were collected via field notes and video recordings. Results of the study showed that students with IDs can use the smart cloth and that it can help them learn the names and positions of internal body organs. Moreover, the animated characters can hold their attention, and students with IDs can complete instructions on their own.

Ersin Kara, Mustafa Güleç, Kürşat Çağıltay
The Utility of the Virtual Reality in Autistic Disorder Treatment

Patients with autistic disorder lack social communication abilities and need interventional therapy to alleviate such symptoms. The cost of health care and treatment across a patient's lifespan can be up to $3.2 million, which places a crushing burden on patients and their families. To relieve the symptoms of the disease and reduce the financial pressure on patients, many methods have been proposed. Conventional therapy proceeds under the instruction of professional doctors in hospital, and each person needs to spend 6–8 h in specialized institutions. Given the cost in money and time, treatment often cannot be carried out continuously, which can reduce its curative effectiveness. The current study explores the utility of virtual reality interventions for patients with autistic disorder. In the virtual environment, patients can receive treatment continuously and practice their social communication abilities in different social scenes. To generate an immersive virtual social environment, a VR engine (Unity3D 5.0) was used. Some typical social communication scenes were also established, including a classroom, a shopping mall and a hospital. In these virtual scenes, the ASD patients were required to communicate with artificial intelligence (AI) players and finish some tasks. The coach, played by a researcher or expert, sends appropriate instructions to help the patient when difficulties are encountered. Two checklists, the Autism Behavior Scale (ABS) and the Childhood Autism Rating Scale (CARS), are collected twice: once before and once after the training. By comparing the scores achieved at the two points in time, researchers can assess the result of treatment and adjust its content in time. Four children with confirmed ASD diagnoses from a clinical doctor took part in this experiment; informed consent was obtained from the parents before participation. The average age of the subjects was 6 (±1). The volunteers were asked to execute nine tasks in different social scenes, which include communicating with unfamiliar teachers, sellers and doctors. All tasks have three levels: in level 1, only one AI player is in the scene; in level 2, two AI players; and in level 3, no fewer than three AI players. Which scenes and levels a patient experiences is controlled by the researchers. According to the results, the scores of the ASD patients improved after the training. Such results suggest that VR technology could be very helpful as an adjuvant therapy for ASD.

Sicong Liu, Yan Xi, Hui Wang
A Data-Driven Mobile Application for Efficient, Engaging, and Accurate Screening of ASD in Toddlers

Early detection of Autism Spectrum Disorder (ASD) followed by targeted intervention has been shown to yield meaningful improvements in outcomes for individuals with ASD. However, despite the potential to curtail developmental delays, constrained clinical resources and barriers to access for some populations prevent many families from obtaining these services. In response, we have developed a tablet-based ASD screening tool called Autoscreen that uses machine learning methods and a data-driven design, with the ultimate goal of efficiently triaging toddlers with ASD concerns based on an engaging and non-technical administration procedure. The current paper describes the design of Autoscreen as well as a pilot evaluation to assess the feasibility of the novel approach. Preliminary results suggest the potential for robust risk classification (i.e., F1 score = 0.94; an illustrative computation of this metric follows the author listing), adequate levels of usability based on the System Usability Scale (M = 87.19, 100-point scale), and adequate levels of acceptability based on a novel instrument called ALFA-Q (M = 85.94, 100-point scale). These results, combined with participant feedback, will be used to improve Autoscreen prior to evaluation with the target population of toddlers with concerns for ASD.

Arpan Sarkar, Joshua Wade, Amy Swanson, Amy Weitlauf, Zachary Warren, Nilanjan Sarkar
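For readers unfamiliar with the F1 score cited above, the sketch below computes it in the standard way as the harmonic mean of precision and recall derived from a confusion matrix; the counts are invented purely to show the computation and are unrelated to the Autoscreen evaluation.

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented counts purely to show the computation.
print(round(f1_score(tp=47, fp=3, fn=3), 2))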
An Interactive Cognitive-Motor Training System for Children with Intellectual Disability

It is increasingly evident that engaging in regular physical activity is important for people’s health and well-being. However, physical training is still a big challenge for individuals with cognitive disabilities, since it is difficult to motivate them and provide them with sustained, pleasant training experiences over time. Active video games and exergames may help achieve this, especially in the younger population. This paper describes an accessible Interactive Cognitive-Motor Training system (ICMT) created to encourage physical activity in children with cognitive disabilities by combining cognitive and gross motor training. The system was developed at low cost on top of an open-source rhythm game, which has built-in support for dance pads and large video screens. The application employs user profiling in order to deliver personalized training. Performance data are recorded for further analysis to verify the training’s efficacy and, if needed, to tune the intervention. A pilot study showed the effectiveness of the proposed system, which, by taking advantage of the positive effects of playing video games, appears to encourage physical activity in cognitively impaired people.

Caterina Senette, Amaury Trujillo, Erico Perrone, Stefania Bargagna, Maria Claudia Buzzi, Marina Buzzi, Barbara Leporini, Alice Elena Piatti
A Robot-Based Cognitive Assessment Model Based on Visual Working Memory and Attention Level

Vocational assessment is the process of identifying and assessing an individual’s level of functioning in relation to vocational preparation. In this research, we have designed a framework to evaluate and train the visual working memory and attention level of users by using a humanoid robot and a brain headband sensor. The humanoid robot generates a sequence of colors, and the user performs the task by arranging colored blocks in the same order. In addition, a task-switching paradigm is used to switch between tasks and colors so that the robot gives the user new instructions. The humanoid robot provides guidance and error-detection information, observes the users’ performance during the assessment and gives them instructive feedback. This research describes the profile of cognitive and behavioral characteristics associated with visual working memory skills and selective attention, and ways of supporting the learning needs of workers affected by this problem. Finally, the research draws conclusions about the relationships between visual working memory and attention level during the different levels of the assessment.

Ali Sharifara, Ashwin Ramesh Babu, Akilesh Rajavenkatanarayanan, Christopher Collander, Fillia Makedon
Effects of E-Games on the Development of Saudi Children with Attention Deficit Hyperactivity Disorder Cognitively, Behaviourally and Socially: An Experimental Study

Attention Deficit Hyperactivity Disorder (ADHD) is a disorder characterized by a set of behavioural traits such as inattentiveness, hyperactivity and/or impulsiveness. It can affect people of different intellectual abilities, and it may affect their academic performance, social skills and, generally, their lives. Usually, symptoms are not clearly recognized until the child enters school; most cases are identified between the ages of 6 and 12. In the Kingdom of Saudi Arabia (KSA), ADHD is a widespread disorder among young children, who typically suffer from distraction, lack of focus and hyperactivity, which reduce their academic achievement. Technology has been used in classrooms to facilitate information delivery for students and to make learning fun, and some of these technologies have been applied in many schools in KSA with typically developing students, but no studies had been reported at the time of writing this paper. Specifically, there are no studies on using any type of technology to help Saudi students with ADHD catch up with their peers academically. Our focus in this study is therefore to investigate the effect of using technology, particularly e-games, to improve Saudi children with ADHD cognitively, behaviourally and socially, as well as to evaluate those children’s interaction with the game interface. The investigation was carried out by exploring interaction with web-based games running on tablets. The respondents were 17 children with ADHD, aged 6–12, in classroom settings. The study focuses on game interfaces that stimulate different executive functions in the brain, which are responsible for the most important cognitive capacities, such as sustained attention, working memory, and speed of processing. An ethnographic research method was used, which involved observing students’ behaviour in the classroom to gather information and feedback about their interaction with the application. National Institutes of Health (NIH) tests were used pre- and post-intervention to measure improvements in attention, processing speed and working memory. Students’ test scores in the main school subjects were taken pre- and post-intervention to measure enhancement in academic performance. Results show that using the application significantly improved participants’ cognitive capacities, which affected their academic grades in Math, English and Science, and also had a positive influence on their behaviour. In addition, the application’s interface was found to be easy to use and subjectively pleasing. In conclusion, the application was considered effective and usable.

Doaa Sinnari, Paul Krause, Maysoon Abulkhair
Audiovisual Design of Learning Systems for Children with ASD

The increasing use of information and communication technologies for children with Autism Spectrum Disorder (ASD) establishes a complex media, interaction and learning scenario not currently supported by models of communication, computation, and pedagogy. Moving forward on this theme, this work proposes a model of interactions for problem description, planning, and production of systems for teaching skills and abilities to children with ASD. The proposed model arises from a theoretical integration of the Audiovisual Design framework with the Taxonomy of Instructional Objectives. In addition to this theoretical input, a systematic review of the state of the art reveals that media are shared among individuals with ASD, family members, and clinical and educational professionals. Four stages of interaction were identified. In this way, the research contributes to teaching strategies that integrate levels of interaction across the cognitive, affective and psychomotor domains, informing the generation of content and adaptive systems for the needs of children with ASD.

Rafael Toscano, Valdecir Becker
Assisting, Not Training, Autistic Children to Recognize and Share Each Other’s Emotions via Automatic Face-Tracking in a Collaborative Play Environment

One of the core characteristics of Autism Spectrum Disorder (ASD) is the presence of early and persistent impairments in social-communicative skills; among its diagnostic characteristics, difficulties in recognizing faces and interpreting facial emotions have been reported at all stages of development. To date, the overwhelming majority of previous work has focused on training children with ASD in emotion recognition, mostly via face perception and learning. Few published works have attempted to design assistive tools that help these children recognize the emotions expressed by one another and make the emotion labels mutually visible, which motivates the present study. Drawing on results from our previous work, in this paper we offer a collaborative play environment that informs autistic children of each other’s emotions, with the aim of engaging them happily and with much less stress. Emotion recognition is accomplished through a mounted motion-capture camera that captures users’ facial landmark data and generates emotion labels accordingly.
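
The abstract states that facial landmark data are mapped to emotion labels but does not describe the classifier; the Python sketch below illustrates one simple possibility, a nearest-centroid rule over a small landmark-derived feature vector (the landmark names, features and centroid values are illustrative assumptions, not the authors’ pipeline).

import numpy as np

# Illustrative per-emotion centroids in a 2-D feature space:
# (mouth-corner distance, eyebrow-to-eye distance), both normalised by face width.
CENTROIDS = {
    "happy":   np.array([0.55, 0.20]),
    "neutral": np.array([0.45, 0.22]),
    "sad":     np.array([0.40, 0.18]),
}

def extract_features(landmarks):
    # landmarks: dict of named (x, y) points from the face tracker.
    face_width = np.linalg.norm(landmarks["jaw_left"] - landmarks["jaw_right"])
    mouth = np.linalg.norm(landmarks["mouth_left"] - landmarks["mouth_right"]) / face_width
    brow = np.linalg.norm(landmarks["left_brow"] - landmarks["left_eye"]) / face_width
    return np.array([mouth, brow])

def label_emotion(landmarks):
    # Assign the label of the closest emotion centroid.
    features = extract_features(landmarks)
    return min(CENTROIDS, key=lambda e: np.linalg.norm(features - CENTROIDS[e]))

example = {
    "jaw_left": np.array([0.0, 0.5]), "jaw_right": np.array([1.0, 0.5]),
    "mouth_left": np.array([0.22, 0.75]), "mouth_right": np.array([0.78, 0.75]),
    "left_brow": np.array([0.3, 0.18]), "left_eye": np.array([0.3, 0.38]),
}
print(label_emotion(example))   # -> "happy" for this wide-mouth example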

Pinata Winoto, Tiffany Y. Tang, Xiaoyang Qiu, Aonan Guan
Research on the Interactive Design of Wearable Devices for Autistic Children

According to the “Development report on China’s autism education and rehabilitation industry”, the number of autistic children aged 0–14 in China may exceed 2 million. With the development of science and technology, autistic children have drawn increasing attention, and their treatment is becoming more and more diverse. Because wearable products are portable, real-time and interconnected, they have advanced rapidly in recent years and have been applied to the medical treatment of autistic children. However, the impact of existing wearable medical products on patients’ bodies and minds is not comprehensive, and there is no effective system of interaction design specifications. Therefore, further studying and improving the interaction modes of wearable devices for autistic children has important theoretical value and practical significance. This paper first studies the physiological and psychological characteristics of autistic children and the obstacles they encounter in their lives, through literature review, interviews and observation. Next, the classification and characteristics of wearable devices are generalized, and the characteristics and elements of interaction design for wearable devices are summarized. In addition, the paper carries out a detailed case analysis of current wearable products and summarizes their shortcomings. Finally, a system network diagram for wearable-device medical services for autistic children is proposed, combining the interactive information flow of the wearable-device service system with a child-centred approach; the paper concludes by summarizing interaction design specifications for wearable devices for autistic children and proposing feasible ideas and suggestions for interaction design.

Minggang Yang, Xuemei Li
Understanding Fine Motor Patterns in Children with Autism Using a Haptic-Gripper Virtual Reality System

Many children with Autism Spectrum Disorder (ASD) experience deficits in fine motor skills compared to their typically developing (TD) peers. It is possible that differences in the fine motor patterns of children with ASD may provide useful insight into the clinical diagnosis of and intervention for ASD. This paper presents a preliminary study that used machine learning approaches to recognize the motor patterns exhibited by children with ASD based on fine motor data obtained during carefully designed manipulation tasks in a virtual haptic environment. Six children with ASD and six TD children (aged 8–12) participated in a study that presented a series of fine motor tasks using a novel Haptic-Gripper virtual reality system. The results revealed that the identification accuracy of several machine learning approaches, such as k-Nearest Neighbor (k-NN) and Artificial Neural Networks (ANN), is encouraging and can reach up to 80%, indicating the potential of such an approach for ASD identification and intervention.
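
The paper names k-NN among the classifiers reaching up to 80% accuracy; with only twelve participants, a typical evaluation protocol is leave-one-out cross-validation, sketched below in Python on synthetic stand-in features (the feature set is an assumption; the paper’s actual features come from the haptic-gripper manipulation tasks).

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Stand-in feature matrix: 12 children x 4 fine-motor features
# (e.g., mean grip force, force variability, movement time, path deviation).
X = rng.normal(size=(12, 4))
y = np.array([1] * 6 + [0] * 6)          # 1 = ASD group, 0 = TD group

clf = KNeighborsClassifier(n_neighbors=3)

# Leave-one-out: train on 11 participants, test on the held-out one, repeat.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.2f}")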

Huan Zhao, Amy Swanson, Amy Weitlauf, Zachary Warren, Nilanjan Sarkar
Evaluating the Accessibility of Scratch for Children with Cognitive Impairments

Research on the use of interactive media as learning tools for children with cognitive impairments has focused mainly on employing pre-designed content rather than constructing new content. Visual programming tools could potentially provide cognitively impaired children with a platform that enables them to create their own interactive media. However, very little is known about the accessibility of these tools. This study uses a novel approach to evaluate the accessibility of Scratch (a visual programming tool) for children with cognitive impairments by employing a Grounded Theory research method. The study was conducted with 9 participants, 2 special education teachers and 7 cognitively impaired children, over a period of ten weeks. The children’s usage of Scratch was documented through screen capturing. In addition, semi-structured interviews were conducted with the two teachers. Grounded Theory based analysis was performed using QSR NVivo, which led to the identification of accessibility issues, causal conditions, contexts, strategies employed to tackle issues, and consequences. Thus, the findings of this research contribute to existing knowledge on the accessibility of visual programming tools and elucidate the experience of cognitively impaired children while using them.

Misbahu S. Zubair, David Brown, Thomas Hughes-Roberts, Matthew Bates
Backmatter
Metadata
Title: Universal Access in Human-Computer Interaction. Methods, Technologies, and Users
Edited by: Prof. Margherita Antona, Prof. Constantine Stephanidis
Copyright Year: 2018
Electronic ISBN: 978-3-319-92049-8
Print ISBN: 978-3-319-92048-1
DOI: https://doi.org/10.1007/978-3-319-92049-8
