
2019 | Book

Universal Access in Human-Computer Interaction. Multimodality and Assistive Environments

13th International Conference, UAHCI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, Part II


About this book

This two-volume set constitutes the proceedings of the 13th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2019, held as part of the 21st International Conference, HCI International 2019, which took place in Orlando, FL, USA, in July 2019.
The total of 1274 papers and 209 posters included in the 35 HCII 2019 proceedings volumes was carefully reviewed and selected from 5029 submissions.
UAHCI 2019 includes a total of 95 regular papers; they were organized in topical sections named: universal access theory, methods and tools; novel approaches to accessibility; universal access to learning and education; virtual and augmented reality in universal access; cognitive and learning disabilities; multimodal interaction; and assistive environments.

Table of Contents

Frontmatter

Cognitive and Learning Disabilities

Frontmatter
A Collaborative Talking Assistive Technology for People with Autism Spectrum Disorders

Autism spectrum disorders (ASD) are characterized by difficulties with socialization, disorders of verbal communication, and restricted and stereotyped patterns of behavior. The paper first reports the user-centered design (UCD) tools used, as well as the participants involved, in the design of an interactive collaborative system for children with ASD. Then, we describe the UCD process deployed to design a vocal communication tool (VCT) between an adult with ASD and his family caregivers. The analyses of interviews demonstrate a strong need for a collaborative assistive technology based on voice interaction, both to spare the family caregivers from repeating the same sentences to the adult with ASD and to create a friendly atmosphere at home. Observations in a real-life environment demonstrate that the VCT is useful and accepted by the adult with ASD and his family. The work is not complete, and issues such as designing a spoken dialogue system in the smart home need further work. The choice of voice synthesis type (recorded human speech or text-to-speech synthesis) also remains an open question.

Wajih Abdallah, Frédéric Vella, Nadine Vigouroux, Adrien Van den Bossche, Thierry Val
Usability Enhancement and Functional Extension of a Digital Tool for Rapid Assessment of Risk for Autism Spectrum Disorders in Toddlers Based on Pilot Test and Interview Data

Early accurate identification and treatment of young children with Autism Spectrum Disorder (ASD) represents a pressing public health and clinical care challenge. Unfortunately, large numbers of children are still not screened for ASD, waits for specialized diagnostic assessment can be very long, and the average age of diagnosis in the US remains between 4 and 5 years. In a step towards meaningfully addressing this issue, we previously developed Autoscreen: a digital tool for accurate and time-efficient screening, diagnostic triage, referral, and treatment engagement of young children with ASD concerns within community pediatric settings. In the current work, we significantly improve upon and expand Autoscreen based on usability data and interview data collected in a pilot investigation of pediatric healthcare providers using Autoscreen. The enhanced version of Autoscreen addresses limitations of the previous tool, such as scalability, and introduces important new features based on rigorous interviews with the target user population. Once validated on a large sample, Autoscreen could become an impactful tool for early ASD screening and targeted referral in primary care settings. The comprehensively enhanced tool described in the current work will enable the investigative team to achieve this goal.

Deeksha Adiani, Michael Schmidt, Joshua Wade, Amy R. Swanson, Amy Weitlauf, Zachary Warren, Nilanjan Sarkar
Understanding How ADHD Affects Visual Information Processing

Attention Deficit Hyperactivity Disorder (ADHD) is a condition characterized by impulsivity, age-inappropriate attention, and hyperactivity. ADHD is one of the most prevalent disorders among children, and for a significant number of children whose condition persists into adulthood, it leads to poor social and academic performance. In this paper, we present preliminary results of an experiment that investigates how ADHD affects visual information processing under three information presentation methods (textual, graphical, and tabular). The efficiency and accuracy of both the neurotypical group and the group with ADHD were significantly impacted by the different information presentation methods. However, the two groups showed different patterns in their perceived interaction experience with the three presentation methods. The results provide insights that might help designers and educators develop or adopt more effective information representations for people with ADHD.

Yahya Alqahtani, Michael McGuire, Joyram Chakraborty, Jinjuan Heidi Feng
Attention Assessment: Evaluation of Facial Expressions of Children with Autism Spectrum Disorder

Technological interventions for teaching children with autism spectrum disorder (ASD) are becoming popular due to their potential for sustaining children's attention with rich multimedia and repetitive functionalities. The degree of attentiveness to these technological interventions differs from one child to another due to variability in the spectrum. Therefore, an objective approach, as opposed to subjective attention assessment, becomes essential for automatically monitoring attention in order to design and develop adaptive learning tools, as well as to support caregivers in evaluating learning tools. The analysis of facial expressions recently emerged as an objective method of measuring the attention and participation levels of typical learners. However, few studies have examined the facial expressions of children with ASD during an attention task. Thus, this study aims to evaluate existing facial expression parameters developed by Affectiva, a commercial engagement-level measurement tool. We conducted fifteen experimental scenarios of 5 min each with 4 children with ASD and 4 typically developing (TD) children with an average age of 8.8 years. A desktop virtual reality continuous performance task (VR-CPT) served as the attention stimulus, and a webcam was used to stream real-time facial expressions. All the participants scored above average in the VR-CPT, and the performance of the TD group was better than that of the ASD group. While 3 out of 10 facial expressions were prominent in both groups, the ASD group showed an additional facial expression. Our findings show that facial expressions could serve as a biomarker for measuring attention and differentiating between the groups.

Bilikis Banire, Dena Al Thani, Mustapha Makki, Marwa Qaraqe, Kruthika Anand, Olcay Connor, Kamran Khowaja, Bilal Mansoor
Improving Usability of a Mobile Application for Children with Autism Spectrum Disorder Using Heuristic Evaluation

Autism Spectrum Disorder (ASD) is a complex clinical condition that includes social, behavioral, and communication deficits. As ASD prevalence rises significantly, the number of tools for computer-assisted intervention increases proportionally, as confirmed by the growth of the literature addressing the issue. The development of autism-specific software is far from straightforward: it often requires a user-centered approach, a cross-functional team, and a primary focus on usability and accessibility. One of the most popular methods for finding usability problems is heuristic evaluation, in which a group of experts tests the user interface and provides feedback based on predetermined acceptance criteria. This paper thus reports on the assessment of a mobile application for autistic individuals using heuristic evaluation. The software under evaluation, prototyped in a previous study, addresses organization and behavioral patterns in children with ASD. Through the heuristic evaluation, improvements could be made to the application. Lessons learned from the evaluation process include recommendations to help with the selection of methods and materials, the conduct of the evaluation, and the definition of the follow-up strategy. By describing the method stepwise and sharing lessons learned, we aim to provide knowledgeable insights for development teams handling autism-specific software.

Murilo C. Camargo, Tathia C. P. Carvalho, Rodolfo M. Barros, Vanessa T. O. Barros, Matheus Santana
Learning About Autism Using VR

This paper describes a project carried out at the University of Malta, merging the digital arts and information technologies. The project, 'Living Autism', uses virtual reality (VR) technologies to portray daily classroom events as seen through the eyes of a child diagnosed with autism. The immersive experience is proposed as part of the professional development program for teachers and learning support assistants in primary classrooms, to aid in the development of empathy toward children with autism. The VR experience for mobile technologies was also designed in line with user experience (UX) guidelines, to help the user assimilate and associate the projected experiences into newly formed memories of an unfamiliar living experience. Living Autism is framed within a 4-minute audio-visual interactive project and has been piloted across a number of schools in Malta with 300 participants. The qualitative results indicate that the project had a positive impact on the participants, with 85% reporting that they became more aware of autistic children's needs in the primary classroom.

Vanessa Camilleri, Alexiei Dingli, Foaad Haddod
Breaking Down the “Wall of Text” - Software Tool to Address Complex Assignments for Students with Attention Disorders

One undergraduate student's strategy for dealing with long assignment instructions is to black out all of the information they deem unimportant in the text, allowing them to focus only on the "important" information. While this technique may work well on paper, it does not naturally transition into a digital format. The student in this example also identifies as having an attention disorder. In this paper we introduce a Microsoft Word add-in that enables the user to black out selected text using a new menu. Participants used the add-in to mark up a sample assignment and were then asked in a post-questionnaire to provide feedback on their experience using the tool. Separately, we conducted a survey asking undergraduate students about their current strategies for understanding long assignment instructions and why those strategies work for them. We then discuss their responses and compare them to the results of the aforementioned case study.

Breanna Desrochers, Ella Tuson, Syed Asad R. Rizvi, John Magee
Feel Autism VR – Adding Tactile Feedback to a VR Experience

Feel Autism VR is a new virtual reality (VR) system that builds upon an Autism VR system developed in the past year. The aim of the original system was to design a novel VR environment to raise users' awareness of autism and, in so doing, increase their empathy towards autistic children. We sought to create a VR environment that provides total immersion to its users. A touching-without-feeling technique was used to send an ultrasound signal to the user's body when the VR experience displays a touching scenario. Sound, vision, and virtual touch elements can increase the user's sense of the presence of the autistic child. A novel approach has been implemented that allows users to physically feel tactile feedback without being touched. In so doing, we can recreate the annoyance felt by an autistic child throughout the day. Ultrasound waves are generated by an ultrasound speaker, which sends the waves through the air to create pressure that simulates a real-life event. This concept is inserted into the narrative of the original VR Autism project in order to mimic physical touching between an autistic child and his classmates, teachers, etc. The results were very promising, with 50% of users declaring that, after the experience, they were in a better position to understand children with autism.

Foaad Haddod, Alexiei Dingli, Luca Bondin
Caregivers’ Influence on Smartphone Usage of People with Cognitive Disabilities: An Explorative Case Study in Germany

Intuitive handling, mobile internet access, and a large number of applications make smartphones extremely popular devices. Smartphones promise particularly high potential for various marginalized groups. This explorative case study examines formal caregivers' attitudes towards smartphone usage and internet access by people with cognitive disabilities. Due to the close relationship with their clients, it is assumed that caregivers support or prevent smartphone usage by people with cognitive disabilities depending on their attitudes and experiences. The aim of this study is to examine which particular factors influence caregivers' attitudes towards smartphone usage. Twenty-four semi-structured interviews with formal caregivers were conducted between January and December 2018 in Germany. This paper discusses the main findings against the background of psychological and technological theories of technology acceptance and personal growth, including self-determination theory.

Vanessa N. Heitplatz, Christian Bühler, Matthias R. Hastall
The PTC and Boston Children’s Hospital Collaborative AR Experience for Children with Autism Spectrum Disorder

Minimally verbal children with Autism Spectrum Disorder often face challenges in the areas of language, communication, and organization. Augmented Reality (AR) may provide a valuable technique to enhance language learning as well as navigating tasks and activities. The "AR experience," developed by PTC Inc. with Boston Children's Hospital, was a collaborative effort to provide a working tool for use in studies regarding the effectiveness of AR as a teaching tool for minimally verbal children with Autism Spectrum Disorder. The purpose of this paper is to (a) describe the development of the application using the "Design Thinking Process," (b) describe the features of the resulting AR experience, and (c) present initial evaluation results.

David Juhlin, Chris Morris, Peter Schmaltz, Howard Shane, Ralf Schlosser, Amanda O’Brien, Christina Yu, Drew Mancini, Anna Allen, Jennifer Abramson
Design of an Intelligent and Immersive System to Facilitate the Social Interaction Between Caregivers and Young Children with Autism

Children with autism spectrum disorder (ASD) have core deficits in social interaction skills. Intelligent technological systems have been developed to help children with ASD develop social interaction skills such as response to name (RTN), response to joint attention (RJA), initiation of joint attention (IJA), and imitation. Most existing systems entail human-computer interaction (HCI) or human-robot interaction (HRI), in which participants interact with the system to elicit certain social behaviors or practice certain social skills. However, because the robot/computer is the only therapeutic factor in HRI/HCI systems, this may result in an isolation effect. Therefore, in this work, an intelligent and immersive computer system is proposed that lets caregivers and their young children with ASD interact with each other and helps develop social skills (RTN and IJA). In this computer-assisted human-human interaction (HHI) setting, caregivers deliver social cues to participants (young children with ASD) and give decision-making signals to the system. The system also provides different non-social cues to help caregivers elicit and reinforce the social behaviors of participants. By including a caregiver in the loop, we hope to ameliorate the isolation effect by creating a more real-world HHI scenario. In this paper, we show the feasibility of the proposed system and validate its potential effectiveness with both subjective and objective measurements.

Guangtao Nie, Akshith Ullal, Amy R. Swanson, Amy S. Weitauf, Zachary E. Warren, Nilanjan Sarkar
Taking Neuropsychological Test to the Next Level: Commercial Virtual Reality Video Games for the Assessment of Executive Functions

Virtual reality and video games are increasingly considered potentially effective tools for the assessment of several cognitive abilities, including executive functions. However, thus far, only non-commercial content has been tested, and virtual reality content and video games have been investigated separately. Within this context, this study aimed to explore the effectiveness, for the assessment of executive functions, of a new type of interactive content - commercial virtual reality games - which combines the advantages of virtual reality with those of commercial video games. Thirty-eight participants completed the Trail Making Test, a traditional and commonly used assessment of executive functions, and then played the virtual reality game Audioshield using an HTC Vive system. Scores on the Trail Making Test (i.e., time to complete parts A and B) were compared to scores obtained on Audioshield (i.e., number of orbs hit by the players and technical score). The results showed that: (a) performance on the Trail Making Test correlated significantly with performance on the virtual reality video game; and (b) scores on Audioshield can be used as a reliable estimator of Trail Making Test results.

Federica Pallavicini, Alessandro Pepe, Maria Eleonora Minissi
Evaluation of Handwriting Skills in Children with Learning Difficulties

Many children have physical, cognitive, motor, and other limitations that influence their ability to develop handwriting skills. Recently, haptic technology has gained rising interest as an assistive technology to improve the acquisition of handwriting skills for children with learning difficulties. In this paper, we introduce a method and an experimental protocol to evaluate the quality of handwriting for children with learning difficulties. We developed a copy-work task comprising four categories of handwriting tasks, namely numbers, letters, shapes, and emoticons (a total of 32 tasks, covering low- to high-complexity handwriting). Results demonstrated that shapes are more difficult to learn than emoticons, even though emoticons are more complex to construct; this is probably because children are more familiar with emoticons than with abstract shapes. The findings of this study are crucial for developing a longitudinal experimental study to evaluate the effectiveness of various haptic guidance methods for improving learning outcomes for children with learning difficulties.

Wanjoo Park, Georgios Korres, Samra Tahir, Mohamad Eid
“Express Your Feelings”: An Interactive Application for Autistic Patients

Much effort is being put into information technology (IT) to improve the efficiency and quality of communication between autistic children and those around them. This paper presents an application that aims to help autistic children interact with and express their feelings to their loved ones in an easy manner. The major objective of the project is to connect autistic children with their family and friends by providing tools that enable an easy way to express feelings and emotions. To accomplish this goal, an Android app has been developed through which an autistic child can express emotions using emoji; the child's emotions are shared by sending the emoji to relatives. The project aims at high impact within the autistic community by providing a mechanism to share emotions in an "emotionless world". The project was developed under Sustainable Development Goal (SDG) 3, good health and well-being, making a meaningful impact on the lives of autistic children.

Prabin Sharma, Mala Deep Upadhaya, Amrit Twanabasu, Joao Barroso, Salik Ram Khanal, Hugo Paredes
The Design of an Intelligent LEGO Tutoring System for Improving Social Communication Skills Among Children with Autism Spectrum Disorder

A system intended to help children with autism spectrum disorder (ASD) play with LEGO bricks is proposed. The system can provide step-by-step guidance to complete pre-defined tasks using an interactive dialog strategy. A camera is used to capture the bricks, and an image recognition module is implemented to identify the color and size of each brick. The system can also detect a misplaced brick, provide guidance to reassemble it, and suggest a correct one. Dialog is generated using a speech synthesizer over a set of pre-defined statements used in daily social communication with the children. To further enrich the interaction between children and the system, teachers may intervene remotely in real time to alter existing statements in the system.
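As an illustration of the recognition step described in this abstract, the following is a minimal sketch of color-and-size brick detection using HSV thresholding with OpenCV; the color ranges, the minimum contour area, and the mapping from contour area to brick size are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: classify LEGO brick color and approximate size from a
# camera frame using HSV thresholding and contour area (illustrative only).
import cv2
import numpy as np

# Illustrative HSV ranges; real thresholds must be calibrated per camera.
COLOR_RANGES = {
    "red":    ((0, 120, 70), (10, 255, 255)),
    "yellow": ((20, 120, 70), (35, 255, 255)),
    "blue":   ((100, 120, 70), (130, 255, 255)),
}

def detect_bricks(frame_bgr, min_area=500):
    """Return a list of (color, area, bounding_box) for detected bricks."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    bricks = []
    for color, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            area = cv2.contourArea(c)
            if area >= min_area:  # ignore small noise blobs
                bricks.append((color, area, cv2.boundingRect(c)))
    return bricks
```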

Qiming Sun, Pinata Winoto
An Augmented Reality-Based Word-Learning Mobile Application for Children with Autism to Support Learning Anywhere and Anytime: Object Recognition Based on Deep Learning

Abundant earlier controlled studies have underscored the importance of early diagnosis and intervention in autism. Over the past several years, thanks to technological advances, we have witnessed a large number of technology-based teaching and learning applications for children with autism. Among them, augmented reality (AR)-based applications have recently gained much attention due to their unique benefit of providing multiple learning stimuli to these children through kinesthetic movement with nothing more than a mobile device. Despite this, few have been developed for young children in China, which motivates our study. In particular, we present a mobile vocabulary-learning application for Chinese autistic children, designed especially for outdoor and home use. The core object recognition module is implemented on the deep learning platform TensorFlow; unlike other sophisticated systems, the algorithm has to run in an offline fashion. We conducted two small-scale pilot studies to assess the system's feasibility and usability with typically developing children, children with autism, their parents, and special education teachers, with very promising and satisfying results. Our studies did suggest that the downside of the application is the performance of the object-recognition module. Therefore, before we further examine the benefits of such AR-based learning tools in clinical settings, it is crucial to fine-tune the algorithm in order to improve its accuracy. Nevertheless, since the literature on AR technology for Chinese word-learning for children with special needs is still in its infancy, our studies offer an early glimpse into the usefulness, usability, and applicability of such AR-based mobile learning applications, particularly for facilitating learning anytime and anywhere.
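Since the paper notes that the TensorFlow-based recognizer must run offline on a mobile device, one plausible shape for such a module is an on-device TensorFlow Lite interpreter; the model file name and label handling below are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of offline, on-device object recognition with a
# TensorFlow Lite model (hypothetical model file and label indexing).
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="word_objects.tflite")  # assumed file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def recognize(image):
    """image: HxWx3 uint8 array already resized to the model's input shape."""
    x = np.expand_dims(image, axis=0).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()  # runs fully on-device, no network access needed
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores))  # index into the app's word/label list
```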

Tiffany Y. Tang, Jiasheng Xu, Pinata Winoto
Design and Evaluation of Mobile Applications for Augmentative and Alternative Communication in Minimally-verbal Learners with Severe Autism

One of the most significant disabilities in autism spectrum disorders (ASD) is a delay in, or total lack of, the development of spoken language. Approximately half of those on the autism spectrum are functionally non-verbal or minimally verbal and will not develop sufficient natural speech or writing to meet their daily communication needs. A suite of evidence-based mobile applications, SPEAKall!® and SPEAKmore!®, was developed to help these individuals achieve critical speech and language milestones. SPEAKall! and SPEAKmore! enable early language learning, facilitate natural speech development, enhance generalization skills, and expand social circles as students learn. These solutions grow with the learner, enabling better participation in school and community, thus reducing the lifetime cost of care while enhancing chances for classroom success. Evidence generation for the newly created applications involved: (a) single-subject experimental designs to evaluate treatment efficacy through repeated measurement of behavior and replication across and within participants; and (b) quantitative electroencephalograms to gain information about brain functioning. The comprehensive approach to evidence generation facilitated the adoption of SPEAKall! and SPEAKmore! in clinical practice. It also allowed identifying critical app features that enhance skill acquisition and contribute to treatment effectiveness.

Oliver Wendt, Grayson Bishop, Ashka Thakar

Multimodal Interaction

Frontmatter
Principles for Evaluating Usability in Multimodal Games for People Who Are Blind

Multimodal video games designed to increase the cognition of people who are blind should be friendly and pleasant to use, rather than adding complexity to the interaction, leading people to acquire cognitive skills while interacting. Specific issues make multimodal usability evaluation different from the evaluation of traditional user interfaces in the context of improving the cognition of people who are blind. In this context, it is necessary to identify how well Usability Evaluation Methods (UEMs) meet the evaluation criteria for assessing multimodal games for people who are blind. In this paper, we conducted an expert opinion survey to analyze how usability evaluation has been done by researchers and practitioners in this field. As a result, we propose the PrincipLes for Evaluating Usability of Multimodal Video Games for People who are Blind (PLUMB), a set of good evaluation practices that should be observed while planning an evaluation. This paper builds on the literature about how multimodal features affect the interaction of people who are blind with multimodal interfaces by focusing on their practical evaluation.

Ticianne Darin, Rossana Andrade, Jaime Sánchez
A Low Resolution Haptic Interface for Interactive Applications

This paper introduces a novel haptic interface for use as a general-purpose sensory substitution device, called the Low Resolution Haptic Interface (LRHI). A prototype of the LRHI was developed and tested in a user study for its effectiveness in conveying information through the sense of touch as well as for use in interactive applications. Results are promising, showing that participants were able to accurately discriminate a range of both static and dynamic haptic patterns using the LRHI, with a composite accuracy of 98.38%. The user study also showed that participants were able to successfully learn to play a completely haptic interactive cat-mouse game with the device.

Bijan Fakhri, Shashank Sharma, Bhavica Soni, Abhik Chowdhury, Troy McDaniel, Sethuraman Panchanathan
A Fitts’ Law Evaluation of Hands-Free and Hands-On Input on a Laptop Computer

We used the Fitts' law two-dimensional task in ISO 9241-9 to evaluate hands-free and hands-on point-select tasks on a laptop computer. For the hands-free method, we required a tool that can simulate the functionality of a mouse to point and select without touching the device. We used face-tracking software called Camera Mouse in combination with dwell-time selection. This was compared with three hands-on methods: a touchpad with dwell-time selection, a touchpad with tap selection, and face tracking with tap selection. For hands-free input, throughput was 0.65 bps. The other conditions yielded higher throughputs, the highest being 2.30 bps for the touchpad with tap selection. The hands-free condition demonstrated erratic cursor control with frequent target re-entries before selection, particularly for dwell-time selection. Subjective responses were neutral or slightly favourable for hands-free input.
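For readers unfamiliar with how throughput figures such as 0.65 bps or 2.30 bps are obtained, the sketch below follows the standard ISO 9241-9 effective-throughput computation (simplified to use the nominal target distance rather than the effective distance); it is a generic illustration, not the authors' analysis code.

```python
# Sketch of the ISO 9241-9 "effective" throughput computation used in
# Fitts' law point-select studies.
import math
import statistics

def throughput(distance, selection_offsets, movement_times):
    """distance: nominal target distance (pixels);
    selection_offsets: signed endpoint deviations along the task axis;
    movement_times: per-trial movement times in seconds."""
    sd_x = statistics.stdev(selection_offsets)
    we = 4.133 * sd_x                      # effective target width
    ide = math.log2(distance / we + 1.0)   # effective index of difficulty (bits)
    mt = statistics.mean(movement_times)   # mean movement time (s)
    return ide / mt                        # throughput in bits per second
```

Throughput is computed per participant and condition and then averaged, which is what makes it comparable across the dwell-time and tap-selection conditions reported above.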

Mehedi Hassan, John Magee, I. Scott MacKenzie
A Time-Discrete Haptic Feedback System for Use by Persons with Lower-Limb Prostheses During Gait

Persons with lower-limb amputations experience limited tactile knowledge of their prostheses due to the loss of sensory function from their limb. This sensory deficiency has been shown to contribute to improper gait kinematics and impaired balance. A novel haptic feedback system has been developed to address this problem by providing the user with center of pressure information in real-time. Five piezoresistive force sensors were adhered to an insole corresponding to critical contact points of the foot. A microcontroller used force data from the insole to calculate the center of pressure, and drive four vibrotactile pancake motors worn in a neoprene sleeve on the medial thigh. Center of pressure information was mapped spatially from the plantar surface of the foot to the medial thigh. Human perceptual testing was conducted to determine the efficacy of the proposed haptic display in conveying gait information to the user. Thirteen able-bodied subjects wearing the haptic sleeve were able to identify differences in the speed of step patterns and to classify full or partial patterns with (92.3 ± 2.6)% and (94.9 ± 2.1)% accuracy respectively. The results suggest that the system was effective in communicating center of pressure information through vibrotactile feedback.
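A minimal sketch of the core mapping described above: computing a center of pressure from five insole force sensors and selecting one of four thigh-worn motors. The sensor positions and the discretization scheme are illustrative assumptions, not the paper's calibration.

```python
# Sketch: anterior-posterior center of pressure (COP) from five insole
# force sensors, mapped to four vibrotactile motors on the thigh.
import numpy as np

# Normalized sensor positions along the foot, heel (0.0) to toe (1.0);
# illustrative values, not the paper's sensor placement.
SENSOR_POS = np.array([0.05, 0.35, 0.65, 0.80, 0.95])

def center_of_pressure(forces):
    """forces: array of 5 non-negative sensor readings."""
    forces = np.asarray(forces, dtype=float)
    total = forces.sum()
    if total == 0:
        return None  # foot not in contact with the ground
    return float((forces * SENSOR_POS).sum() / total)  # force-weighted position

def motor_for_cop(cop, n_motors=4):
    """Map a COP in [0, 1] to one of n spatially arranged motors."""
    return min(int(cop * n_motors), n_motors - 1)
```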

Gabe Kaplan, Troy McDaniel, James Abbas, Ramin Tadayon, Sethuraman Panchanathan
Quali-Quantitative Review of the Use of Multimodal Interfaces for Cognitive Enhancement in People Who Are Blind

Visual disability has a major impact on people's quality of life. Although there are many technologies to assist people who are blind, most do not necessarily guarantee the effectiveness of their intended use. As part of research under development at the University of Chile since 1996, we investigated interfaces for people who are blind with regard to a gap in cognitive impact. We first performed a systematic literature review concerning the evaluation of the cognitive impact of multimodal interfaces for people who are blind. The papers retrieved in the systematic review revealed a high diversity of experiments; some do not present their data clearly and do not apply statistical methods to support the results. We conclude that there is a need to better plan and present data from experiments on technologies for the cognition of people who are blind. Moreover, we performed a Grounded Theory qualitative data analysis to complement and enrich the systematic review results.

Lana Mesquita, Jaime Sánchez
Statistical Analysis of Novel and Traditional Orientation Estimates from an IMU-Instrumented Glove

This paper outlines the statistical evaluation of novel and traditional orientation estimates from an IMU-instrumented glove. Thirty human subjects participated in the experiment by performing the instructed hand movements in order to compare the performance of the proposed orientation correction algorithm with Kalman-based orientation filtering. The results of a two-way multivariate analysis of variance indicate that there is no statistically significant difference in the means of the orientation errors: Phi (F(1,580) = 0.080, p = .777), Theta (F(1,580) = 2.556, p = .110), and Psi (F(1,580) = 0.049, p = .825) between the orientation correction algorithm using the gravity and magnetic North vectors (GMV) and the correction using Kalman-based orientation filtering (KF). The different hand poses have a statistically significant effect on the orientation errors: Phi (F(9,580) = 129.555, p < .001), Theta (F(9,580) = 85.109, p < .001), and Psi (F(9,580) = 134.474, p < .001). The effect of the two algorithms on the orientation errors is consistent across the different hand poses.

Nonnarit O-larnnithipong, Neeranut Ratchatanantakit, Sudarat Tangnimitchok, Francisco R. Ortega, Armando Barreto, Malek Adjouadi
Modeling Human Eye Movement Using Adaptive Neuro-Fuzzy Inference Systems

The muscles of the eye are difficult to model when building an eye prototype or an interface between eye movements and computers: they require complex mechanical equations to describe their movements, and the voltage signals generated by the eye are not always adequate for classification. However, they are very important for developing human-machine interfaces based on eye movements. Previously, such interfaces have been developed for people with disabilities or used for teaching the anatomy and movements of the eye muscles. However, the eye's electrical signals have low amplitude and sometimes high levels of noise. Hence, artificial neural networks and fuzzy logic systems are implemented using an ANFIS topology to perform this classification. This paper shows how the eye muscles can be modeled and implemented in a concept prototype using an ANFIS topology that is trained using experimental signals from an end user of the eye prototype. The results show excellent performance of the prototype when the ANFIS topology is deployed.
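To make the ANFIS topology concrete, here is a conceptual forward pass of a small first-order Sugeno ANFIS with Gaussian membership functions; all parameter values are illustrative placeholders, not the trained model from the paper, and training (which ANFIS does by hybrid least-squares/backpropagation) is omitted.

```python
# Conceptual forward pass of a two-input, four-rule first-order Sugeno
# ANFIS (illustrative parameters only).
import numpy as np

def gaussmf(x, c, sigma):
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Two inputs (e.g., horizontal/vertical EOG channels), two MFs each.
CENTERS = np.array([[-1.0, 1.0], [-1.0, 1.0]])
SIGMAS  = np.array([[0.8, 0.8], [0.8, 0.8]])
# One linear consequent (p, q, r) per rule; 2 x 2 = 4 rules.
CONSEQ = np.array([[0.5, 0.1, 0.0], [0.2, -0.3, 1.0],
                   [-0.4, 0.6, -1.0], [0.1, 0.1, 0.5]])

def anfis_forward(x1, x2):
    mf1 = gaussmf(x1, CENTERS[0], SIGMAS[0])   # layer 1: fuzzification
    mf2 = gaussmf(x2, CENTERS[1], SIGMAS[1])
    w = np.outer(mf1, mf2).ravel()             # layer 2: rule firing strengths
    wn = w / w.sum()                           # layer 3: normalization
    f = CONSEQ @ np.array([x1, x2, 1.0])       # layer 4: linear consequents
    return float((wn * f).sum())               # layer 5: weighted sum output
```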

Pedro Ponce, Troy McDaniel, Arturo Molina, Omar Mata
Creating Weather Narratives

Information can be conveyed to the user by means of a narrative, modeled according to the user's context. A case in point is the weather, which can be perceived differently and with distinct levels of importance according to the user's context. For example, for a blind person, the weather is an important element in planning and moving between locations. In fact, weather can make it very difficult or even impossible for a blind person to successfully negotiate a path and navigate from one place to another. To provide proper information, narrated and delivered according to the person's context, this paper proposes a project for the creation of weather narratives targeted at specific types of users and contexts. The main objective is to add value to the data acquired through the observation of weather systems by interpreting it, in order to identify relevant information and automatically create narratives, in a conversational way or with machine metadata language. These narratives should communicate specific aspects of the evolution of weather systems in an efficient way, providing knowledge and insight in specific contexts and for specific purposes. Several language-generation systems currently exist that automatically create weather forecast reports based on previously processed and synthesized information. This paper proposes a wider and more comprehensive approach to weather-system phenomena, proposing a full process, from raw data to a contextualized narration, thus providing a methodology and a tool that might be used for various contexts and weather systems.

Arsénio Reis, Margarida Liberato, Hugo Paredes, Paulo Martins, João Barroso
RingBoard 2.0 – A Dynamic Virtual Keyboard Using Smart Vision

Computers have evolved throughout the digital era, becoming more powerful, smaller, and cheaper. However, they still lack basic accessibility features that appeal to all users. They can be controlled with voice and eye movement, but there is still much work to be done. This paper presents RingBoard 2.0, a dynamic virtual keyboard that uses computer vision to recognize and track hand movements and gestures. It allows for basic input to a computer using a web camera. The application was built to provide additional accessibility features for those who experience tremors or limited motor capability in their hands, which make it difficult to interact with a standard keyboard and mouse. At its core, it is built to recognize any form of a hand and can accurately track it, regardless of sporadic movement. This paper is an extension of previous work describing touch input for a computer using the HP Sprout [2].

Taylor Ripke, Eric O’Sullivan, Tony Morelli
Introducing Pneumatic Actuators in Haptic Training Simulators and Medical Tools

Simulators have traditionally been used for centuries in medical training, as trainees have to improve their skills before practicing on a real patient. Nowadays, mechatronic technology has opened the way to more evolved solutions enabling objective assessment and dedicated pedagogic scenarios. Trainees can now practice in virtual environments on various body parts, with common and rare pathologies, and for any kind of patient (slim, elderly ...). But medical students need kinesthetic feedback in order to learn effectively. The gestures to acquire vary according to medical specialty: needle insertion in rheumatology or anesthesia, forceps installation during difficult births ... Simulators reproducing such gestures require haptic interfaces with a variable rendered stiffness, featuring what are commonly called Variable Stiffness Actuators (VSAs), which are difficult to implement with off-the-shelf devices. Existing solutions do not always fit the requirements because of their significant size. In contrast, pneumatic technology is low-cost, available off-the-shelf, and has a better mass-power ratio. Its main drawbacks are its non-linear dynamics, which implies more complex control laws than with electric motors, and its need for a compressed air supply. The Ampère research laboratory has developed over the last decade haptic solutions based on pneumatic actuation, applied to a birth simulator, an epidural needle insertion simulator, a pneumatic master for remote ultrasonography, and, more recently, a needle-insertion-under-ultrasonography simulator. This paper recalls the scientific approaches in the literature on pneumatic actuation for simulation and tools in the medical context. It is illustrated with the aforementioned applications to highlight the benefits of this technology as a replacement for, or in hybrid use with, classical electric actuators.

Thibault Sénac, Arnaud Lelevé, Richard Moreau, Minh Tu Pham, Cyril Novales, Laurence Nouaille, Pierre Vieyres
ANA: A Natural Language System with Multimodal Interaction for People Who Have Tetraplegia

To interact with a computer, users with tetraplegia must use special tools or devices that, in most cases, require great effort. In online education, these tools often become a distraction, which might hinder learning. Solutions like tongue mice, smart glasses, and computer vision systems, although promising, still face usability problems. This paper introduces ANA, a natural language system that can hear the student and see what is being presented on the interface. With this new affordance, learning objects (LOs) can have their own grammar, which allows much more natural voice interaction. LOs respond either by audio or by performing the requested action. Tests performed with people with tetraplegia show that the creation of such a shared workspace brings a statistically significant reduction in effort while taking online lessons and their respective workshops.

Maikon Soares, Lana Mesquita, Francisco Oliveira, Liliana Rodrigues
An Investigation of Figure Recognition with Electrostatic Tactile Display

People who are visually impaired must obtain shape information in a tactile manner. However, existing conventional graphics are static. We prepared a more useful, dynamic tactile display, aiming to allow people who are visually impaired to recognize and draw figures via tactile feedback. We developed an electrostatic force-based tactile display and performed two preliminary evaluative experiments. We measured figure recognition rates and explored how users perceived figures displayed in a tactile manner. We describe the results and planned future improvements.

Hirobumi Tomita, Shotaro Agatsuma, Ruiyun Wang, Shin Takahashi, Satoshi Saga, Hiroyuki Kajimoto
A Survey of the Constraints Encountered in Dynamic Vision-Based Sign Language Hand Gesture Recognition

Vision-based hand gesture recognition has received attention in the recent past, and much research is being conducted on the topic. However, achieving a robust real-time vision-based sign language hand gesture recognition system is still a challenge because of various limitations (the term limitation is used in this study interchangeably with constraint or challenge, referring to the problems that can be or are encountered in the process of implementing a vision-based hand gesture recognition system). These limitations include multiple contexts and interpretations of gestures as well as the complex, non-rigid characteristics of the hand. This paper exposes the constraints encountered in the image acquisition, image segmentation and tracking, feature extraction, and gesture classification phases of vision-based sign language hand gesture recognition. It also highlights the various algorithms that have been used to address these problems. This paper will be useful to new as well as experienced researchers in this field, and is envisaged to act as a reference point for new researchers in vision-based hand gesture recognition on the journey towards achieving a robust system that can recognize full sign language.

Ruth Wario, Casam Nyaga

Assistive Environments

Frontmatter
Quantifying Differences Between Child and Adult Motion Based on Gait Features

Previous work has shown that motion performed by children is perceivably different from that performed by adults. What exactly is being perceived has not been identified: what are the quantifiable differences between child and adult motion for different actions? In this paper, we used data captured with the Microsoft Kinect from 10 children (ages 5 to 9) and 10 adults performing four dynamic actions (walk in place, walk in place as fast as you can, run in place, run in place as fast as you can). We computed spatial and temporal features of these motions from gait analysis, and found that temporal features such as step time, cycle time, cycle frequency, and cadence are different in the motion of children compared to that of adults. Children moved faster and completed more steps in the same time as adults. We discuss implications of our results for improving whole-body interaction experiences for children.
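The temporal features named above can be computed directly from footstep event timestamps. The sketch below uses standard gait-analysis definitions and assumes heel-strike times (alternating feet, at least three events) have already been extracted from the Kinect joint data; it is not the authors' analysis code.

```python
# Temporal gait features from heel-strike timestamps (standard definitions).
import numpy as np

def temporal_gait_features(strike_times):
    """strike_times: sorted heel-strike timestamps (s), alternating feet."""
    t = np.asarray(strike_times, dtype=float)
    step_times = np.diff(t)               # opposite-foot strike to strike
    cycle_times = t[2:] - t[:-2]          # same-foot strike to strike
    duration = t[-1] - t[0]
    return {
        "mean_step_time": step_times.mean(),           # seconds per step
        "mean_cycle_time": cycle_times.mean(),         # seconds per gait cycle
        "cycle_frequency": 1.0 / cycle_times.mean(),   # cycles per second
        "cadence": 60.0 * len(step_times) / duration,  # steps per minute
    }
```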

Aishat Aloba, Annie Luc, Julia Woodward, Yuzhu Dong, Rong Zhang, Eakta Jain, Lisa Anthony
Learning User Preferences via Reinforcement Learning with Spatial Interface Valuing

Interactive machine learning is concerned with creating systems that operate in environments alongside humans to achieve a task. A typical use is to extend or amplify the capabilities of a human in cognitive or physical ways, requiring the machine to adapt to the user's intentions and preferences. Often, this takes the form of a human operator providing some type of feedback to the agent, which can be explicit feedback, implicit feedback, or a combination of both. Explicit feedback, such as a mouse click, carries a high cognitive load. The focus of this study is to extend the current state of the art in interactive machine learning by demonstrating that agents can learn a human user's behavior and adapt to preferences with a reduced amount of explicit human feedback in a mixed-feedback setting. The learning agent perceives a value of its own behavior from hand gestures given via a spatial interface, a feedback mechanism termed Spatial Interface Valuing. The method is evaluated experimentally in a simulated environment for a grasping task using a robotic arm with variable grip settings. Preliminary results indicate that learning agents using Spatial Interface Valuing can learn a value function mapping spatial gestures to expected future rewards much more quickly than the same agents receiving only explicit feedback, demonstrating that an agent perceiving feedback from a human user via a spatial interface can serve as an effective complement to existing approaches.
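A toy sketch of the mixed-feedback idea: a tabular Q-learning agent whose reward combines sparse explicit feedback with a scalar valuation perceived from a spatial gesture. Everything here (the action set, the gesture-to-value mapping, the state encoding) is an assumption for illustration; the paper's agent and grasping simulator are more elaborate.

```python
# Tabular Q-learning with a mixed human-feedback reward signal.
import random
from collections import defaultdict

Q = defaultdict(float)
ALPHA, GAMMA = 0.1, 0.95
ACTIONS = ["tighten_grip", "loosen_grip", "hold"]  # hypothetical action set

def gesture_value(hand_pose):
    """Map a spatial gesture to a scalar valuation in [-1, 1].
    A real system would derive this from tracked hand keypoints."""
    return max(-1.0, min(1.0, hand_pose))

def update(state, action, explicit_reward, hand_pose, next_state):
    r = explicit_reward + gesture_value(hand_pose)  # mixed feedback signal
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

def act(state, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(ACTIONS)       # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit learned values
```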

Miguel Alonso Jr.
Adaptive Status Arrivals Policy (ASAP) Delivering Fresh Information (Minimise Peak Age) in Real World Scenarios

Real-time systems make their decisions based on information communicated from sensors. Consequently, delivering information in a timely manner is critical to such systems. In this paper, a policy for delivering fresh information (i.e., minimising the Peak Age of the information) is proposed. The proposed policy, the Adaptive Status Arrivals Policy (ASAP), adaptively controls the timing between updates to enhance the Peak Age (PA) performance of real-time systems. First, an optimal value for the inter-arrival rate is derived. Afterwards, we implemented the policy in three scenarios and measured ASAP's PA performance. The experiments showed that ASAP is able to approach the theoretically optimal PA performance. Moreover, it can deliver fresh information in scenarios where the server is located in the cloud.
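To make the Peak Age metric concrete, the following toy simulation measures average peak age for a stop-and-wait update source with a fixed inter-arrival wait; sweeping that wait exposes the trade-off that ASAP adapts online (the adaptation logic itself is not reproduced here, and the exponential service time is an assumption).

```python
# Toy measurement of Peak Age (PA) for a stop-and-wait status-update source.
import random

def simulate_peak_age(inter_arrival, mean_service, n_updates=10000, seed=1):
    rng = random.Random(seed)
    peaks = []
    prev_gen = None
    t = 0.0
    for _ in range(n_updates):
        gen = t                                  # update generation time
        delivery = gen + rng.expovariate(1.0 / mean_service)
        if prev_gen is not None:
            # Peak age: information age just before this delivery, i.e.
            # time since the previously delivered update was generated.
            peaks.append(delivery - prev_gen)
        prev_gen = gen
        t = max(delivery, gen + inter_arrival)   # wait before the next update
    return sum(peaks) / len(peaks)

# Sweeping 'inter_arrival' reveals the interval ASAP tries to find:
# print(min((simulate_peak_age(w, 1.0), w) for w in [0.1, 0.5, 1.0, 2.0]))
```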

Basel Barakat, Simeon Keates, Ian Wassell, Kamran Arshad
A Feasibility Study of Designing a Family-Caregiver-Centred Dementia Care Handbook

This study aims to explore the feasibility of designing a family-caregiver-centred dementia care handbook. The objectives of this study were to (1) test the readability and understandability of existing written health education materials (WHEMs), (2) identify barriers to meeting dementia family caregivers' (DFCs') information needs in the context in which caregiving occurs, and (3) propose best-practice strategies and recommendations for redesigning WHEMs for DFCs with diverse health literacy skills. An innovative product design and development (IPDD) approach was implemented for the design and development process of the proposed WHEM for DFCs, named IDEA. In-depth interviews with healthcare experts were conducted and analysed to determine their limitations and actionable recommendations for possible changes. Based on the research findings, we clarified current barriers to information experienced by DFCs and designated nine prominent themes and three essential elements to be included in the design protocol. Finally, best-practice strategies and recommendations were proposed for redesigning a family-caregiver-centred dementia care handbook that may help to enhance the role of DFCs as active caregivers.

Ting-Ya Chang, Kevin C. Tseng
Occupational and Nonwork Stressors Among Female Physicians in Taiwan: A Single Case Study

The high suicide rate among doctors is a significant issue in many countries, especially among female doctors, for whom the rate is more than two times that of the general population. Compared to many countries, Taiwan has a much lower proportion of female physicians relative to male physicians, which has been suggested as a negative factor affecting the suicide rate. Previous studies of female physician stressors are few and focus mainly on occupational stress; nonwork stress has not been well researched. This study aims to explore the feasibility of providing a comprehensive evaluation of all stressors in female doctors' daily lives by examining a cohort of Taiwanese female doctors. The Maslach Burnout Inventory (MBI) and the Brief Symptom Rating Scale (BSRS-5) are used to screen participants for occupational stress and depressive attributes, respectively. In this study, an interview is conducted with a participant, and factors contributing to lifestyle and occupational stress are identified. The results indicate that family issues, primarily child-rearing, act as the largest stressor in the participant's life, outweighing even the traditionally studied occupational stressors for female physicians.

Kuang-Ting Cheng, Kevin C. Tseng
Classification of Physical Exercise Intensity Based on Facial Expression Using Deep Neural Network

If done properly, physical exercise can help maintain fitness and health. The benefits of physical exercise can be increased with real-time monitoring that measures physical exercise intensity, which refers to how hard it is for a person to perform a specific task. This parameter can be estimated using various sensors, including contactless technology. Physical exercise intensity is usually synchronous with heart rate; therefore, if we measure heart rate, we can define a particular level of physical exercise. In this paper, we propose a Convolutional Neural Network (CNN) to classify physical exercise intensity based on the analysis of facial images extracted from a video collected during sub-maximal exercise on a stationary bicycle, according to a standard protocol. The time slots of the video used to extract the frames were determined by heart rate. We tested different CNN models using the individual color components and grayscale images as input parameters. The experiments were carried out separately with various numbers of classes, with the ground-truth level for each class defined by heart rate. The dataset was prepared to classify physical exercise intensity into two, three, and four classes. For each color model, a CNN was trained and tested, and model performance was presented using a confusion matrix for each case. The most significant color channel in terms of accuracy was green. The average model accuracy was 100%, 99%, and 96% for two-, three-, and four-class classification, respectively.
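A minimal sketch of a CNN of the kind described, taking a single color channel of face crops as input; the layer sizes, input resolution, and training setup are illustrative assumptions rather than the paper's architecture.

```python
# Minimal CNN sketch for n-class intensity classification from
# single-channel (e.g., green) face crops.
import tensorflow as tf

def build_model(n_classes, input_shape=(128, 128, 1)):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu",
                               input_shape=input_shape),  # one color channel
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Separate models would be trained for the two-, three-, and four-class setups.
model = build_model(n_classes=3)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```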

Salik Ram Khanal, Jaime Sampaio, Joao Barroso, Vitor Filipe
Effect of Differences in the Meal Ingestion Amount on the Electrogastrogram Using Non-linear Analysis

This paper reports a study of the impact of differences in meal ingestion amount on electrogastrograms. The study was performed by recording an electrogastrogram and an electrocardiogram of eight young men for 60 min, once before and once after each test subject ingested meals of 800 kcal and 400 kcal. The results showed that meal ingestion affected the power spectral density of the tachygastria range (3.7–5.0 cpm) by significantly increasing its value after the meal. The differences in meal ingestion amount were expressed in the power spectral density of the colon range (6.0–8.0 cpm), which significantly increased after the meal, but only when the subject ingested the 800-kcal meal.
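The band-power analysis described above can be reproduced generically with Welch's method, converting cycles per minute (cpm) to hertz; the sampling rate and window length below are assumptions, not the study's recording settings.

```python
# Sketch: integrated power spectral density of an electrogastrogram in a
# cpm-defined band (e.g., tachygastria, 3.7-5.0 cpm) via Welch's method.
import numpy as np
from scipy.signal import welch

def band_power(egg_signal, fs=1.0, band_cpm=(3.7, 5.0)):
    """egg_signal: 1-D EGG samples; fs: sampling rate in Hz.
    cpm (cycles per minute) are converted to Hz by dividing by 60."""
    lo, hi = band_cpm[0] / 60.0, band_cpm[1] / 60.0
    f, pxx = welch(egg_signal, fs=fs, nperseg=min(len(egg_signal), 4096))
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])  # integrated PSD within the band
```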

Fumiya Kinoshita, Kazuya Miyanaga, Kosuke Fujita, Hideaki Touyama
MilkyWay: A Toolbox for Prototyping Collaborative Mobile-Based Interaction Techniques

Besides traditional multitouch input, mobile devices provide various possibilities for interacting in a physical, device-based manner thanks to their built-in hardware. Applying such interaction techniques allows for sharing content easily, e.g. by literally pouring content from one device into another, or accessing device functions quickly, e.g. by placing the device face down to mute incoming calls. Such mobile-based interaction techniques are characterized by movements and concrete positions in real space. Even though these interactions may provide many advantages in everyday life, they have limited visibility in interaction design due to the complexity of sensor processing. Hence, mobile-based interactions are often integrated, if at all, at late design stages. To support testing interactive ideas in early design stages, we propose MilkyWay, a toolbox for prototyping collocated collaborative mobile-based interaction techniques. MilkyWay includes an API and a mobile application. It enables easily building up mobile interactive spaces between multiple collocated devices, as well as prototyping interactions based on device sensors through a programming-by-demonstration approach. Appropriate sensors are selected and combined automatically to increase tool support. We demonstrate our approach with a proof-of-concept implementation of a collaborative Business Model Canvas (BMC) application.

Mandy Korzetz, Romina Kühn, Karl Kegel, Leon Georgi, Franz-Wilhelm Schumann, Thomas Schlegel
@HOME: Exploring the Role of Ambient Computing for Older Adults

Building on results of a recent global study as well as additional exploratory research focused on Aging in Place, this paper reflects on the role that intelligent systems and ambient computing may play in future homes and cities, with a specific emphasis on populations aged 65 and beyond. This paper is divided into five sections. The first section provides an introductory background, which outlines context, vision, and implications around the development of ambient computing and smart home technologies for the 65+ population. The second part of the paper overviews the methodological approaches adopted during the research activity at the center of this paper. The third section summarizes pertinent findings and a discussion on the opportunities offered by intelligent, ambient systems for the 65+ population follows. While this fourth section will specifically focus on the smart home, it will also provide reflections on opportunities and applications in the context of autonomous vehicles and smart cities. The fifth and last section offers conclusive remarks, including implications for developers and designers that are shaping ambient computing usages and technologies for the 65+ population. The paper ultimately advocates for adopting Participatory Design [1] approaches, to ensure that intelligent and ambient technologies are developed with (instead of for) end users.

Daria Loi
Designing and Evaluating Technology for the Dependent Elderly in Their Homes

The ageing population and the increasing longevity of individuals is a challenging reality for healthcare today. Longevity often leads to increased dependence and the need for continued care, which is often left to informal caregivers given the inability of the elderly care network to provide it. The informal care provided to the dependent elderly occurs either at the caregiver's or the elderly person's home. Technology application in healthcare has long been attracting the attention of engineers, especially in providing support for health recovery and maintaining therapy practices. Due to major advances in technology, particularly in movement-capture optic systems and information extraction through digital image analysis, support systems are being created to monitor how therapeutic plans are carried out, as well as to evaluate people's physical recovery or to assist healthcare professionals and informal caregivers in providing care for dependent elders. This paper reports on the accomplishments of a project whose general objective is to use information and communication technologies to develop a prototype system focused on monitoring and assisting the execution of a therapeutic plan integrating physical mobilization and medication.

Maria João Monteiro, Isabel Barroso, Vitor Rodrigues, Salviano Soares, João Barroso, Arsénio Reis
Applying Universal Design Principles in Emergency Situations
An Exploratory Analysis on the Need for Change in Emergency Management

The United Nations Convention on the Rights of Persons with Disabilities (CRPD) obligates States to take all necessary measures to ensure the protection and safety of persons with disabilities in emergency situations. While these requirements represent one aspect of this article's aims, it also focuses on how another paradigm, universal design, can and should offer a useful approach to emergency situations and management. Referring once more to the CRPD, universal design is defined as the "design of products, environments, programs and services to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design". Consequently, universal design should provide a valid framework for identifying and removing usability and accessibility barriers in emergency situations. Using a heuristic analysis, this article offers a preliminary reflection on the following question: "To what extent can universal design principles be applied to emergency management situations?".

Cristina Paupini, George A. Giannoumis
Digital Volunteers in Disaster Response: Accessibility Challenges

The emergence of Digital Humanitarian Volunteer (DHV) movements when disaster strikes has drawn the attention of researchers and practitioners in the emergency management and humanitarian domain. While there are established players in this rapidly developing field, there are still unresolved challenges, including the accessibility of their digital tools and platforms. The purposes of this paper are twofold. First, it describes the background, impact, and future potential of the DHV movement, and discusses the importance of universal design for the digital tools and platforms used for crowdsourcing crisis information. Second, it shows how a lack of concern for universal design and accessibility can have a significant negative impact on the practical use of these tools, not only for people with disabilities but for anyone, in particular DHVs who may be affected by situational disabilities in the field in an emergency situation. The insights from the findings serve as feedback on how to improve digital humanitarian response by broadening the base of potential volunteers as well as making the related tools and platforms more reliably usable in the field.

Jaziar Radianti, Terje Gjøsæter
The Contribution of Social Networks to the Technological Experience of Elderly Users

Social networks have changed the way people and companies communicate. Nowadays, more and more elderly people use these platforms to communicate with friends and family and to access news, entertainment, and education. This study focuses on the elderly population and its use of social networks, and analyzes the contribution of these platforms to users' technological experiences and whether this interaction contributes to the quality of life of this population. A survey based on experience economy theory was disseminated online through Facebook to gauge users' behavior. A Social Networks User Experience (SNUX) model was developed to study the elderly-user experience associated with the use of social networks, and was analyzed through structural equation modeling using SmartPLS 2.0. From the results obtained, it was concluded that social networks can contribute to increased well-being of the older population, mainly through the technological experience associated with the use of these platforms, whose environment contributes to the entertainment and education of these users.

Célia M. Q. Ramos, João M. F. Rodrigues
Automatic Exercise Assistance for the Elderly Using Real-Time Adaptation to Performance and Affect

This work presents the design of a system and methodology for reducing the risk of locomotive syndrome among the elderly through real-time at-home exercise assistance, delivered via intensity modulation of a worn soft exoskeleton. An Adaptive Neural Network (ANN) is proposed for the prediction of locomotive risk based on squat exercise performance. A preliminary pilot evaluation was conducted to determine how well these two performance metrics relate, by training the ANN to predict scores on three standard tests for locomotive risk from features of joint tracking data. The promising initial results of this evaluation are presented, with discussion of the future implementation of affective classification and a combined adaptation strategy.

Ramin Tadayon, Antonio Vega Ramirez, Swagata Das, Yusuke Kishishita, Masataka Yamamoto, Yuichi Kurita
EEG Systems for Educational Neuroscience

Numerous studies suggest that digital technology has an important role to play in the physical and mental functioning, and generally in the quality of life, of elderly people. Many digital serious games have been developed to enhance cognitive functions. These games incorporate a multitude of multimedia elements that are perceived as sensory stimuli. To implement an effective digital environment, all sensory representations have to be investigated for compatibility with the visual, acoustic, and tactile perception of the user. An effective way to examine those stimuli is to study users' brain functioning, especially its electrical activity, using electroencephalographic (EEG) recording systems. In recent years, there has been a growth in low-cost EEG systems, which are used in various fields such as educational research, serious games, mental and physical health, and entertainment. This study investigates whether a wireless low-cost EEG system (Emotiv EPOC) can deliver qualitative results comparable to a research-grade system (g.tec) when recording EEG data in an event-related brain potentials setup concerning the differentiation of the semantic content of two image categories. Our results show that, in terms of signal quality, the Emotiv system lags behind g.tec; however, based on participants' questionnaire answers, Emotiv excels in ease of use. It can be used for continuous EEG recordings in game environments and could be useful for applications such as games for ageing.

Angeliki Tsiara, Tassos Anastasios Mikropoulos, Panagiota Chalki
A Soft Exoskeleton Jacket with Pneumatic Gel Muscles for Human Motion Interaction

This work proposes using an assistive device to augment the perception of human motion force applied from one subject to another through an avatar. The experiment presents an avatar augmentation that can provide the feeling of motion with two degrees of freedom, at the elbow and shoulder, acquired through algorithmic detection of input angles. To generate the appropriate motion, the interface comprises a depth sensor, an ESP32 embedded board, electric valves, and pneumatic gel muscles. An evaluation is presented that confirms the performance of the suit by measuring the latency of the system. The experimental results demonstrate that the developed suit can convey the motion of one user to another with a delay of 670 ms.

Antonio Vega Ramirez, Yuichi Kurita
Backmatter
Metadata
Title
Universal Access in Human-Computer Interaction. Multimodality and Assistive Environments
Edited by
Dr. Margherita Antona
Prof. Constantine Stephanidis
Copyright Year
2019
Electronic ISBN
978-3-030-23563-5
Print ISBN
978-3-030-23562-8
DOI
https://doi.org/10.1007/978-3-030-23563-5
