
2023 | Book

Human-Computer Interaction – INTERACT 2023

19th IFIP TC13 International Conference, York, UK, August 28 – September 1, 2023, Proceedings, Part I

Edited by: José Abdelnour Nocera, Marta Kristín Lárusdóttir, Helen Petrie, Antonio Piccinno, Marco Winckler

Publisher: Springer Nature Switzerland

Book series: Lecture Notes in Computer Science


About this book

The four-volume set LNCS 14442–14445 constitutes the proceedings of the 19th IFIP TC 13 International Conference on Human-Computer Interaction, INTERACT 2023, held in York, UK, in August/September 2023. The 71 full papers and 58 short papers in this book were carefully reviewed and selected from 406 submissions. They were organized in the following topical sections: 3D interaction; accessibility; accessibility and aging; accessibility for auditory/hearing disabilities; co-design; cybersecurity and trust; data physicalisation and cross-device interaction; eyes-free, gesture and sign-language interaction; haptic interaction and healthcare applications; self-monitoring; human-robot interaction; information visualization and 3D interaction; interacting with children; interaction with conversational agents; methods for HCI; model-based UI design and testing; monitoring disease, stress and risk perception in 3D environments and multisensory interaction; VR experiences; natural language processing and AI explainability; online collaboration and cooperative work; recommendation systems and AI explainability.

Table of Contents

Frontmatter

3D Interaction

Frontmatter
AHO-Guide: Automatically Guiding the Head Orientation of a Local User in Augmented Reality to Realign the Field of View with Remote Users

Augmented Reality (AR) offers significant benefits for remote collaboration scenarios. However, when using a Head-Mounted Display (HMD), remote users do not always see exactly what local users are looking at. This happens when there is a spatial offset between the center of the Field of View (FoV) of the HMD’s cameras and the center of the FoV of the user. Such an offset can limit the ability of remote users to see objects of interest, creating confusion and impeding collaboration. To address this issue, we propose the AHO-Guide techniques: Automated Head Orientation Guidance techniques in AR with an HMD. Their goal is to encourage a local HMD user to adjust their head orientation so that remote users have an appropriate FoV of the scene. This paper presents the design and evaluation of the AHO-Guide techniques. We then propose a set of recommendations based on the encouraging results of our experimental study.

Lucas Pometti, Charles Bailly, Julien Castet
Point- and Volume-Based Multi-object Acquisition in VR

Multi-object acquisition is indispensable for many VR applications. Commonly, users select a group of objects of interest to perform further transformation or analysis. In this paper, we present three multi-object selection techniques that were derived from a two-dimensional design space. The primary design dimension concerns whether a technique acquires targets through point-based methods (selecting one object at a time) or volume-based methods (selecting a set of objects within a selection volume). The secondary design dimension examines the mechanisms of selection and deselection (cancelling the selection of unwanted objects). We compared these techniques through a user study, emphasizing scenarios with more randomly distributed objects. We discovered, for example, that the point-based technique was more efficient and robust than the volume-based techniques in environments where the targets did not follow a specific layout. We also found that users applied the deselection mechanism mostly for error correction. We provide an in-depth discussion of our findings and further distill design implications for future applications that leverage multi-object acquisition techniques in VR.

Zhiqing Wu, Difeng Yu, Jorge Goncalves
Using Mid-Air Haptics to Guide Mid-Air Interactions

When users interact with mid-air gesture-based interfaces, it is not always clear what interactions are available or how they might be executed. Mid-air interfaces offer no tactile affordances, pushing systems to rely on other modalities (e.g. visual) to guide users regarding how to interact with the interface. However, these alternative modalities are not always appropriate or feasible (e.g. for eyes-free interactions), meaning that such interfaces cannot be learned through touch alone. Although ultrasound phased arrays can convey contactless haptic information in mid-air, this technology has so far been limited to providing feedback on user interactions. In this paper, we explore the feasibility of using mid-air haptics to guide gestures in mid-air. Specifically, we present approaches to guide the user’s hand in cardinal directions, to execute a hand gesture, and to navigate a 2D mid-air plane, which we tested with 27 participants. After reporting encouraging results, which suggest good accuracy and relatively low workload, we reflect on the feasibility and challenges of using haptic guidance mechanisms in mid-air.

Timothy Neate, Sergio Alvares Maffra, William Frier, Zihao You, Stephanie Wilson

Accessibility

Frontmatter
Brilliance and Resilience: A New Perspective to the Challenges, Practices and Needs of University Students with Visual Impairments in India

People with visual impairments in India have low literacy rates, and only a few pursue higher education at the country's top universities. We present insights into the educational experiences of these few university students with visual impairments, based on the Frame of Interdependence. We found that educational challenges arise when interdependence fails due to restricted or misfitted assistance from social relations and ableist social interactions. Analysis of practices to overcome these challenges through the lens of Resilience Theory revealed that students develop a sense of self-confidence through successful academic experiences, internalise external stressors into intrinsic motivation, and find ways to navigate inaccessibility with the available social resources. In addition, students express the need to increase the integration of assistive technologies in education and to facilitate social integration. Finally, we discuss the implications of these findings for equitable and inclusive education practices.

Tigmanshu Bhatnagar, Vikas Upadhyay, P. V. Madhusudhan Rao, Nicolai Marquardt, Mark Miodownik, Catherine Holloway
Mapping Virtual Reality Controls to Inform Design of Accessible User Experiences

A lack of accessible controls remains a barrier to disabled users engaging in virtual reality experiences. This paper presents a modified cognitive walkthrough of 120 virtual reality applications that identifies 2,284 pairs of operant and resultant actions and creates an inventory of domain objects and their operant and resultant actions in the virtual space. This inventory captures both the form and prevalence of interactions that are expected of users in current virtual reality design. An analysis of this inventory reveals that while many barriers could be addressed by existing solutions, those options are rarely present in current designs. Further, there is a set of barriers related to embodied controls that represents opportunities and challenges for new and innovative designs in virtual reality.

Christopher Power, Paul Cairns, Triskal DeHaven
WAM-Studio: A Web-Based Digital Audio Workstation to Empower Cochlear Implant Users

This paper introduces WAM-Studio, an online Digital Audio Workstation (DAW) for recording, mixing, producing, and playing multitrack music. WAM-Studio advances music production by providing a web-based environment based on the visual programming paradigm of end-user programming (EUP). In this paper, we describe how users can associate individual tracks with real-time audio processing plugins that can then be customized to produce a desired audio effect. Moreover, we describe how users can visually create macros to control multiple plugin parameters at once. While programming macro controls and customizing track parameters have many applications in the music industry, they also present an opportunity to afford Hard-of-Hearing users greater control over their music listening. To illustrate the potential of WAM-Studio, we present a case study illustrating how this tool could be used by Hard-of-Hearing users to modify individual musical elements in a multi-track listening context to create a more enjoyable listening experience.

Michel Buffa, Antoine Vidal-Mazuy, Lloyd May, Marco Winckler
Web Accessibility in Higher Education in Norway: To What Extent are University Websites Accessible?

University websites should be accessible and easy to navigate for all users, regardless of their ability or disability. However, many university websites still have inaccessible features, even in countries where web accessibility is a legal requirement for public organizations. This study aims to investigate the accessibility of Norwegian university websites using both manual and tool-based evaluation methods. The results reveal significant accessibility violations in 6 of 10 websites, despite the implementation of regulatory frameworks since 2013. The most common violations include an absence of alternative text and very low contrast. Other frequent violations are a lack of keyboard support, lengthy navigation, empty buttons, missing form labels, empty links, and empty headings. These issues are considered critical and need to be addressed urgently because incorrect design elements and navigation problems can cause confusion and loss of control for users, particularly those relying on screen readers. The study indicates that the above-mentioned violations result from insufficient awareness and understanding of the accessibility prerequisites of individuals with a wide variety of characteristics.

Yavuz Inal, Anne Britt Torkildsby
Wesee: Digital Cultural Heritage Interpretation for Blind and Low Vision People

While museums worldwide are introducing digital technology to support heritage interpretation for visitors, blind and low vision (BLV) people are still excluded by various barriers. What BLV people need in museums is an in-depth learning and independent exploration process. However, the audio guides provided in museums mostly offer simple descriptions, and cultural relics cannot be touched, which fails to meet the cultural needs of BLV people. In this paper, we designed and implemented Wesee, an interactive platform that combines interactive narrative, voice interaction, and tactile interaction to help BLV people experience cultural heritage more independently and interactively. A preliminary evaluation was conducted with 20 BLV participants. The results show that the platform is effective in helping BLV people experience cultural heritage.

Yalan Luo, Weiyue Lin, Yuhan Liu, Xiaomei Nie, Xiang Qian, Hanyu Guo

Accessibility and Aging

Frontmatter
Accessibility Inspections of Mobile Applications by Professionals with Different Expertise Levels: An Empirical Study Comparing with User Evaluations

Providing accessibility in mobile applications is essential for their appropriate use by people with disabilities. Different evaluation methods yield different results, and professionals and researchers must be aware of the types of results obtained by user evaluations and by inspections performed by professionals with different expertise levels. This study compared the results of manual inspections of mobile apps performed by two groups of professionals with different expertise levels, and compared both with user evaluations conducted by users with visual disabilities. Usability evaluations of the Saraiva and Receita Federal applications, carried out with nine visually impaired users, encountered 189 problems, grouped into 39 violations of accessibility guidelines. The applications were then inspected by two groups of professionals: 17 specialists in different areas of software development (full-stack developers, testers, and front-end developers) and ten specialists in Human-Computer Interaction (HCI) with previous experience in accessibility. The results indicated differences between the accessibility assessment methods, including a difference in the number of violations encountered by the software development specialists (DEV group) compared to the HCI specialists (HCI group). Inspections by HCI experts and testing with users with disabilities encountered a greater diversity of problem types, and HCI professionals showed a broader repertoire of accessibility inspection approaches. The results allow an initial understanding of how much evaluations by developers without an HCI background can cover compared to inspections by HCI specialists and evaluations with users with disabilities.

Delvani Antônio Mateus, Simone Bacellar Leal Ferreira, Maurício Ronny de Almeida Souza, André Pimenta Freire
Evaluating the Acceptance of a Software Application Designed to Assist Communication for People with Parkinson’s Disease

Parkinson’s disease (PD) is a neurodegenerative disorder that affects a large number of people. People with PD may have trouble speaking: impeded speech affects 70% of people with PD, and this can have particularly harmful consequences linked to social exclusion and isolation. In this context, we have been working for the last three years on the development of a software application to assist people with PD in communicating with others. To ensure the use and adoption of this application, an evaluation of its acceptance was carried out using the Unified Theory of Acceptance and Use of Technology (UTAUT). The results showed acceptance of the application by people with PD who have serious communication difficulties. This paper presents this evaluation, from its design to the discussion of the results.

Julia Greenfield, Káthia Marçal de Oliveira, Véronique Delcroix, Sophie Lepreux, Christophe Kolski, Anne Blanchard-Dauphin
“The Relief is Amazing”: An In-situ Short Field Evaluation of a Personal Voice Assistive Technology for a User Living with Dementia

We present a first short field evaluation of IntraVox, a smart home assistive technology that has the potential to support older adults with dementia living independently at home. Based on sensor data, IntraVox uses a personalized human voice to send prompts and reminders to end-users to conduct daily life activities. During a short field study of seven days, IntraVox was installed in the home of an end-user with advanced dementia to prompt a lifestyle change. Additional feedback was collected from their family supporter and three carers. Results show that IntraVox has the potential to prompt end-users with complex needs into changing their actions. In particular, the family supporter found that IntraVox was “100% successful” in that it allowed the family more time together rather than focusing on caregiving, and the relief afforded by the system was considered “amazing”. Thus, we argue the system has the potential to improve the quality of life of both the end-users and their carers. These preliminary findings will inform future larger studies that will assess the usability and feasibility of such systems.

Ana-Maria Salai, Glenda Cook, Lars Erik Holmquist
Towards an Automatic Easy-to-Read Adaptation of Morphological Features in Spanish Texts

The Easy-to-Read (E2R) Methodology was created to improve the daily life of people with cognitive disabilities. This methodology aims to present clear and easily understood documents. The E2R Methodology includes, among others, a set of guidelines related to the writing of texts. Some of these guidelines focus on morphological features that may cause difficulties in reading comprehension. Examples of those guidelines are: (a) to avoid the use of adverbs ending in -mente (-ly in English), and (b) to avoid the use of superlative forms. Both linguistic structures are quite long, which is also related to another E2R guideline (“The use of long words should be avoided”). Currently, E2R guidelines are applied manually to create easy-to-read text materials. To help in such a manual process, our research line is focused on applying the E2R Methodology in Spanish texts in a (semi)-automatic fashion. Specifically, in this paper we present (a) the inclusive design approach for the development of E2R adaptation methods for avoiding adverbs ending in -mente and superlative forms, (b) the initial methods for adapting those morphological features to an E2R version, and (c) a preliminary user-based evaluation of the implementation of those methods.

Mari Carmen Suárez-Figueroa, Isam Diab, Álvaro González, Jesica Rivero-Espinosa

Accessibility for Auditory/Hearing Disabilities

Frontmatter
Challenges Faced by the Employed Indian DHH Community

One-sixth of the global Deaf or Hard-of-Hearing (DHH) population resides in India. However, most of the research on the DHH population is situated in the Global North. In this work, we study the accessibility issues faced by the DHH community in India by conducting 15 interviews and surveying 131 people. We focus on the employed DHH community for two reasons: (a) to gauge the effectiveness of the widespread intent to increase diversity, equity, and inclusion in workplaces, and (b) to establish the state of early adoption of (accessible) technology. Our work reveals that our participants face acute communication challenges at the workplace, primarily due to the non-availability of certified interpreters, critically impacting their outcomes at work. We report the consequent workarounds used, including the human infrastructure available to our participants and how it at times impacts their agency and privacy. We identify socio-cultural and linguistic contexts that contribute to our participants’ reduced language proficiency in both sign language and English. We also identify that our participants use a variety of technologies, from video conferencing tools to ride-hailing apps, and identify their current usability failings. Based on our findings, we recommend several assistive technologies, such as providing access to on-demand interpreters, and accessibility improvements for current video conferencing and smartphone telephony apps.

Advaith Sridhar, Roshni Poddar, Mohit Jain, Pratyush Kumar
Haptic Auditory Feedback for Enhanced Image Description: A Study of User Preferences and Performance

Our research has focused on improving the accessibility of mobile applications for blind or low vision (BLV) users, particularly with regard to images. Previous studies have shown that using spatial interaction can help BLV users create a mental model of the positions of objects within an image. In order to address the issue of limited image accessibility, we have developed three prototypes that utilize haptic feedback to reveal the positions of objects within an image. These prototypes use audio-haptic binding to make images more accessible to BLV users. We also conducted the first user study to evaluate the memorability, efficiency, preferences, and comfort level with haptic feedback of our prototypes for BLV individuals trying to locate multiple objects within an image. The results of the study indicate that the prototype combining haptic feedback with both audio and caption components was more accessible and was preferred over the other prototypes. Our work contributes to the advancement of digital image technologies that utilize haptic feedback to enhance the experience of BLV users.

Mallak Alkhathlan, M. L. Tlachac, Elke A. Rundensteiner
Using Colour and Brightness for Sound Zone Feedback

We investigate the use of colour and brightness for feedback from sound zone systems. User interaction with sound zones suffers from their being invisible. Hence, spatial properties such as volume, size, and overlaps need to be represented through, e.g., light. Two studies were conducted. In the first study (N = 27), participants experienced different colour and brightness values shown on an LED strip attached to a volume controller and related those to sound zone volume, size, and overlaps. In the second study (N = 36), participants created an overlap between two sound zones by turning up the volume, triggering 12 animated light patterns. Our findings show that brightness reflects the size of a sound zone well, and that instant patterns are better indicators of overlaps than gradual patterns. These contributions are useful for designing sound zone visualisations.

Stine S. Johansen, Peter Axel Nielsen, Kashmiri Stec, Jesper Kjeldskov

Co-design

Frontmatter
Common Objects for Programming Workshops in Non-Formal Learning Contexts

We investigate common objects as material support for programming workshops for children and adolescents in non-formal learning contexts. To this end, we engaged in a one-year participatory design process with a facilitator of programming workshops. Based on observations of workshops and interviews with the facilitator, we mapped out their artifact ecologies to investigate how the multiple artifacts and common objects were orchestrated by the facilitator and then adopted by the participants of the workshops. Building on these findings, we explored the development of a collaborative teaching tool, MicroTinker, through a participatory design process with the facilitator. This paper presents the results of our analyses and shows their constructive use to design technology in a non-formal learning setting.

Nathalie Bressa, Susanne Bødker, Clemens N. Klokmose, Eva Eriksson
Engaging a Project Consortium in Ethics-Aware Design and Research

Ethics is an important perspective in project work. For a research and development project, ethics plays a key role in creating a shared understanding of the societal goals and intended long-term impacts of the project. It is an essential part of designing novel solutions and an integral part of conducting research. However, ethics is typically an area dedicated to ethics experts only, even though it should be embedded in the work of all project participants. In a European project on smart manufacturing, we sought to involve the whole project consortium in discussing and considering ethics in design and research throughout the project. This paper describes our ethical approach and the results of the engagement activities. Finally, we discuss the practical means we applied to create awareness of and commitment towards ethics.

Päivi Heikkilä, Hanna Lammi, Susanna Aromaa
Exploring Emotions: Study of Five Design Workshops for Generating Ideas for Emotional Self-report Interfaces

Accurately reporting our emotions is essential for various purposes, such as mental health monitoring and annotating artificial intelligence datasets. However, emotions are complex and challenging to convey, and commonly used concepts such as valence and arousal can be difficult for users to understand correctly. Our main goal was to explore new ways to inform the design of affective self-report instruments that can bridge the gap between people's understanding of emotions and machine-interpretable data. In this paper, we present the findings of five design workshops held to generate ideas and solutions for representing emotion-related concepts and improving the design of affective self-report interfaces. The workshops yielded seven themes that informed the derivation of design implications. These implications include representing arousal using concepts such as shape, movement, and body-related elements; representing valence using facial emojis and color properties; prioritizing arousal in questioning; and facilitating user confirmation while preserving introspection.

Carla Nave, Francisco Nunes, Teresa Romão, Nuno Correia
Moving Away from the Blocks: Evaluating the Usability of EduBlocks for Supporting Children to Transition from Block-Based Programming

When learning to code, children and novice programmers often transition from block-based to traditional text-based programming environments. This paper explores the usability problems within a block-based authoring environment, EduBlocks, that may hinder children’s learning. Using domain-specific heuristics, a usability evaluation was performed by expert evaluators, which was later combined with data from an analysis of problems reported in Forums, to produce a corpus of usability problems. The corpus was subsequently analysed using thematic analysis, and seven design guidelines were synthesized. Using the guidelines, a model of interaction was created to inform the design of block-based authoring environments that support the transition to text-based authoring. The model examines the interplay between learning within a school environment to independently using the authoring environment and how the interface can support these differing scenarios. This paper contributes to the design of effective user interfaces to support children learning to code and provides guidelines for developers of hybrid authoring environments to support the transition away from blocks.

Gavin Sim, Mark Lochrie, Misbahu S. Zubair, Oliver Kerr, Matthew Bates

Cybersecurity and Trust

Frontmatter
Dark Finance: Exploring Deceptive Design in Investment Apps

This study explores how financial technology companies employ dark patterns to influence investors’ financial decision-making and behavior. We examined 26 mobile apps available in Norway that allow users to purchase stocks, funds, and cryptocurrencies, with the goal of identifying design strategies that may be deemed unethical. We detected several deceptive tactics deliberately devised to circumvent the purpose of the GDPR. Nearly all the studied apps incorporate dark patterns to varying degrees, and the level of manipulation differs between bank and non-bank apps. Banks have more transparent apps with fewer dark patterns; they give more importance to safeguarding users’ personal information than non-bank fintech companies and are less likely to exploit the data shared by users. Non-bank apps display more intrusive data policies and subpar default settings than banks, and they utilize deceptive practices to conceal pricing, encourage user interaction, and dissuade users from exiting the platform.

Ivana Rakovic, Yavuz Inal
Elements that Influence Transparency in Artificial Intelligent Systems - A Survey

Artificial Intelligence (AI) models operate as black boxes where most parts of the system are opaque to users. This reduces users’ trust in the system. Although the Human-Computer Interaction (HCI) community has proposed design practices to improve transparency, work that maps these practices to the interactive elements that influence AI transparency is still lacking. In this paper, we conduct an in-depth literature survey to identify elements that influence transparency in the field of HCI. Research has shown that transparency allows users to have a better sense of the accuracy, fairness, and privacy of a system. In this context, much research has been conducted on providing explanations for the decisions made by AI systems. Researchers have also studied the development of interactive interfaces that allow user interaction to improve the explanatory capability of systems. This literature review provides key insights about transparency and what the research community thinks about it. Based on the insights gained, we conclude that a simplified explanation of the AI system is key. We close the paper with our proposed idea of representing an AI system as an amalgamation of the AI model (algorithms), the data (input and output, including outcomes), and the user interface, since visual interpretations (e.g. Venn diagrams) can aid in understanding AI systems better and potentially make them more transparent.

Deepa Muralidhar, Rafik Belloum, Kathia Marçal de Oliveira, Ashwin Ashok
Empowering Users: Leveraging Interface Cues to Enhance Password Security

Passwords are a popular means of authentication for online accounts, but users struggle to compose and remember numerous passwords, resorting to insecure coping strategies. Prior research on graphical authentication schemes showed that modifying the interface can encourage more secure passwords. In this study (N = 59), we explored the use of implicit (website background and advertisements) and explicit (word suggestions) cues to influence password composition. We found that 60.59% of passwords were influenced by the interface cues. Our work discusses how designers can use these findings to improve authentication interfaces for better password security.

Yasmeen Abdrabou, Marco Asbeck, Ken Pfeuffer, Yomna Abdelrahman, Mariam Hassib, Florian Alt
Friendly Folk Advice: Exploring Cybersecurity Information Sharing in Nigeria

The risk of cyber crimes continues to increase as more Nigerians continue to adopt digital and online tools and services. However, we do not know enough about citizens’ understanding of cybersecurity behaviours and habits. In this paper, we explored the cybersecurity behaviours of Nigerians using a mixed-methods approach to understand how citizens stay safe online. Using a survey, we collected data (n = 208) on how citizens protect themselves online and where they get cybersecurity advice from. We then further explored the reported behaviours using semi-structured interviews (n = 22). We found that Nigerian citizens discussed cybersecurity incidents openly and shared tips and advice with peers through social media and through broadcasts on messaging platforms. We discovered that this has resulted in relatively high adoption rates for protective technologies like 2FA, particularly on WhatsApp. However, we also report how the adoption of 2FA on one account did not necessarily lead to enabling it on other accounts and how some citizens were being socially engineered to bypass those 2FA protections. Finally, we discuss some recommendations for how tools could provide more information to improve users’ understanding of both security threats and the countermeasures the tools offer.

James Nicholson, Opeyemi Dele Ajayi, Kemi Fasae, Boniface Kayode Alese
Trust in Facial Recognition Systems: A Perspective from the Users

High-risk artificial intelligence (AI) systems are those that can endanger the fundamental rights of individuals. Due to their complex characteristics, users often misjudge their risks, trusting too little or too much. To further understand trust from the users’ perspective, we investigate what factors affect their propensity to trust Facial Recognition Systems (FRS), a high-risk AI, in Mozambique. The study uses mixed methods, with a survey (N = 120) and semi-structured interviews (N = 13). The results indicate that users’ perceptions of the FRS’ robustness and principles of use affect their propensity to trust it. This relationship is moderated by external issues and by how the system attributes are communicated. The findings from this study shed light on aspects that should be addressed when developing AI systems to ensure adequate levels of trust.

Gabriela Beltrão, Sonia Sousa, David Lamas

Data Physicalisation and Cross-Device

Frontmatter
Comparing Screen-Based Version Control to Augmented Artifact Version Control for Physical Objects

Unless physical objects are mirrored by digital twins, their iterative development cannot be easily managed in version control systems. However, physical content could also benefit from versioning for structured work and collaborative use, thereby increasing parity between digital and physical design. Hence, it needs to be investigated what kind of system is most suitable for supporting version control of physical objects. Focusing on the visualization of differences between states of a physical artifact, two systems were compared against each other in a lab study: a screen-based solution optimized for 3D models as the baseline, and an approach that augments a physical artifact with digital information as the hypothesis. Our results indicate that the Augmented Artifact system is superior in task completion time but scores a lower usability rating than the baseline. Based on the results, we further provide design considerations for building a physical object version control system.

Maximilian Letter, Marco Kurzweg, Katrin Wolf
EmoClock: Communicating Real-Time Emotional States Through Data Physicalizations

Expressive interfaces that communicate human emotional state (e.g., level of arousal) are beneficial to many applications. In this work, we use a research-through-design approach to learn about the challenges and opportunities involved in physicalizing emotional data derived from biosignals in real-time. We present EmoClock, a physicalization that uses a clock as a metaphor to communicate arousal and valence derived from biosignal data and lessons learned from its evaluation.

Dennis Peeters, Champika Ranasinghe, Auriol Degbelo, Faizan Ahmed
Extending User Interaction with Mixed Reality Through a Smartphone-Based Controller

A major concern in mixed reality (MR) environments is supporting intuitive and precise user interaction. Various modalities have been proposed and used, including gesture, gaze, voice, hand recognition, and even special devices, i.e., external controllers. However, these modalities may often feel unfamiliar and physically demanding to end-users, leading to difficulties and fatigue. One possible solution worth investigating further is to use an everyday object, such as a smartphone, as an external device for interacting with MR. In this paper, we present the design of a framework for developing an external smartphone controller to extend user input in MR applications, which we then use to implement a new interaction modality: a tap on the phone. We also report findings of a user study (n = 24) examining the performance and user experience of the suggested input modality through a comparative evaluation task. The findings suggest that incorporating a smartphone as an external controller has potential for enhancing user interaction in MR tasks requiring high precision, and they highlight the value of providing alternative means of user input in MR applications depending on the task at hand and the personalization needs of the end-user.

Georgios Papadoulis, Christos Sintoris, Christos Fidas, Nikolaos Avouris
Fitts’ Throughput Vs Empirical Throughput: A Comparative Study

Every time a user taps on an element on a screen, she provides some “information”. Classically, Fitts’ law accounts for the speed-accuracy trade-off in this operation, and Fitts’ throughput provides the “rate of information transfer” from the human to the device. However, Fitts’ throughput is a theoretical construct, and it is difficult to interpret in the practical design of interfaces. Our motivation is to compare this theoretical rate of information transfer with the empirical values achieved in typical, realistic pointing tasks. To do so, we developed four smartphone-based interfaces - a 1D and a 2D interface for a typical Fitts’ study, and a 1D and a 2D interface for an empirical study. In the Fitts’ study, participants touched the target bar or circle as quickly as possible. In the empirical study, participants typed seven 10-digit phone numbers ten times each. We conducted a systematic, within-subjects study with 20 participants and report descriptive statistics for the Fitts’ throughput and empirical throughput values. We also carried out statistical significance tests, with the following results. As expected, the Fitts’ throughput for the 1D task was significantly higher than the empirical throughput for the 1D number-typing task. Surprisingly, the difference was in the opposite direction for the 2D tasks. Further, we found that the throughputs for both 2D tasks were higher than their 1D counterparts, which is also an unusual result. We compare our values with those reported in key Fitts’ law literature and propose potential explanations for these surprises, to be evaluated in future research.
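For context, the throughput discussed in this abstract conventionally follows the Shannon formulation of Fitts’ law; the sketch below is standard background, not a formulation taken from the paper itself:

```latex
% Fitts' law (Shannon formulation):
%   MT : movement time,  D : distance to target,  W : target width
MT = a + b \cdot ID, \qquad ID = \log_2\left(\frac{D}{W} + 1\right)
% Throughput is typically computed from the effective index of
% difficulty ID_e, where the effective width W_e is derived from the
% observed spread of selection endpoints (W_e = 4.133 \cdot SD_x):
TP = \frac{ID_e}{MT}, \qquad ID_e = \log_2\left(\frac{D}{W_e} + 1\right)
```

The empirical throughput the authors contrast with this is measured from realistic tasks (phone-number typing) rather than from controlled target acquisition.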

Khyati Priya, Anirudha Joshi

Eye-Free, Gesture Interaction and Sign Language

Frontmatter
Developing and Evaluating a Novel Gamified Virtual Learning Environment for ASL

The use of sign language is a highly effective way of communicating with individuals who experience hearing loss. Despite extensive research, many learners find traditional methods of learning sign language, such as web-based question-answer methods, to be unengaging. This has led to the development of new techniques, such as the use of virtual reality (VR) and gamification, which have shown promising results. In this paper, we describe a gamified immersive American Sign Language (ASL) learning environment that uses the latest VR technology to gradually guide learners from numeric to alphabetic ASL. Our hypothesis is that such an environment would be more engaging than traditional web-based methods. An initial user study showed that our system scored highly in some aspects, especially the hedonic factor of novelty. However, there is room for improvement, particularly in the pragmatic factor of dependability. Overall, our findings suggest that the use of VR and gamification can significantly improve engagement in ASL learning.

Jindi Wang, Ioannis Ivrissimtzis, Zhaoxing Li, Yunzhan Zhou, Lei Shi
Effects of Moving Speed and Phone Location on Eyes-Free Gesture Input with Mobile Devices

Using smartphones while moving is challenging and can be dangerous. Eyes-free input gestures can provide a means to use smartphones without requiring users’ visual attention. In this study, we investigated the effect of different moving speeds (standing, walking, or jogging) and different phone locations (held freely in the hand, or placed inside a shoulder bag) on eyes-free input gestures with smartphones. Our results from 12 male participants showed that gesture entry duration is not affected by moving speed or phone location; however, other gesture features, such as length, height, width, area, and phone orientation, are mostly affected by both. Thus, eyes-free gesture features vary significantly as the user’s environmental factors, such as moving speed or phone location, change, and should be taken into account by designers.

Milad Jamalzadeh, Yosra Rekik, Laurent Grisoni, Radu-Daniel Vatavu, Gualtiero Volpe, Alexandru Dancu
Hap2Gest: An Eyes-Free Interaction Concept with Smartphones Using Gestures and Haptic Feedback

Smartphones are used in different contexts, including scenarios where the visual and auditory modalities are limited (e.g., walking or driving). In this context, we introduce a new interaction concept, called Hap2Gest, that can issue commands and retrieve information, both eyes-free. It first uses a gesture as input for command invocation; output information is then retrieved through haptic feedback perceived via an output gesture drawn by the user. We conducted an elicitation study with 12 participants to determine users’ preferences for these gestures and the vibration patterns for 25 referents. Our findings indicate that users tend to use the same gesture for input and output, and that there is a clear relationship between the type of gestures and vibration patterns users suggest and the type of output information. We show that the agreement rate for the gesture’s speed profile is significantly higher than that for the gesture’s shape, and that the speed profile can be used by the recognizer when the shape agreement rate is low. Finally, we present a complete set of user-defined gestures and vibration patterns and address the gesture recognition problem.
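The agreement rates compared in this abstract are conventionally computed with the Vatavu-Wobbrock agreement-rate formula used in gesture elicitation studies; the following is standard background, not a definition given in the abstract itself:

```latex
% Agreement rate for a referent r:
%   P   : the multiset of all gesture proposals elicited for r
%   P_i : the groups of identical proposals within P
AR(r) = \frac{|P|}{|P|-1} \sum_{P_i \subseteq P} \left( \frac{|P_i|}{|P|} \right)^2 - \frac{1}{|P|-1}
```

A higher AR(r) means participants converged on the same proposal for that referent; here the measure is applied separately to gesture shapes and to gesture speed profiles.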

Milad Jamalzadeh, Yosra Rekik, Alexandru Dancu, Laurent Grisoni
User-Centered Evaluation of Different Configurations of a Touchless Gestural Interface for Interactive Displays

Approaches for improving the user experience when interacting with touchless displays have been proposed, such as using activation gestures and representing users as avatars in real-time. However, the novelty of such approaches may hinder users’ natural interaction behavior bringing challenges such as ease of use. In this paper, we investigate how the presence of avatars and their configurations, the usage of activation gestures, and the arrangement of interactive tiles in a touchless visual interface impact users’ experience, usability and task performance. We also compare users’ willingness to promote the interaction setup, perceived task difficulty, and time consumed to perform four different tasks in each configuration. We found that using a squared arrangement of elements, adopting activation gestures to trigger actions, and showing a moving avatar, resulted in the highest perceived usability and user experience, also reducing errors, task completion time, and perceived task difficulty. Our findings support the design of interactive displays to ensure high usability and user experience.

Vito Gentile, Habiba Farzand, Simona Bonaccorso, Davide Rocchesso, Alessio Malizia, Mohamed Khamis, Salvatore Sorce

Haptic Interaction

Frontmatter
Assignment of a Vibration to a Graphical Object Induced by Resonant Frequency

This work aims to provide tactile feedback when touching elements on everyday surfaces using their resonant frequencies. We used a remote speaker to set a thin wooden surface into vibration, providing haptic feedback when a small graphical fly glued onto the board was touched. Participants assigned the vibration to the fly instead of the board it was glued on. We systematically explored when this assignment illusion works best. The results indicate that additional sound, as well as vibration lasting as long as the touch, are essential factors for the haptic feedback to be assigned to the touched graphical object. With this approach, we contribute to ubiquitous and calm computing by showing that resonant frequencies can provide vibrotactile feedback for images on thin everyday surfaces using only a minimum of hardware.

Marco Kurzweg, Simon Linke, Yannick Weiss, Maximilian Letter, Albrecht Schmidt, Katrin Wolf
GuidingBand: A Precise Tactile Hand Guidance System to Aid Visual Perception

Computerised guidance systems can help alleviate tedious everyday tasks such as identifying a desired object in a collection of similar objects. Such guidance systems can prove useful as microinteractions if they are made accessible as a consumer wearable that delivers tactile feedback. We designed a wrist-worn tactile guidance system called GuidingBand that provides vibrational cues to help the user pick visual targets out of an array. We conducted two studies to evaluate it, both involving visual search tasks in which targets were presented to users on a screen. In study 1, we identified the error rate of our guidance system. We presented users (N = 20) with arrays of identical, square targets to pick from, progressively reduced the target sizes, and evaluated the error rate for each size. Notably, we observed a 4% error rate at a target size of 10 mm. In study 2, we compared the error rate of the guidance system with and without the help of human visual perception in a visual search task. We constructed a task that involved showing users an array of rectangles varying only in length and asking them to identify the target previously shown to them. Users (N = 13) made the fewest errors when identifying targets with tactile guidance alone, followed by guidance and perception combined, and then perception alone. Surprisingly, instead of improving the precision of the users’ performance, their visual perception in fact degraded it.

Atish Waghwase, Anirudha Joshi
Mid-air Haptic Cursor for Physical Objects

We investigate whether mid-air tactile stimuli generated using ultrasonic arrays can be used as a haptic cursor for physical objects. We combined an ultrasonic array and an interactive haptic map into one setup. An evaluation with 15 participants showed that the method is efficient for guiding users’ hands to physical objects – miniatures of room equipment. The average error rate was 14.4%, and the best participant achieved a 5.1% error rate. Our in-depth analysis provided insights into issues with the method, such as signal reflections, user-induced interference, and difficulty distinguishing physical objects that are too close together.

Miroslav Macík, Meinhardt Branig
Stress Embodied: Developing Multi-sensory Experiences for VR Police Training

VR applications primarily rely on audio-visual stimuli, limiting the sense of immersion. Multi-sensory stimuli show promise in enhancing presence, realistic behavior, and overall experience. Existing approaches are either stationary or wearable, and movement-intensive. Multi-user VR police training requires a mobile device for intensive multi-sensory stimuli. This paper presents the design and development of a mobile platform for multi-sensory feedback, introducing heat, wind, mist, and pain to improve immersion. Preliminary evaluations indicate promising effects on stress in VR. The paper concludes with lessons learned for designing multi-sensory experiences in police VR training.

Jakob Carl Uhl, Georg Regal, Michael Gafert, Markus Murtinger, Manfred Tscheligi

Healthcare Applications and Self-Monitoring

Frontmatter
Co-designing an eHealth Solution to Support Fibromyalgia Self-Management

Fibromyalgia is a rheumatic condition that causes a wide range of symptoms, such as pain, fatigue, attention and concentration deficits, and sleep disorders. Guidelines recommend a combination of pharmacological and non-pharmacological approaches, such as physiotherapy, emphasizing the relevance of the latter as first-line therapy. Usually, patients have difficulties in self-managing their condition. We designed an eHealth solution based on a mobile application that allows people with fibromyalgia to self-manage their condition and perform hybrid sessions with physiotherapists. The solution was created through a co-design process, in which patients and physiotherapists were involved from start to finish, following the design thinking methodology. The paper also includes a preliminary user study, whose positive and encouraging results we attribute to the co-design process.

Pedro Albuquerque Santos, Rui Neves Madeira, Hugo Ferreira, Carmen Caeiro
Designing Remote Patient Monitoring Technologies for Post-operative Home Cancer Recovery: The Role of Reassurance

While cancer patients are recovering in hospital after major surgery, they are continually monitored by clinical teams. However, once discharged, they spend their remaining recovery isolated at home with minimal contact with the clinical team. The first 30 days after returning home from surgery are a critical and challenging period for patients, not only emotionally, practically, and mentally; this period also poses a real danger of further complications, readmission, and potentially surgery-related death. Remote Patient Monitoring (RPM) systems are extremely promising, allowing clinicians to care for and support patients remotely; however, although these technologies are mature, the level of adoption by patients is still very low. To address this challenge, we focus on identifying and understanding patients’ concerns and requirements when adopting a novel RPM technology. We conducted a series of iterative Patient Public Involvement workshops following a user-centred approach. We explored various scenarios based on prototypes and facilitated reflective discussions with cancer patients to identify existing barriers preventing them from adopting RPM technologies. The workshops revealed a wide range of concerns expressed by participants, categorised into five themes; lack of reassurance emerged as the central theme during the 30-day post-operative, post-discharge period. In conclusion, reassurance proves to be central in engaging patients and making RPM technologies fit for purpose, potentially leading to higher levels of adoption and improvements in health outcomes and quality of life.

Constantinos Timinis, Jeremy Opie, Simon Watt, Pramit Khetrapal, John Kelly, Manolis Mavrikis, Yvonne Rogers, Ivana Drobnjak
SELFI: Evaluation of Techniques to Reduce Self-report Fatigue by Using Facial Expression of Emotion

This paper presents the SELFI framework, which uses information from a range of indirect measures to reduce the burden on users of context-sensitive apps of self-reporting their mental state. In this framework, we implement multiple combinations of facial emotion recognition tools (Amazon Rekognition, Google Vision, Microsoft Face) and feature reduction approaches to demonstrate the versatility of the framework for facial-expression-based emotion estimation. An evaluation of the framework involving 20 participants in a 28-week in-the-wild study reveals that the proposed framework can estimate emotion accurately from facial images (83% and 81% macro-F1 for valence and arousal, respectively), with an average reduction of 10% in self-report burden. Moreover, we propose a solution to detect performance drops of the model developed by SELFI at runtime, without the use of ground-truth emotion, achieving accuracy improvements of 14%.

Salma Mandi, Surjya Ghosh, Pradipta De, Bivas Mitra
Usability and Clinical Evaluation of a Wearable TENS Device for Pain Management in Patients with Osteoarthritis of the Knee

An evaluation of the usability and clinical benefits of a Transcutaneous Electrical Nerve Stimulation (TENS) device, offered as an adjunct to standard care for thirty patients with Osteoarthritis (OA) of the knee, was carried out. A four-stage approach was adopted for this evaluation, using a mix of surveys, semi-structured interviews, user diaries, and patient-reported outcome measures (PROMs) collected over a three-month period. The findings of the study demonstrate that this combined approach generates a richer picture of patient experience while using a TENS device to manage pain at home. The study also points to how such an approach, which captures insights into the user’s experience alongside PROMs, can explain the differences between patients who adopt and benefit from these devices and those who do not.

Fatma Layas, Billy Woods, Sean Jenkins
Backmatter
Metadata
Title
Human-Computer Interaction – INTERACT 2023
edited by
José Abdelnour Nocera
Marta Kristín Lárusdóttir
Helen Petrie
Antonio Piccinno
Marco Winckler
Copyright year
2023
Electronic ISBN
978-3-031-42280-5
Print ISBN
978-3-031-42279-9
DOI
https://doi.org/10.1007/978-3-031-42280-5