Human-Computer Interaction – INTERACT 2021
18th IFIP TC 13 International Conference, Bari, Italy, August 30 – September 3, 2021, Proceedings, Part I
- 2021
- Book
- Edited by
- Carmelo Ardito
- Rosa Lanzilotti
- Alessio Malizia
- Helen Petrie
- Antonio Piccinno
- Giuseppe Desolda
- Kori Inkpen
- Book Series
- Lecture Notes in Computer Science
- Publisher
- Springer International Publishing
About this book
The five-volume set LNCS 12932-12936 constitutes the proceedings of the 18th IFIP TC 13 International Conference on Human-Computer Interaction, INTERACT 2021, held in Bari, Italy, in August/September 2021.
The total of 105 full papers presented together with 72 short papers and 70 other papers in these books was carefully reviewed and selected from 680 submissions. The contributions are organized in topical sections named:
Part I: affective computing; assistive technology for cognition and neurodevelopmental disorders; assistive technology for mobility and rehabilitation; assistive technology for the visually impaired; augmented reality; computer supported cooperative work.
Part II: COVID-19 & HCI; crowdsourcing methods in HCI; design for automotive interfaces; design methods; designing for smart devices & IoT; designing for the elderly and accessibility; education and HCI; experiencing sound and music technologies; explainable AI.
Part III: games and gamification; gesture interaction; human-centered AI; human-centered development of sustainable technology; human-robot interaction; information visualization; interactive design and cultural development.
Part IV: interaction techniques; interaction with conversational agents; interaction with mobile devices; methods for user studies; personalization and recommender systems; social networks and social media; tangible interaction; usable security.
Part V: user studies; virtual reality; courses; industrial experiences; interactive demos; panels; posters; workshops.
The chapter ‘Stress Out: Translating Real-World Stressors into Audio-Visual Stress Cues in VR for Police Training’ is open access under a CC BY 4.0 license at link.springer.com.
The chapter ‘WhatsApp in Politics?! Collaborative Tools Shifting Boundaries’ is open access under a CC BY 4.0 license at link.springer.com.
Table of contents
Frontmatter

Keynote Speeches

Frontmatter
Human-Centered AI: A New Synthesis
Ben Shneiderman

Abstract: Researchers, developers, business leaders, policy makers and others are expanding the technology-centered scope of Artificial Intelligence (AI) to include Human-Centered AI (HCAI) ways of thinking. This expansion from an algorithm-focused view to a human-centered perspective can shape the future of technology so as to better serve human needs. Educators, designers, software engineers, product managers, evaluators, and government agency staffers can build on AI-driven technologies to design products and services that make life better for their users. These human-centered products and services will enable people to better care for each other, build sustainable communities, and restore the environment.
Multisensory Experiences: Where the Senses Meet Technology
Marianna Obrist

Abstract: Multisensory experiences, that is, experiences that involve more than one of our senses, are part of our everyday life. We often tend to take them for granted, at least when our different senses function normally (normal sight functioning) or are corrected to normal (using glasses). However, closer inspection of any, even the most mundane, experience reveals the remarkable sensory world in which we live. While we have built tools, experiences and computing systems that have played to the human advantages of hearing and sight (e.g., signage, modes of communication, visual and musical arts, theatre, cinema and media), we have long neglected the opportunities around touch, taste, or smell as interface/interaction modalities. In this keynote I will share my vision for the future of computing/HCI and what role touch, taste, and smell can play in it.
Nicolas Cage is the Center of the Cybersecurity Universe
Luca Viganò

Abstract: Nicolas Cage: prolific actor, Oscar winner, but also master of ham acting, star of many excellent films and of at least as many really bad ones. A conspicuous number of these films directly revolve around cybersecurity or can be used to explain cybersecurity notions in such a way that they can be understood by non-experts. This paper is part of a research program that I have been carrying out on how to use films and other artworks to explain cybersecurity notions, and thus provide a form of “cybersecurity show and tell” (in which telling, i.e., explaining notions in a formal, technical way, can be paired with showing through visual storytelling or other forms of storytelling). Here, I focus on 15 of Cage’s films in which he has played a wide variety of roles: a stunt motorcyclist, a paroled ex-con and former U.S. Ranger, a U.S. Marine in World War II, an agent of an authoritarian agency, a retired master car thief, a criminal mastermind, an illegal arms dealer, a master sorcerer, a treasure hunter historian, a Batman-like superhero, Spider-Man Noir, Superman, an NSA employee, and a hacker star-nosed mole.
Future Digital Challenges: Social-Emotional Skills as Critical Enablers for Good Technical Design Work
Geraldine Fitzpatrick

Abstract: Technology is becoming increasingly entangled in all aspects of our lives, often with unintended negative consequences. While there is increasing recognition of the ethical and value-based aspects of our technology design work, and while there is increasing support for the multidisciplinary collaborations needed to address current and future challenges, little focus has been put on the skills needed by the people practically engaging in this work. Good technical and design skills are necessary but not sufficient. We also need good social-emotional-ethical skills, with implications for education, collaborations, and leadership development.
POISE: A Framework for Designing Perfect Interactive Systems with and for Imperfect People
Philippe Palanque

Abstract: The operator is frequently considered the main source of vulnerability in command and control systems; for example, in a 2006 survey 79% of fatal accidents in aviation were attributed to “human error.” Beyond the case of command and control systems, users’ faults occur not only at use time but also during the design and development of systems. Following Avizienis et al.’s taxonomy of faults, a human-made error can be characterized as the operator’s failure to deliver services while interacting with the interactive system. Non-human-made errors, called natural faults, may occur during development or may set the interactive system, as well as its users, into an error state during use. Focusing on the specificities of interactive systems, this paper presents a comprehensive description of faults covering both development and operation phases. In correspondence with this taxonomy, we present mechanisms to avoid, remove, tolerate and mitigate faults in order to design and develop what we call “perfect” interactive systems, taking into account the organization, the interactive system, the environment and the people operating them. We define an interactive system as perfect when it blends multiple and diverse properties such as usability, security, user experience, dependability, learnability, resilience … We present multiple concrete examples, from aviation and other domains, of faults affecting socio-technical systems and of associated fault-tolerance mechanisms.
Affective Computing

Frontmatter
A Field Dependence-Independence Perspective on Eye Gaze Behavior within Affective Activities
Christos Fidas, Marios Belk, Christodoulos Constantinides, Argyris Constantinides, Andreas Pitsillides

Abstract: Evidence suggests that human cognitive differences affect users’ visual behavior within various tasks and activities. However, a human cognitive processing perspective on the interplay between visual and affective aspects remains understudied to date. In this paper, we aim to investigate this relationship by adopting an accredited cognitive style framework (Field Dependence-Independence – FD-I) and provide empirical evidence on main interaction effects between human cognition and emotional processing on eye gaze behavior. To do so, we designed and implemented an eye tracking study (n = 22) in which participants were initially classified according to their FD-I cognitive processing characteristics, and were then exposed to a series of images that triggered specific emotional valence. Analysis of the results shows that affective images had a different effect on FD and FI users in terms of visual information exploration time and comprehension, which was reflected in eye gaze metrics. The findings highlight a hidden and rather unexplored effect of human cognition and emotions on eye gaze behavior, which could lead to a more holistic and comprehensive approach in affective computing.
A Systematic Review of Thermal and Cognitive Stress Indicators: Implications for Use Scenarios on Sensor-Based Stress Detection
Susana Carrizosa-Botero, Elizabeth Rendón-Vélez, Tatiana A. Roldán-Rojo

Abstract: A systematic literature review was conducted aiming to identify the characteristics of physiological signals in two types of stress states – a single moderate thermal stress state, and moderate thermal stress combined with a cognitive stress state. Results of the review serve as a backdrop to envision different scenarios for the detection of these stress states in everyday situations, such as in schools, workplaces and residential settings, where the use of interactive technologies is commonplace. Stress detection is one of the most studied areas of affective computing. However, current models developed for stress detection only focus on recognizing whether a person is stressed, not on identifying stress states. It is essential to differentiate them in order to implement strategies to minimize the source of stress by designing different interactive technologies. Wearables are commonly used to acquire physiological signals, such as heart rate and respiratory rate. Analysis of these signals can support a user in deciding to take action, or make an automatic system undertake certain strategies to counteract the sources of stress. These technologies can be designed for educational, work or medical environments. Our future work is to validate these use scenarios systematically to enhance the design of such technologies.
Emotion Elicitation Techniques in Virtual Reality
Radiah Rivu, Ruoyu Jiang, Ville Mäkelä, Mariam Hassib, Florian Alt

Abstract: In this paper, we explore how state-of-the-art methods of emotion elicitation can be adapted to virtual reality (VR). We envision that emotion research could be conducted in VR for various benefits, such as switching study conditions and settings on the fly, and conducting studies using stimuli that are not easily accessible in the real world, such as those used to induce fear. To this end, we conducted a user study (N = 39) in which we measured how different emotion elicitation methods (audio, video, image, autobiographical memory recall) perform in VR compared to the real world. We found that the elicitation methods produce largely comparable results in the virtual and real world, but overall participants experience slightly stronger valence and arousal in VR. Emotions faded over time following the same pattern in both worlds. Our findings are beneficial to researchers and practitioners studying or using emotional user interfaces in VR.
Expanding Affective Computing Paradigms Through Animistic Design Principles
Arjun Rajendran Menon, Björn Hedin, Elina Eriksson

Abstract: Animistic and anthropomorphic principles have long been investigated alongside affective computing in both HCI and HRI research, to reduce user frustration and create more emotive yet relatable devices, robots, products and artefacts. Yet such artefacts and research have mainly taken user-centric perspectives, with the animistic characteristics localised to single objects. In this exploratory paper, we take these principles in a new direction by attempting to invoke animistic characteristics of a room or a space itself. Designing primarily for the space itself, rather than for the user or a single product, allows us to create new interactions and narratives that can induce animism and empathy for the space in users. This led to the creation of a prototype space, which we use to investigate how users approach, interact with and behave in such a space, yielding several insights into user behaviour that can inform further studies and generate new interaction perspectives. We conclude by discussing the potential of such spaces for developing new strategies for behaviour change and HCI.
Assistive Technology for Cognition and Neurodevelopmental Disorders

Frontmatter
AI-Based Clinical Decision Support Tool on Mobile Devices for Neurodegenerative Diseases
Annamaria Demarinis Loiotile, Vincenzo Dentamaro, Paolo Giglio, Donato Impedovo

Abstract: Recently, the role of AI in the development of eHealth has become increasingly ambitious, since AI is enabling the development of whole new healthcare areas. In many cases, AI offers the possibility to support patient screening and monitoring through low-cost, non-invasive tests. One of the most relevant sectors in which a great contribution from AI is expected is that of neurodegenerative diseases, which represent one of the most important pathologies in Western countries, with very serious clinical, as well as social and economic, consequences. In this context, AI certainly represents an indispensable tool for effectively addressing early diagnosis as well as the monitoring of patients suffering from various neurodegenerative diseases. To achieve these results, AI tools must be made available in test applications on mobile devices that are also easy to use by a large part of the population. In this sense, aspects related to human-machine interaction are of paramount relevance for the diffusion of these solutions. This article presents a mobile device application based on artificial intelligence tools for the early diagnosis and monitoring of patients suffering from neurodegenerative diseases, and illustrates the results of specific usability tests that highlight the strengths but also the limitations in the interaction with application users. Some concluding remarks address the current limitations of the proposed solution.
COBO: A Card-Based Toolkit for Co-Designing Smart Outdoor Experiences with People with Intellectual Disability
Giulia Cosentino, Diego Morra, Mirko Gelsomini, Maristella Matera, Marco Mores

Abstract: This paper presents a co-design toolkit that aims to involve people with Intellectual Disability (ID) in the ideation of smart outdoor experiences. We conducted a series of workshops with young adults with ID and special-education professionals of a social-care center, to identify the physical and digital experiences that could favour reflection and engagement of the addressed users, and empower them in solving daily-life challenges. Multiple refinements of iterative prototypes led to the design of an interactive toolkit, COBO, that integrates inspirational cards, interactive smart objects and multimedia contents to guide users during the conception of novel ideas. This paper illustrates the main insights of the conducted workshops and the design of COBO. It also reports on a preliminary evaluation we conducted to validate some design choices for the integration of physical and digital components within COBO.
Digital Producers with Cognitive Disabilities: Participatory Video Tutorials as a Strategy for Supporting Digital Abilities and Aspirations

Petko Atanasov Karadechev, Anne Marie Kanstrup, Jacob Gorm Davidsen

Abstract: This paper presents ‘participatory video tutorials’ – a strategy developed to support the digital empowerment of young people living with cognitive disabilities. The support strategy complements and expands dominant perspectives on the target group, which is often seen as disabled and in need of assistive technology, by foregrounding the young participants’ digital abilities and facilitating them as active producers of digital content, which already plays a major role in their everyday social interactions. We present the background and framework for participatory video tutorials and the results from staging digital production with sixteen young participants. Empirically, the results contribute perspectives on this target group as producers (vs. users) with abilities (vs. disabilities). Methodologically, the results outline four principles (socio-technical belonging, technical accessibility, elasticity, and material reusability) that can assist HCI researchers, professionals, and caretakers in their efforts to support the target group in digital production. These principles are guidelines for a participatory staging, driven by the young people’s motivation for self-expression. The study and its results contribute an example and a strategy for working toward digital inclusion by engaging a marginalized target group in digital production.
Dot-to-Dot: Pre-reading Assessment of Literacy Risk via a Visual-Motor Mechanism on Touchscreen Devices
Wonjeong Park, Paulo Revés, Augusto Esteves, Jon M. Kerridge, Dongsun Yim, Uran Oh

Abstract: While early identification of children with dyslexia is crucial, it is difficult to assess the literacy risks of these children before they learn to read. In this study, we expand early work on Dot-to-Dot (DtD), a non-linguistic visual-motor mechanism aimed at facilitating the detection of potential reading difficulties in children at pre-reading age. To investigate the effectiveness of this approach on touchscreen devices, we conducted a user study with 33 children and examined their task performance logs as well as their language test results. Our findings suggest that there is a significant correlation between the DtD task and a series of language tests. We conclude the work by suggesting different ways in which DtD could be seamlessly embedded into everyday mobile use cases.
Exploring the Acceptability of Graphical Passwords for People with Dyslexia
Polina Evtimova, James Nicholson

Abstract: Alphanumeric passwords are still the most common form of user authentication despite well-known usability issues. These issues, including weak composition and poor memorability, have been well established across different user groups, yet users with dyslexia have not been studied despite making up approximately 10% of the population. In this paper, we focus on understanding the user authentication experiences of people with dyslexia (PwD) in order to better understand their attitudes towards a graphical password system that may provide a more inclusive experience. Through interactive interviews, participants were encouraged to try three different knowledge-based authentication systems (PIN, password, and graphical password) and then discuss their strategies behind code composition. We found that PwD employed potentially dangerous workarounds when composing passwords, in particular an over-reliance on pattern-based composition. We report on how PwD do not immediately see the benefits of graphical passwords, but upon their experiencing the mechanism we see opportunities for more inclusive authentication.
IntraVox: A Personalized Human Voice to Support Users with Complex Needs in Smart Homes
Ana-Maria Salai, Glenda Cook, Lars Erik Holmquist

Abstract: We present IntraVox, a novel voice-based interaction system that supports users with complex needs by introducing a highly personalized, human voice command between smart home devices and sensors. Voice-enabled personal assistants (e.g., Amazon Alexa, Google Home) can be used to control smart home devices and increase quality of life. However, people with conditions such as dementia or learning disabilities and autism (LDA) can encounter verbal and memory problems in communicating with the assistants. IntraVox supports these users by verbally sending smart home control commands, based on the sensor data collected, to a voice-enabled personal assistant using a human voice. Over 12 months, we conducted 7 workshops and 9 interviews with 79 participants from multiple stakeholder categories – people with LDA, older and frail people, and social and health care organisations. The results show that our solution has the potential to support people with complex needs and increase independence. Moreover, IntraVox could have the added value of reinforcing the learning of specific commands and making visible the inner workings of a home automation system, by giving users a clear cause-and-effect explanation of why certain events occur.
LIFT: An eLearning Introduction to Web Search for Young Adults with Intellectual Disability in Sri Lanka
Theja Kuruppu Arachchi, Laurianne Sitbon, Jinglan Zhang, Ruwan Gamage, Priyantha Hewagamage

Abstract: For users with intellectual disability who are new to the Internet, mastering web search requires both conceptual understanding and supported practice. eLearning offers an appropriate environment for both self-paced learning and authentic practice, since it can simulate the types of interactions encountered during web searching. Yet eLearning approaches have not been explored for supporting young adults with intellectual disability (YAWID) in searching the web or in learning to do so. This paper presents a study that examines the experiences of 6 YAWID learning to search the web with LIFT, an eLearning tool designed with a specific focus on supporting memory through mental models and practice exercises. The content and approach of LIFT are tailored to the Sri Lankan context, incorporating the use of the Sinhala virtual keyboard for Google search and culturally relevant metaphors. We collected a range of observations as the participants used the eLearning tool, subsequently independently searched for information online, and finally described the Google search process to a teacher. We surveyed their understanding through drawings they created about the Internet and web search. The findings establish the significant potential for eLearning to engage and support YAWID in learning web search, with a realistic environment for safely practicing web search and navigation. This study contributes to human-computer interaction by integrating three aspects of eLearning: accessible interface, content, and interactive practice, while also accommodating web search in the Sinhala language. It acknowledges that understanding specific users is critical if interaction designs are to be accepted by their users.
Social Robots in Learning Experiences of Adults with Intellectual Disability: An Exploratory Study
Alicia Mitchell, Laurianne Sitbon, Saminda Sundeepa Balasuriya, Stewart Koplick, Chris Beaumont

Abstract: The use of social robots has the potential to improve learning experiences in life skills for adults with intellectual disability (ID). Current research on social robots in education has largely focused on how children with Autism Spectrum Disorder (ASD) interact with social robots, primarily without a tablet, with almost no research investigating how beneficial social robots can be in supporting learning for adults with ID. This research explores how interactions with a social humanoid robot can contribute to learning for communities of adults with ID, and how adults with ID want to engage with these robots. This exploratory study involved observation and semi-structured interviews of eleven participants with ID (in three groups, supported by their support workers) receiving information from a semi-humanoid social robot and interacting with the robot via its tablet. Two robot applications were developed to deliver content based on the participating disability support organization’s life skills curriculum for healthy lifestyle choices and exercise, considering a variety of modalities (visual, embodied, audio). The study identified four ways in which participants interact, and our findings suggest that both the physical presence of the robot and the support of the tablet play a key role in engaging adults with ID. Observation of participant interactions, both with the robot and with each other, shows that part of the robot’s value in learning was as a facilitator of communication.
- Title
- Human-Computer Interaction – INTERACT 2021
- Edited by
Carmelo Ardito
Rosa Lanzilotti
Alessio Malizia
Helen Petrie
Antonio Piccinno
Giuseppe Desolda
Kori Inkpen
- Copyright Year
- 2021
- Electronic ISBN
- 978-3-030-85623-6
- Print ISBN
- 978-3-030-85622-9
- DOI
- https://doi.org/10.1007/978-3-030-85623-6