2021 | Book

Human-Computer Interaction. Interaction Techniques and Novel Applications

Thematic Area, HCI 2021, Held as Part of the 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings, Part II

About this book

The three-volume set LNCS 12762, 12763, and 12764 constitutes the refereed proceedings of the Human Computer Interaction thematic area of the 23rd International Conference on Human-Computer Interaction, HCII 2021, which took place virtually in July 2021.

The total of 1276 papers and 241 posters included in the 39 HCII 2021 proceedings volumes was carefully reviewed and selected from 5222 submissions.

The 139 papers included in this HCI 2021 proceedings were organized in topical sections as follows:

Part I, Theory, Methods and Tools: HCI theory, education and practice; UX evaluation methods, techniques and tools; emotional and persuasive design; and emotions and cognition in HCI

Part II, Interaction Techniques and Novel Applications: Novel interaction techniques; human-robot interaction; digital wellbeing; and HCI in surgery

Part III, Design and User Experience Case Studies: Design case studies; user experience and technology acceptance studies; and HCI, social distancing, information, communication and work

Table of Contents

Frontmatter

Novel Interaction Techniques

Frontmatter
Performance Evaluation and Efficiency of Laser Holographic Peripherals

Virtual and voice recognition technologies have been at the forefront of HCI development in the last decade, and the tech world has embraced user experience in design and innovation. The way we communicate with computers has evolved over the years, moving from manual ways of passing information to the computer toward more virtual styles. Keyboards and mice have been the two most important tools for this job, and innovation has transitioned them from standard setups into virtual and holographic setups that take up minimal space. They are also part of green computing, which promotes recycling and less plastic and electronic waste. This paper explores the evolution of the keyboard and explains how the new technology works. It analyzes both the hardware and the software to observe the design and inner workings of two selected virtual devices: a virtual keyboard and the ODin holographic mouse. Several approaches are used to process the input, such as 3D optical ranging, and the paper details the mechanisms behind them. An important concern is whether these virtual devices are accurate enough in performance and durability compared with their standard counterparts. The paper discusses an experiment performed to determine user experience and accuracy based on monitored tasks that were repeated on the standard versions as well. Deep learning, recurrent neural networks in particular, is another important topic of this paper, given their ability to improve these devices.

Alexander Fedor, Mulualem Hailom, Talha Hassan, Vu Ngoc Phuong Dinh, Vuong Nguyen, Tauheed Khan Mohd
Using Real-Pen Specific Features of Active Stylus to Cope with Input Latency

Despite the growing quality of touch screen mobile devices, most of them still suffer from input latency. This paper presents a new way to cope with this problem for users who perform pointing tasks with an active stylus. It is based on real-pen specific features of an active stylus that can be obtained without any additional wearables. To assess latency compensation that uses orientation, tilt, and pressure values, two studies were conducted. The first study shows that using these features decreases prediction error by 2.5%, improves the distribution of the deviation angle from the target direction, and reduces the lateness and wrong-orientation side-effect metrics by 9.4% and 3.3%, respectively. The second study reveals that users perceive fewer visual side-effects with latency compensation that uses real-pen specific features of the active stylus. The obtained results demonstrate the effectiveness of utilizing orientation, tilt, and pressure to cope with input latency.

Roman Kushnirenko, Svitlana Alkhimova, Dmytro Sydorenko, Igor Tolmachov
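
For illustration, a minimal sketch of extrapolation-based latency compensation of the kind discussed above, with tilt and pressure exposed as additional predictor features; the lookahead, sample format, and function names are assumptions, not the authors' implementation:

```python
# Hypothetical sketch: constant-velocity extrapolation as a latency-compensation
# baseline, with stylus tilt/pressure available as extra model features.
import numpy as np

LOOKAHEAD_S = 0.050  # assumed display latency to compensate (50 ms)

def predict_pen_position(history):
    """history: array of rows [t, x, y, tilt_x, tilt_y, pressure]."""
    t, xy = history[:, 0], history[:, 1:3]
    # Velocity from the last few samples (least-squares slope).
    vx = np.polyfit(t, xy[:, 0], 1)[0]
    vy = np.polyfit(t, xy[:, 1], 1)[0]
    # Baseline: constant-velocity extrapolation over the latency window.
    pred = xy[-1] + LOOKAHEAD_S * np.array([vx, vy])
    # The paper's idea is to additionally feed orientation/tilt/pressure into
    # the predictor; here they are only exposed as features for such a model.
    features = history[-1, 3:6]
    return pred, features

samples = np.array([
    [0.000, 100.0, 200.0, 0.1, -0.2, 0.55],
    [0.008, 101.5, 202.0, 0.1, -0.2, 0.56],
    [0.016, 103.2, 204.1, 0.1, -0.2, 0.58],
])
print(predict_pen_position(samples))
```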
Comparing Eye Tracking and Head Tracking During a Visual Attention Task in Immersive Virtual Reality

The use of eye tracking (ET) and head tracking (HT) in head-mounted displays allows the study of a subject's attention in virtual reality environments, expanding the possibilities for experiments in areas such as health or consumer behavior research. ET is a more precise technique than HT, but many commercial devices do not include ET systems. One way to study visual attention is to segment the space into areas of interest (AoI). However, the ET and HT responses could be similar depending on the size of the studied area in the virtual environment. Therefore, understanding the differences between ET and HT based on AoI size is critical to enabling the use of HT to assess human attention. The purpose of this study was to compare ET and HT technologies through the study of multiple sets of AoI in an immersive virtual environment. To do so, statistical techniques were developed to measure the differences between the two technologies. This study found that with HT, an accuracy of 75.37% was obtained when the horizontal and vertical angular size of the AoIs was 25°. Moreover, the results suggest that horizontal movements of the head are much more similar to eye movements than vertical movements are. Finally, this work presents a guide for future researchers to measure the precision of HT against ET, considering the dimensions of the AoI defined in a virtual scenario.

Jose Llanes-Jurado, Javier Marín-Morales, Masoud Moghaddasi, Jaikishan Khatri, Jaime Guixeres, Mariano Alcañiz
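
As an illustration of the AoI-based comparison described above, a minimal sketch that tests whether a gaze or head direction falls inside a square AoI of a given angular size and scores ET/HT agreement; the yaw/pitch representation and helper names are assumptions:

```python
import numpy as np

def inside_aoi(direction, aoi_center, size_deg=25.0):
    """direction, aoi_center: (yaw, pitch) in degrees of the ray / AoI center."""
    dyaw = (direction[0] - aoi_center[0] + 180) % 360 - 180  # wrap to [-180, 180)
    dpitch = direction[1] - aoi_center[1]
    half = size_deg / 2.0
    return abs(dyaw) <= half and abs(dpitch) <= half

def et_ht_agreement(eye_dirs, head_dirs, aois, size_deg=25.0):
    """Fraction of samples where head and eye select the same AoI (or none)."""
    def hit(d):
        for i, center in enumerate(aois):
            if inside_aoi(d, center, size_deg):
                return i
        return -1  # no AoI hit
    hits_e = [hit(d) for d in eye_dirs]
    hits_h = [hit(d) for d in head_dirs]
    return float(np.mean([e == h for e, h in zip(hits_e, hits_h)]))
```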
Investigation of Motion Video Enhancement for Image-Based Avatars on Small Displays

We investigate a method for enhancing the motion video sequences of an image-based avatar so that the body motion can be perceived as natural on a small display. If some avatar motions are too small, then the users cannot perceive those motions when the avatar is viewed on a small display. In particular, the motion of an upright posture that the avatar uses when waiting to start interacting with the user is very small. In this paper, we enhance the motion of the upright posture so that the user naturally perceives the movement of the avatar as human-like, even on a small display. To do this, we use an existing method for phase-based video motion processing. This method allows us to control the amount of avatar movement using a pre-defined enhancement parameter. The results of our subjective assessment show that the users sometimes perceived the avatar’s motion as natural on a small display when the body sway motions of the avatar were appropriately enhanced to the extent that no significant noise was included.

Tsubasa Miyauchi, Wataru Ganaha, Masashi Nishiyama, Yoshio Iwai
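
The paper builds on an existing phase-based video motion processing method; as a simpler stand-in, the sketch below shows intensity-based (linear Eulerian-style) amplification, only to illustrate how a single enhancement parameter scales the perceived movement:

```python
import numpy as np

def amplify_motion(frames, alpha=4.0):
    """frames: (T, H, W) grayscale video as floats in [0, 1].
    Amplify each pixel's temporal deviation from its mean by alpha.
    The paper uses phase-based processing; this intensity-based version
    only illustrates the role of the enhancement parameter."""
    mean = frames.mean(axis=0, keepdims=True)
    return np.clip(mean + alpha * (frames - mean), 0.0, 1.0)
```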
Sound Symbolic Words as a Game Controller

We developed a game system driven by sound symbolic words given by a user. While most previous voice-interactive games focused on communication between a player and in-game characters by understanding the meaning of speech, our game system interprets the sounds of the voice as operable commands. The player's character can move around in a 2D side-scrolling platform game that runs on Unity, and its action is controlled by the degree of dynamicity and elasticity expressed in the sound symbolic words given by the user. We conducted experiments with 18 subjects to evaluate the difference in user experience between operation with a conventional game controller and the proposed method. An analysis of the questionnaires found that the proposed method can provide new experiences to users and can make the game more enjoyable, even though the unfamiliar control method increases the game's difficulty.

Yuji Nozaki, Shu Watanabe, Maki Sakamoto
Towards Improved Vibro-Tactile P300 BCIs

The vibro-tactile P300-based Brain-Computer Interface is an interesting tool for severely impaired patients who cannot communicate through muscular or visual pathways. In this study we present an improved tactile BCI for binary communication that reduces wrong answers by adding a threshold to the decision value required for a valid answer; otherwise, the BCI gives an indecisive answer to the question. The threshold is calculated using a statistical test on the EEG data recorded while the patient is asked the question. In total, 7 tactile stimulators were placed on different parts of the subject's body. We tested the new BCI with 4 healthy subjects; using the statistical test, they produced no wrong answers over the 10 questions asked of each participant. The spelling accuracy and information transfer rate with and without the statistical test are presented, as well as examples of event-related potentials.

Rupert Ortner, Josep Dinarès-Ferran, Danut-Constantin Irimia, Christoph Guger
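
A minimal sketch of the thresholded decision rule described above: answer only when the separation between "yes"-coded and "no"-coded trial scores is statistically reliable, otherwise return an indecisive answer. The use of a two-sample t-test on classifier scores is an assumption, not necessarily the study's exact statistic:

```python
import numpy as np
from scipy.stats import ttest_ind

def decide(scores_yes, scores_no, alpha=0.05):
    """scores_yes/scores_no: single-trial classifier scores for stimuli
    coding 'yes' and 'no'. Returns 'yes', 'no', or 'indecisive'."""
    stat, p = ttest_ind(scores_yes, scores_no)
    if p >= alpha:
        return "indecisive"  # evidence too weak for a valid answer
    return "yes" if np.mean(scores_yes) > np.mean(scores_no) else "no"
```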
Talking Through the Eyes: User Experience Design for Eye Gaze Redirection in Live Video Conferencing

In the post-corona era, more institutions are using videoconferencing (VC). However, people often cannot easily concentrate on a VC conversation. This is a problem rooted in the computer's architecture: because the web camera records a person looking at the monitor, participants appear to be staring elsewhere while talking. The problem stems from the loss of eye contact, a non-verbal element of face-to-face conversation. Much of the literature has approached this problem from a technical angle. This study instead presents eye gaze redirection (ER) guidelines in terms of user experience: a function for selecting the direction of the face, a function for selecting a 3D avatar face divided into four stages through morphing, and a guideline on the function of intentionally looking at the camera using a teleprompter.

Wooyeong Park, Jeongyun Heo, Jiyoon Lee
Evaluating the Accuracy and User Experience of a Gesture-Based Infrared Remote Control in Smart Homes

Enhancing user experience while satisfying basic expectations and needs is the most important goal in the design of assistive technical devices. As a contribution, the user experience with the SmartPointer, a novel hand-held gesture-based remote control for everyday use in the living environment, is being explored in comprehensive user tests. The concept and design of the SmartPointer exploit the user's familiarity with TV remotes, flashlights and laser pointers. The buttonless device emits both an infrared (IR) and a visible (VIS) laser beam and is designed to be universally and consistently used for a large variety of devices and appliances in private homes out of arm's reach. In this paper, the results of three user studies regarding recognition rates and usability are summarized. Study One was a mixed-method study in the pre-implementation stage with 20 older adults, gathering expectations towards a gesture-based remote control and exploring simple, quasi-intuitive controlling gestures. In Study Two, the acceptance and usability of a prototype of the SmartPointer remote control were verified with a group of 29 users from the target group, exploring the 8 most frequently used gestures from Study One. In Study Three, comprehensive gesture-recognition tests with an updated version of the remote were carried out with a group of 11 younger adults in various light conditions, postures and distances to the operated device. All three studies confirm the feasibility of the underlying principle, the usability of and satisfaction with the device among the participants, and the robustness of the technical solution, along with a high success rate of the recognition algorithm.

Heinrich Ruser, Susan Vorwerg, Cornelia Eicher, Felix Pfeifer, Felix Piela, André Kaltenbach, Lars Mechold
Detection of Finger Contact with Skin Based on Shadows and Texture Around Fingertips

This paper proposes a method to detect contact between fingers and skin based on shadows and texture around the fingertips. An RGB camera installed on a head-mounted display can use the proposed method to detect finger contact with the body. The processing pipeline consists of fingertip image extraction, image enhancement, and contact detection using machine learning. A fingertip image is extracted from a hand image to limit image features to those around the fingertips. Image enhancement reduces the influence of different lighting environments. Contact detection utilizes deep learning models to achieve high accuracy. Datasets of fingertip images are built from videos recorded while a user touches and releases the forearm with his/her fingers. An experiment was conducted to evaluate the proposed method in terms of image enhancement methods and data augmentation methods. The results show that the proposed method has a maximum accuracy of 97.6% in cross-validation. They also show that the proposed method is more robust to different users than to different lighting environments.

Yuto Sekiya, Takeshi Umezawa, Noritaka Osawa
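
To make the third pipeline stage concrete, a minimal PyTorch sketch of a small CNN classifying an enhanced fingertip crop as contact versus no contact; the architecture and 64x64 input size are assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

class ContactNet(nn.Module):
    """Binary classifier over enhanced fingertip crops (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits: contact / no contact
        )

    def forward(self, x):  # x: (N, 3, 64, 64) enhanced fingertip crops
        return self.head(self.features(x))

logits = ContactNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```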
Character Input Method Working on 1-in. Round Screen for Tiny Smartwatches

We have developed a method for entering Japanese hiragana characters using a 1-in. circular screen. The circumference of the smartwatch screen is divided into segments of 90° each. Hiragana is divided into 10 groups of 5 characters each; the first half of the groups is assigned to the left segment and the second half to the upper segment. First, the user selects a segment with a slide-in, the operation of first touching a finger outside the screen and then moving it onto the screen. Because the slide-in crosses the edge of the screen, it can be detected in contour areas that are only 2 mm wide. As the fingertip passes through a segment, the screen is divided into five areas, each displaying one of the five group names assigned to the passed segment. The group displayed in the central area is selected by releasing the finger in that area; a surrounding group is selected by sliding out in its direction. The slide-out is the operation of sliding outward beyond the screen edge while touching the screen. The five hiragana characters belonging to the selected group are then displayed. The letter in the center is entered by tapping, and the other letters are entered by flicking in the corresponding directions. The input speed for beginners was 23.1 CPM after about 14 min of use. The speed exceeded 30 CPM after 1 h of use. The error rate was about 3%.

Ojiro Suzuki, Toshimitsu Tanaka, Yuji Sagawa
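
A minimal sketch of the slide-in logic described above: a touch that first lands in the thin rim of a round screen and then moves inward selects one of the four 90° segments by its entry angle. The geometry constants and names are illustrative:

```python
import math

RADIUS_MM = 12.7  # 1-in. round screen -> ~12.7 mm radius (assumed)
RIM_MM = 2.0      # detectable contour width at the screen edge

def classify_slide_in(path_mm):
    """path_mm: list of (x, y) touch points relative to screen center, in mm.
    Returns the entered segment, or None if the touch is not a slide-in."""
    x0, y0 = path_mm[0]
    r0 = math.hypot(x0, y0)
    xe, ye = path_mm[-1]
    # A slide-in first registers inside the 2 mm rim and then moves inward.
    if r0 < RADIUS_MM - RIM_MM or math.hypot(xe, ye) >= r0:
        return None
    angle = math.degrees(math.atan2(y0, x0)) % 360
    return ("right", "top", "left", "bottom")[int((angle + 45) % 360 // 90)]

print(classify_slide_in([(12.0, 1.0), (6.0, 0.5)]))  # -> 'right'
```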
One Stroke Alphanumeric Input Method by Sliding-in and Sliding-out on the Smartwatch Screen

We have developed a character input method for smartwatches with circular screens that allows the user to enter one alphanumeric character with a single stroke. With this method, characters are selected by the combination of the start and end positions of the stroke. On the standby screen, a square inscribed inside the circular display is assigned to the text area. The left, top, and right parts of the remaining area are each split in two, with each segment used as a key. As a result, the key length is longer than the fingertip size of 10 mm, and the keyboard occupies 36% of the screen. Each key is selected by slide-in, the operation of moving a fingertip that touches the outside of the screen inward. Because the slide-in is guaranteed to cross the edge of the screen, the passing of the fingertip can be detected with very thin keys. As the fingertip passes the edge of the screen, the screen is separated into 12 areas, each assigned one alphanumeric character or symbol. Moving the fingertip to an area and then releasing it from the screen enters the character assigned to that position. The key assignment is based on the telephone keypad: two keys of the telephone keypad are combined into one key in our method. As the slide-in passes through a key, the two numbers assigned to that key and the six or seven letters associated with those numbers are displayed in the split areas.

Toshimitsu Tanaka, Hideaki Shimazu, Yuji Sagawa
Research on Hand Detection in Complex Scenes Based on RGB-D Sensor

Human gestures are intuitive, natural, and informative, which makes them one of the most commonly used interaction methods. However, in most gesture interaction research, the hand to be detected faces the detection camera and the experimental environment is ideal, so there is no guarantee that these methods achieve good detection results in practical applications. Gesture interaction in complex environments therefore has high research and application value. This paper discusses hand segmentation and gesture contour extraction methods in complex environments and specific applications. First, according to the characteristics of the depth map, an adaptive weighted median filtering method is selected to process the depth data. Then the depth information is used to construct a background model, which reduces the interference of noise and lighting changes, and RGB information is combined with depth threshold segmentation to complete hand segmentation. Finally, a region growing method is used to extract a precise gesture contour. The proposed method is verified using material from a vehicle environment, and satisfactory segmentation and contour extraction results are obtained.

Jin Wang, Zhen Wang, Shan Fu, Dan Huang
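
A minimal sketch of two of the steps described above, depth background subtraction and region growing, under the assumption of aligned depth frames in millimeters; the RGB fusion and adaptive median filtering stages are omitted:

```python
from collections import deque
import numpy as np

def segment_hand(depth, background, fg_tol=30, grow_tol=15):
    """depth, background: (H, W) depth maps in mm. Thresholds are illustrative."""
    # Foreground: pixels that deviate from the depth background model.
    fg = np.abs(depth.astype(int) - background.astype(int)) > fg_tol
    if not fg.any():
        return np.zeros_like(fg)
    # Seed region growing at the nearest foreground pixel (assumed hand).
    seed = np.unravel_index(np.where(fg, depth, np.inf).argmin(), depth.shape)
    mask = np.zeros_like(fg)
    mask[seed] = True
    q = deque([seed])
    while q:  # BFS region growing constrained by local depth continuity
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < depth.shape[0] and 0 <= nx < depth.shape[1]
                    and fg[ny, nx] and not mask[ny, nx]
                    and abs(int(depth[ny, nx]) - int(depth[y, x])) < grow_tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```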
It’s a Joint Effort: Understanding Speech and Gesture in Collaborative Tasks

Computers are evolving from computational tools to collaborative agents through the emergence of natural, speech-driven interfaces. However, relying on speech alone is a limitation; gesture and other non-verbal aspects of communication also play a vital role in natural human discourse. To understand the use of gesture in human communication, we conducted a study to explore how people use gesture and speech to communicate when solving collaborative tasks. We asked 30 pairs of people to build structures out of blocks, limiting their communication to either Gesture Only, Speech Only, or Gesture and Speech. We found differences in how gesture and speech were used to communicate across the three conditions and found that pairs in the Gesture and Speech condition completed tasks faster than those in Speech Only. From our results, we draw conclusions about how our work impacts the design of collaborative systems and virtual agents that support gesture.

Isaac Wang, Pradyumna Narayana, Dhruva Patil, Rahul Bangar, Bruce Draper, Ross Beveridge, Jaime Ruiz

Human-Robot Interaction

Frontmatter
Analysing Action and Intention Recognition in Human-Robot Interaction with ANEMONE

ANEMONE is a methodological approach for user experience (UX) evaluation of action and intention recognition in human-robot interaction, with activity theory as its theoretical lens in combination with the seven stages of action model and UX evaluation methodology. ANEMONE has been applied in a case where a prototype was evaluated. The prototype was an assembly workstation in manufacturing consisting of a collaborative robot, a pallet, a tablet, and a workbench, where one operator works in the same physical space as one robot. The purpose of this paper is to provide guidance on how to use ANEMONE, with a particular focus on the data analysis part, through a real example together with lessons learned and recommendations.

Beatrice Alenljung, Jessica Lindblom
A Robot that Tells You It is Watching You with Its Eyes

The eyes play important roles in human communication. In this study, a robot tells a user it is watching her/him in a shopping scenario. First, we conducted experiments to determine the parameters of the eyes on the screen of the robot. Next, we conducted a scenario experiment, assuming a shopping scene, to demonstrate the effectiveness of the eye-based interaction compared to common push-type interaction. The results showed that the robot achieved modest and casual interaction in a shopping scene.

Saizo Aoyagi, Yoshihiro Sejima, Michiya Yamamoto
Am I Conquering the Robot? The Impact of Personality on the Style of Cooperation with an Automatic System

From washing machines, automatic doors, and robot vacuum cleaners to self-driving systems and robot arms, automation has become common in modern life. The experience of using these products or systems depends on the performance quality and users' trust in the automation system. The experience can also differ from person to person, since people differ in their desirability of control. This paper constructs a research framework and experiment design to explore the correlation between humans' trust in robots, individual desirability of control, and experiential quality in a human-robot cooperative task. When people can participate in a robot's task performance, our results suggest a positive correlation between trust and desirability of control.

Rou Hsiao, Wei-Chi Chien
Kansei Evaluation of Robots in Virtual Space Considering Their Physical Attributes

In recent years, the demand for robots has increased, and their roles have changed: there are increasing opportunities for them to be used not only in industrial applications such as factories, but also in daily life. For users to continue using these robots, the impression the robots make needs to be considered. However, during the COVID-19 pandemic it is difficult to physically assemble a robot; designing and developing a robot in virtual space can be an alternative. In this study, we evaluated the affective values of robots in a virtual space. We created a virtual space modeled on a university campus, along with three pairs of robots with different shapes. Then, we performed a kansei evaluation of the robots employing the Semantic Differential (SD) method as a questionnaire. The results show how the ratings differ for each of the robot pairs and adjective pairs. In particular, we found that some adjective pairs received higher ratings than others, suggesting the different impressions made by our designed robots.

Shun Imura, Kento Murayama, Peeraya Sripian, Tipporn Laohakangvalvit, Midori Sugaya
The Use of a Sex Doll as Proxy Technology to Study Human-Robot Interaction

We studied via a survey (n = 187) how people think about interaction with sex robots, using a sex doll as a proxy technology. An interactive public installation was created, centered around a female sex doll. People passing by could participate in the survey while interacting with the installation and the doll. We found no gender differences, but the installation proved a successful way to study a sensitive and uncommon topic. Finally, we also propose alternative ways to elicit responses about sex robots in future research.

An Jacobs, Charlotte I. C. Jewell, Shirley A. Elprama
Relationship Between Robot Designs and Preferences in Kawaii Attributes

As robots become increasingly involved in human lives in modern society, it is necessary to develop robots that give a positive impression to humans. We therefore pursued a collaborative project in which Japanese and American university students designed and developed kawaii robots. Before and after the collaborative work, preferences for kawaii attributes were evaluated by questionnaire. As a result, we obtained eight different robot pairs. In addition, we performed a cluster analysis using the questionnaires on kawaii preferences and obtained clusters of participants before and after the collaborative work. Finally, we analyzed the relationship between the robot designs and the clustering results. The cluster analysis shows that more than half of the participants did not change their kawaii preferences, while some did, especially those who did not have preconceived images of kawaii robots. We conclude that participants developed a deeper understanding of kawaii, and of the diversity of opinions people hold about the concept, after collaborating on this project.

Tipporn Laohakangvalvit, Peeraya Sripian, Midori Sugaya, Michiko Ohkura
Perceived Robot Attitudes of Other People and Perceived Robot Use Self-efficacy as Determinants of Attitudes Toward Robots

The emergence of artificial intelligence and robotization is expected to transform societies remarkably. This study examined the associations between perceived robot attitudes of other people, perceived robot use self-efficacy, and attitudes toward robots. An online survey was conducted among respondents living in the United States (N = 969). Analyses were conducted using t-tests, linear regression models, and mediation analyses with bootstrapped estimates. Results showed that participants with prior robot use experience expressed more positive attitudes toward robots, more positive perceived robot attitudes of other people, higher robot use self-efficacy, and higher general interest in technology and its development compared to participants without prior robot use experience. Perceived positive robot attitudes of other people, perceived robot use self-efficacy, and general interest in technology correlated with more positive attitudes toward robots among all study participants. Further, results showed that the association between perceived robot use self-efficacy and attitudes toward robots was particularly strong among those without prior robot use experience, highlighting the importance of self-efficacy beliefs in the early stages of technology adoption. The mediation analysis showed that the association between perceived robot attitudes of other people and attitudes toward robots was indirect through perceived robot use self-efficacy. The association between perceived robot use self-efficacy and attitudes toward robots was indirect through general interest in technology. The results indicate the importance of social psychological aspects of robot use and their usefulness for professionals implementing new robot technologies.

Rita Latikka, Nina Savela, Aki Koivula, Atte Oksanen
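
To make the mediation logic concrete, a minimal sketch of a bootstrapped indirect effect (X -> M -> Y) using plain least squares; variable names are illustrative and the study's actual models include further covariates:

```python
import numpy as np

def indirect_effect(x, m, y):
    """x, m, y: 1-D numpy arrays (predictor, mediator, outcome)."""
    a = np.polyfit(x, m, 1)[0]                    # path a: X -> M
    X = np.column_stack([np.ones_like(x), x, m])  # Y ~ 1 + X + M
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]   # path b: M -> Y given X
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap 95% CI for the indirect effect a*b."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = [indirect_effect(x[idx], m[idx], y[idx])
           for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(est, [2.5, 97.5])
```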
Research on Interactive Experience Design of Peripheral Visual Interface of Unmanned Logistics Vehicle

As autonomous driving systems mature, the huge demands of the logistics and distribution industry have met autonomous driving technology, inspiring intelligent logistics and distribution, and driverless logistics vehicles have emerged. To improve the transportation safety, distribution efficiency, and interactive experience of unmanned logistics vehicles, this paper studies the design of their peripheral visual interface and analyzes their application scenarios at different levels. Preliminary research on cognitive efficiency and psychological comfort in the road transportation scene was conducted, providing a theoretical reference for the design of the peripheral visual interface of unmanned logistics vehicles. We believe the design of the peripheral visual interface will continue to improve and will promote the rapid development of driverless logistics vehicles.

Zehua Li, Qianwen Chen
A Measurement of Attitude Toward Working with Robots (AWRO): A Compare and Contrast Study of AWRO with Negative Attitude Toward Robots (NARS)

Organizations are increasingly relying on a workforce that includes humans and robots working collaboratively. Yet, many humans are reluctant to work with robots. To help identify and predict who is likely to want to work with a robot, this paper introduces a new scale called attitude toward working with a robot (AWRO). The author conducted a study to assess the AWRO scale’s construct validity and reliability along with its predictive power in relation to NARS. The AWRO scale was administered to 220 restaurant employees. AWRO demonstrated good construct validity and reliability and was also much more predictive of worker outcomes than NARS. Results of this study have implications for a workforce that includes humans and robots working collaboratively.

Lionel P. Robert Jr.
Service Sector Professionals’ Perspective on Robots Doing Their Job in the Future

After a long history of industrial automation, robots are entering service fields at an accelerating rate due to recent technological advances in robotics. Understanding the acceptance and applicability of robots is essential for a successful introduction, the desired benefits, and a well-managed transformation of the labor market. In this work, we investigated whether service sector professionals consider robots applicable to their field compared to professionals from other sectors. We collected survey data from Finnish (N = 1817) and U.S. participants (N = 1740) and analyzed them using ordinary least squares regression. Results showed that Finnish and U.S. participants from the service sector disclosed a less positive attitude toward robots' suitability to their own occupational field compared to participants from other fields. Younger age, technological expertise, prior experience interacting with robots at work, and a positive attitude toward robots were associated with higher perceived robot suitability. Perceived robot suitability was also found to mediate the relationship between occupational sector and positive interaction attitudes. The results indicate that the entry of robots into service industries evokes some resistance and doubt among professionals in these fields. Increasing technological knowledge and prior experience with robots at work are central factors when introducing robots in a socially sustainable way.

Nina Savela, Rita Latikka, Reetta Oksa, Atte Oksanen
User Experience Best Practices for Human-Robot Interaction

User experience (UX) design of human-robot interaction (HRI) is an emerging practice [1]. Best practices for this discipline are actively evolving as robotics expands into commercial markets. As is typical of emerging technologies, the technology itself takes center stage, continuing to present challenges to proper functioning as it is tested in real-world applications. Also, deployment comes at a high price until the market is competitive enough to drive hardware and development prices down. All these aspects preclude an emphasis on UX design until the technology and associated market reach a tipping point and good user experience is in demand. If robots continue to be deployed at the rates the industry currently predicts, the need for user experience design knowledge and best practices for HRI is imminent. Best practices are a collection of methods, specifically principles, heuristic evaluators and taxonomies [2]. Principles embody high-level guidance for design direction and design processes [3]. Heuristics ensure measurable "must have" base functionality [4–6]. Taxonomies provide a conceptual understanding of possible categories of interactivity [7–9]. This paper focuses on two aspects of best practices: 1) proposing a robustly user-centric set of emerging technology principles for HRI, the area of best practices least explored in the literature, and 2) proposing a design matrix as a first step in addressing the complexity of HRI.

Dorothy Shamonsky
Application for the Cooperative Control of Mobile Robots with Energy Optimization

Cooperative control of mobile robots allows two or more robots to transport heavy loads collaboratively; such applications have motivated the development of new control strategies to coordinate multiple robots automatically. At the same time, energy optimization in robotic systems is increasingly important to ensure autonomy and conserve resources. This article introduces an application for the open-loop cooperative control of 3 mobile robots, with the goal of coordinating the position and parameters of the triangular shape formed by the distances between the robots. The control algorithm is designed using the Pontryagin principle, starting from the formation model and solving the differential equations with numerical methods. The mobile robots are built using 3D printing technology and open hardware, with wireless Bluetooth communication to receive orders from the remote station. The application is implemented on a computer running the developed algorithm and generating the control orders; the program is developed in Matlab with a main menu for user management. The results present the simulation and experimentation of the system, highlighting the positions and velocities generated by cooperative control with energy optimization, as well as images of the movements made by the robots according to the orders sent. Finally, a usability score demonstrating high acceptance of the application is obtained.

José Varela-Aldás, Christian Ichina, Belén Ruales, Víctor H. Andaluz
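
As a simplified illustration of the triangular formation being coordinated, a sketch that places three robots at the vertices of an equilateral triangle and steps them there with a plain proportional rule; this is a stand-in for intuition only, not the paper's Pontryagin-based energy-optimal control:

```python
import numpy as np

def triangle_vertices(centroid, side, heading_rad):
    """Vertices of an equilateral triangle given centroid, side length, heading."""
    r = side / np.sqrt(3.0)  # circumradius of an equilateral triangle
    angles = heading_rad + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    return centroid + r * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def step(positions, targets, k=0.5, dt=0.1):
    """One proportional-control step of each robot toward its vertex."""
    return positions + dt * k * (targets - positions)

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
goal = triangle_vertices(np.array([2.0, 2.0]), side=1.0, heading_rad=0.0)
for _ in range(100):
    pos = step(pos, goal)
print(pos)  # converges to the three triangle vertices
```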
Educational Robot European Cross-Cultural Design

Educational robots have been used successfully in a variety of teaching applications and have proven beneficial in teaching STEM subjects. Although educational robots are already in use, it is important to identify the robot characteristics (appearance, functionality, voice) that are closest to users' needs. Our aim is to use participatory design procedures to identify users' attitudes and needs and to construct an educational robot based on them. In this paper, we introduce the STIMEY Robot, which was created through these procedures in a cross-European study in which five countries participated. The robot was evaluated in a real classroom environment with students aged between 13 and 18 years old, who had a STEM lesson with the aid of the robot. Our results clearly suggest that students appreciated the robot's interactive skills and ability to provide feedback, and that their attitudes towards its usability changed in a statistically significant way after having a lesson with it.

Anna-Maria Velentza, Stavros Ioannidis, Nefeli Georgakopoulou, Mohammad Shidujaman, Nikolaos Fachantidis

Digital Wellbeing

Frontmatter
Designing for Self-awareness: Evidence-Based Explorations of Multimodal Stress-Tracking Wearables

Early wearable devices using multimodal data to promote stress awareness are emerging on the consumer market. They are proving to be effective tools for supporting users in tracking their daily activities, yet their potential still needs to be further explored. From a user experience design perspective, such wearable devices could help users understand how they experience stress and ultimately shed light on its psychophysiological bases. Based on this rationale, this paper reports the results of evidence-based explorations aimed at formalizing knowledge regarding the use of multimodal stress-tracking wearables. Following a human-centered design process, we designed an interactive prototype that tracks two stress-related parameters: physiological and perceived stress. We employ a smartwatch tracking blood volume pulse and heart rate variability to assess physiological stress, whereas we rely on self-reports gathered through a smartphone to assess perceived stress. We then tested the prototype in a controlled setting with 16 end-users. Tests combined qualitative and quantitative research methods, including in-depth interviews, eye-tracking, and surveys comprising a Kano model-based questionnaire and the AttrakDiff questionnaire. The in-depth interviews reveal insights about the type and quantity of information users expect. Ocular scanpaths provide directions for managing the cognitive effort required of users when interacting with multiple devices. Evidence from the surveys highlights the features and functions that multimodal stress-tracking apps should include. Based on our findings, we offer a set of considerations on personal informatics promoting stress awareness from a user experience design perspective, and we outline future directions for the design of wearable solutions promoting self-awareness.

Riccardo Chianella, Marco Mandolfo, Riccardo Lolatto, Margherita Pillan
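
A minimal sketch of the physiological-stress side described above: standard time-domain HRV metrics computed from RR intervals, with an assumed baseline-relative threshold for flagging stress:

```python
import numpy as np

def hrv_metrics(rr_ms):
    """rr_ms: sequence of RR intervals in milliseconds (e.g., derived from BVP)."""
    rr = np.asarray(rr_ms, dtype=float)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat variability
    sdnn = rr.std(ddof=1)                       # overall variability
    return rmssd, sdnn

def stressed(rr_ms, baseline_rmssd, drop_ratio=0.7):
    """Flag physiological stress when RMSSD falls well below the user's
    baseline; the 0.7 ratio is an illustrative threshold, not the paper's."""
    rmssd, _ = hrv_metrics(rr_ms)
    return rmssd < drop_ratio * baseline_rmssd
```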
Annoyed to Discontinue: Factors Influencing (Dis)Continuance of Using Activity Tracking Wearables

What influences the continued usage of activity tracking wearables? Several studies have investigated the acceptance of activity tracking technologies, but the reasons for continued usage or abandonment of this technology are rarely addressed. This study focuses on current and former users of activity tracking wearables and their different perceptions of the technology. For this purpose, an online survey was conducted and a convenience sample of 235 valid cases was obtained. Factors potentially influencing continued usage included age, gender, community feeling, motivation, design, and perceived data sensitivity.

Kaja J. Fietkiewicz, Aylin Ilhan
Human Computer Interaction Challenges in Designing Pandemic Trace Application for the Effective Knowledge Transfer Between Science and Society Inside the Quadruple Helix Collaboration

The number of smartphone users worldwide grew from 2.8 billion in 2018 to 3.8 billion in 2021. This growth is associated with greater ease of publishing and accessing fake news, a particularly concerning issue in a global crisis such as the COVID-19 pandemic. As stated by the WHO, this is a global health crisis, and the spread of fake information could have a direct impact on people's wellbeing. Due to this situation, all systems which compose the quadruple helix (i.e., science, economy, politics, and the media- and culture-based public) are under great pressure. On the one hand, citizens demand fast and trusted information; on the other hand, the scientific community is pushed to publish, resulting in scientific papers published very quickly and sometimes without adequate peer review, as reflected by an unprecedented number of retractions. The PandeVITA ecosystem will contribute to a better understanding of how societal actors behave and of their reaction to and interaction with science and health developments in the context of pandemics, with the aim of encouraging citizens to contribute to scientific research with different kinds of data. This paper describes a novel approach to citizen science interventions and user engagement based on motivational theory and behavioral science. It aims to provide a set of architectural components, technologies, tools and analytics to assess citizens' activities, system performance and stakeholder-related key performance indicators (KPIs) in an observatory fashion, allowing investigation of the motivation of the target participants, user engagement and long-term retention.

A. Gallego, E. Gaeta, A. Karinsalo, V. Ollikainen, P. Koskela, L. Peschke, F. Folkvord, E. Kaldoudi, T. Jämsä, F. Lupiáñez-Villanueva, L. Pecchia, G. Fico
A Study on the Usability of Different Age Groups to the Interface of Smart Bands

The usability of a smart band interface is very important to the user experience. The purpose of this study was to explore the usability of different smart band interfaces for different age groups. The experiment used a 3 × 2 mixed two-factor design; the experimental factors were smart band style and participant age. The conclusions are as follows: (1) Clear, simple, and consistent operation logic should be established for smart band interface interaction to prevent users from becoming confused. (2) The combination of visual and tactile feedback, delivered in a timely and effective manner, enhances the interactive experience of a small-screen smart band interface; a touch screen combined with touch buttons is more readily accepted by users. (3) Designers should try to meet the needs of various consumer groups, or make choices based on the characteristics of the target consumer group: younger people usually expect faster feedback, whereas for older people too much freedom of operation reduces usability and causes confusion. (4) Faster operation efficiency does not necessarily lead to better usability evaluations, and the number of misoperations should be minimized to avoid frustration and loss of patience.

Xiao-Yu Jia, Chien-Hsiung Chen
Attention to Breathing in Response to Vibrational and Verbal Cues in Mindfulness Meditation Mediated by Wearable Devices

As mental healthcare services such as digital mindfulness meditation spread, research to improve user experience is expected to become increasingly important. Thus, this study investigated user perception when a guide for breathing awareness during digital mindfulness meditation is provided through a vibration cue. Focusing on the breath during mindfulness meditation is important, but beginner and intermediate meditators find it difficult due to inner and outer distractions. For this reason, we propose a design guideline for an intervention method that allows the user to concentrate on breathing without disturbing the surrounding environment. In particular, vibration cues can be effective for breathing awareness, because they induce positive neurophysiological changes in the brain, allowing for improved focus and attention. In addition, we measured EEG and HRV to compare changes in user perception. The experiment was designed as within-subjects, and 12 beginner meditators participated. Results of EEG and HRV analysis showed that when verbal and vibration cues were provided at the same time, positive neurological changes were induced and that the user could focus on breathing most effectively. This study’s results provide insights on the design of mindfulness wearable-vibration applications in practical terms, along with expanded knowledge of digital mental healthcare and HCI research.

Eunseong Kim, Jeongyun Heo, Jeongmin Han
CHIAPON: An Anthropomorphic Character Notification System that Discourages Their Excessive Smartphone Use

Smartphones have become central information processing devices, used for information searching, taking photographs, listening to music, and socializing. However, as the number of smartphone users continues to grow, a number of problems related to smartphone overuse have emerged. Smartphone overuse is most commonly discouraged by issuing warnings to users or imposing a time limit on usage. Although these methods discourage smartphone use at the time of issue, simple warnings are ineffective and excessively severe restrictions can frustrate users. This paper proposes a system that messages users through an anthropomorphic character when their smartphone has been overused. The system, called "CHIAPON", was experimentally evaluated with 25 college students. The results showed that message notification by an anthropomorphic character can improve users' motivation to reduce their smartphone use, although the extent of the actual reduction was not clarified. However, the participants using CHIAPON perceived the system positively, suggesting that its effect can increase with long-term use.

Kazuyoshi Murata
Designing for App Usage Motivation to Support a Gluten-Free Diet by Comparing Various Persuasive Feedback Elements

A gluten-free diet (GFD) is critical for people affected by celiac disease. To understand how to support people affected by celiac disease through persuasive technology, two apps called Snackfinder and CeliApp were designed within the EU-funded Erasmus+ project DESQOL. While Snackfinder is based on the persuasive principle of social support and CeliApp on self-monitoring, both applications require user entries to be effective. Therefore, various persuasive design elements were implemented in both applications to motivate users to make entries. The extent to which these persuasive design elements contributed to app usage motivation was evaluated in two comparative quasi-experimental user studies. A significant difference in the motivation to make further entries in the Snackfinder app was found for positive versus negative feedback. No significant difference was found for the comparison of the two rating systems (star rating versus like rating) or the comparison of the two variants of color-based feedback in the CeliGear (a physical computing object) connected to the CeliApp. The evaluation of the social feedback in the two rating systems and of the two variants of the color-based feedback showed a high variance in the respondents' answers. To increase the persuasiveness of the apps presented, user- and context-adaptive design elements seem more promising than a one-size-fits-all approach.

Katrin Paldán, Andreas Künz, Walter Ritter, Daire O. Broin
Better Performance Through Mindfulness: Mobile Application Design for Mindfulness Training to Improve Performance in College Athletes

Collegiate student athletes commit almost all of their time and energy to the extremely demanding and competing pressures of being both full-time students and athletes. With very limited time for self-care or mindfulness practices, athletes are constrained by the exhausting schedules set for them each day. In this paper, we introduce an application design named Muses that facilitates the coordination between coaches/trainers and student-athletes in their mindfulness training. Taking a complementary two-sided interface design approach, we began by researching current competing mindfulness applications, the NCAA guidelines on required weekly participation hours, and current programs' mindfulness practices. With two questionnaire surveys completed and a second round of research, we obtained key insights from both general and collegiate-athlete mindfulness surveys. This enabled us to determine the final design direction and methods. Personal progress tracking and customization strategies were chosen as the primary features of the application, motivating collegiate athletes and programs to practice mindfulness more often.

Félicia Roger-Hogan, Tayler Wullenweber, Jung Joo Sohn
Holdable Devices: Supporting Mindfulness, Psychological Autonomy and Self-Regulation During Smartphone Use

It has been argued that consuming social and micro-targeted digital content rapidly and continuously arouses the brain into an impulsive, dopamine-fueled, 'automatic' flow state that leads to excessive and unhealthy smartphone use. The ubiquity of advertising-based products that exploit users' vulnerabilities to maximize engagement is leading to detrimental impacts on well-being and widespread addiction symptoms. In the UK, about 40% of adults think they spend too much time online, 60% consider themselves 'hooked' and 33% find disconnecting difficult. Current digital solutions quantify and block app usage. However, guilt and self-coercion are unhealthy motivators, digital interventions rapidly desensitize users, and experiences of varying quality may occur within one app. Here we introduce Holdable devices: biofeedback-based tangible interfaces that sense, from the motion of the hand behind the phone, when smartphones are used inattentively or compulsively, and gently alert users to regain mindfulness through haptic feedback and abstract visualization. We describe our design process and a pilot study with three prototypes that evaluated user preferences and the intervention's impact on psychological factors related to problematic smartphone use. The results reveal the potential for beneficial impacts on cognitive and behavioral metrics and inform the scope of future designs.

Federico Julien Tiersen, Rafael Alejandro Calvo
Measurement and Analysis of Body Movements in Playing Futsal Using Smartphones

In recent years, sport activities have been analyzed using information technology measurement systems to improve players' performance. However, such approaches tend to require expensive systems, so most users are professional players. In this study, to extend this approach to amateur players, we investigated the possibility of evaluating sport activities, particularly futsal, using only the sensors on smartphones. Team activity could be evaluated using the accelerometer: it decreased as the game progressed and increased again after sufficient rest. In addition, the degree of synchronization between body movements reflected important game situations. For example, when the degree of synchronization was high, a change from defense to offense was often observed; when it was low, the team was mostly being pushed back by the opponent. These measurement results show that various activities during sports can be evaluated with smartphone sensors.

Tomohito Yamamoto, Kento Sugiyama, Ryohei Fukushima
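
A minimal sketch of one plausible synchronization measure consistent with the description above: windowed correlation of smartphone acceleration magnitudes between two players. The window length and aggregation are assumptions:

```python
import numpy as np

def accel_magnitude(acc_xyz):
    """acc_xyz: (T, 3) accelerometer samples -> (T,) magnitudes."""
    return np.linalg.norm(acc_xyz, axis=1)

def windowed_sync(mag_a, mag_b, win=200):
    """Mean Pearson correlation over non-overlapping windows."""
    sims = []
    for s in range(0, min(len(mag_a), len(mag_b)) - win + 1, win):
        a, b = mag_a[s:s + win], mag_b[s:s + win]
        if a.std() > 0 and b.std() > 0:  # skip flat windows
            sims.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(sims)) if sims else 0.0
```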
Using e-Health in the Prevention Against Covid-19: An Approach Based on the Theory of Planned Behavior

The development of Information and Communication Technologies (ICTs) has encouraged the introduction of many innovations in healthcare services, including new forms of health management. This study applies the Theory of Planned Behavior (TPB) to explain e-health use behavior in the prevention of Covid-19. The research used an internet-based survey in which 180 people took part. The results indicate that attitude toward the behavior, subjective norms, and perceived control increase the intention to use e-health to protect against Covid-19.

Meryem Zoghlami, Salma Ayeb, Kaouther Saied Ben Rached

HCI in Surgery

Frontmatter
Construction of a Knowledge Base for Empirical Knowledge in Neurosurgery

Neurosurgeons accumulate a variety of empirical knowledge through surgeries. Post-operative reports, incident reports, and accident reports are effective means of recording and sharing empirical knowledge, but they are costly to analyze, and new methods of sharing knowledge are needed. In addition, interfaces using CG technology, as used in surgical planning, are actively employed as a means for doctors to obtain information easily, but they are not widely used for knowledge sharing. In this research, we aim to build a knowledge base that conveys and utilizes physicians' hard-to-transfer know-how by accurately expressing empirical knowledge using CG technology. One of the challenges in sharing empirical knowledge is the difficulty of handling medical information from the viewpoint of personal information protection: medical information can easily identify individuals and is highly valuable as rare data. Therefore, we examined the environment for using medical information and appropriate anonymization. We propose a method for constructing a neurosurgical ontology, based on the medical ontologies studied in the medical field, by organizing the structure of doctors' empirical knowledge. In addition, we designed and fabricated a prototype interface using 3D models as a system for data input and search display, selecting the glTF format for the 3D models used in this study. In this paper, we report on the construction of a knowledge base for sharing empirical knowledge in neurosurgery and the evaluation of the ontology constructed by the proposed method.

Ayuki Joto, Takahiro Fuchi, Hiroshi Noborio, Katsuhiko Onishi, Masahiro Nonaka, Tsuneo Jozen
VR-Based Surgery Navigation System with 3D User Interface for Robot-Assisted Laparoscopic Partial Nephrectomy

We have been developing a surgical support system for Robot-Assisted Laparoscopic Partial Nephrectomy (RAPN) using augmented reality (AR) technology since April 2014. In our system, three-dimensional computer graphics (3DCG) models including kidneys, arteries, veins, tumors, and urinary tracts are generated preoperatively from tomographic images (DICOM). The 3DCG models are superimposed on the endoscopic images and projected onto the operator's console and the operating room monitor. The position and orientation of the 3DCG models are controlled automatically in real time according to the movement of the endoscope camera image, and the display position and transparency of the 3DCG models can be changed manually by the assistant if necessary. We are now developing a VR system that allows intuitive manual control of the 3DCG. In this paper, we describe the details of this system.

Masanao Koeda, Akihiro Hamada, Atsuro Sawada, Katsuhiko Onishi, Hiroshi Noborio, Osamu Ogawa
Comparative Study of Potential-Based and Sensor-Based Surgical Navigation in Several Liver Environments

Medical navigators have gained recent attention for application in surgery, providing the surgeon with the position and orientation of malignant tumors as accurately as possible. Currently, extensive research is being conducted on surgical navigators; however, their practical application is difficult in the case of organ deformation. This study presents a medical navigation system using two types of algorithms to select a pathway for the scalpel tip to the vicinity of a malignant tumor in a 3D liver environment, avoiding the blood vessels (arteries, veins, and portal veins) and the malignant tumor when they are regionally segmented inside the liver. The first algorithm is potential-based navigation, in which the scalpel tip reaches its destination by adjusting the attraction to the destination (here, the malignant tumor) and the repulsion from the blood vessels. The second algorithm is sensor-based navigation, in which the surgeon moves the scalpel in a straight line toward the malignant tumor. When a blood vessel is encountered, the scalpel is moved in a clockwise or counterclockwise direction in a certain plane over a certain distance to avoid the blood vessel, and is then moved in a straight line toward the malignant tumor again, approaching it monotonically or asymptotically. In this study, both navigation algorithms select a path near the malignant tumor in 3D liver environments with diverse shapes and arrangements of blood vessels and malignant tumors. The potential-based navigation algorithm selects a shorter, straighter path near the malignancy, but the direction of the incision varies in a complex manner. The sensor-based navigation algorithm selects a longer, more circuitous path near the malignancy, but the incision direction is uniform.

Takahiro Kunii, Miho Asano, Kanako Fujita, Katsunori Tachibana, Hiroshi Noborio
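
A minimal sketch of the potential-based variant described above: one gradient step combining attraction to the tumor with repulsion from sampled vessel points. Gains and the influence radius are illustrative:

```python
import numpy as np

def potential_step(p, tumor, vessels, k_att=1.0, k_rep=0.5, rho0=5.0, dt=0.1):
    """p, tumor: (3,) positions in mm; vessels: (N, 3) sampled vessel points.
    Returns the next scalpel-tip position after one gradient step."""
    force = -k_att * (p - tumor)  # attraction toward the malignant tumor
    for v in vessels:
        d = np.linalg.norm(p - v)
        if 1e-6 < d < rho0:  # repulsion only inside the influence radius
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (p - v) / d
    return p + dt * force
```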
Voxel-Based Route-Search Algorithm for Tumor Navigation and Blood Vessel Avoidance

In this study, we propose an algorithm that determines a simple-shaped surgical path from the liver surface to its malignant tumor in 3D voxel space. The method has a good affinity with DICOM (Digital Imaging and Communications in Medicine standards) captured by MRI (Magnetic Resonance Imaging) or CT (computed tomography). It also accounts for voxel density, which reflects the probability of the existence of blood vessels along the cutting path. The algorithm selects a path that avoids high-density voxels and the entangled blood vessels that spawn them.

Takahiro Kunii, Miho Asano, Hiroshi Noborio
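
A minimal sketch of a density-aware voxel route search in the spirit of the abstract: Dijkstra over the voxel grid with step costs that grow with voxel density, so the path avoids voxels likely to contain blood vessels. The cost weighting is an assumption:

```python
import heapq
import numpy as np

def voxel_route(density, start, goal, w=10.0):
    """density: (X, Y, Z) array in [0, 1]; start, goal: index triples."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist[u]:
            continue  # stale heap entry
        for axis in range(3):
            for step in (-1, 1):  # 6-connected neighbors
                v = list(u)
                v[axis] += step
                v = tuple(v)
                if all(0 <= v[i] < density.shape[i] for i in range(3)):
                    nd = d + 1.0 + w * float(density[v])  # density penalty
                    if nd < dist.get(v, np.inf):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
    path, node = [], goal
    while node in prev:  # walk back from goal to start
        path.append(node)
        node = prev[node]
    return [start] + path[::-1] if path or start == goal else []
```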
Development of a VR/HMD System for Simulating Several Scenarios of Post-Operative Delirium

In this study, we used the Unity game engine to build a virtual reality simulation of the world as experienced at the onset of postoperative delirium, as observed by patients with postoperative delirium, which cannot be realized in the real world. Participants experienced the simulation in two ways: by viewing the Unity scene through an Oculus Quest 2 head-mounted display (HMD), and by watching a video created with Unity on a conventional display. We had nursing students evaluate whether the system was highly realistic and convenient based on subjective data (a questionnaire survey) and objective data (physiological data, such as autonomic nervous system measures).

Jumpei Matsuura, Takahiro Kunii, Hiroshi Noborio, Kaoru Watanabe, Katsuhiko Onishi, Hideo Nakamura
Selection and Evaluation of Color/Depth Camera for Imaging Surgical Stoma

We aim to develop a system to evaluate pouch attachment to and removal from a stoma (artificial anus). As a preliminary evaluation, this study verifies whether a color/depth camera can accurately capture the stoma shape. Specifically, a 3D scanner was used to take precise images of three stoma models, made of human tissue-mimicking gel with diameters of 3, 6, and 9 cm, to obtain the corresponding point clouds. The three stoma models were then imaged using a color/depth camera to obtain a second set of point clouds. We used the CloudCompare software and its implementation of the iterative closest point algorithm to compare the point clouds. Furthermore, the point clouds in 3D Euclidean space were represented in a polar coordinate system with origin at the center of gravity of each point cloud to obtain the corresponding histograms. Finally, each point cloud and histogram captured by the depth camera and the 3D scanner were compared. The evaluation results showed that larger stoma models yield smaller differences between the point clouds; nevertheless, the histograms from the three models were similar. Therefore, histogram comparison may be effective for recognizing the shape of a real stoma from point clouds, toward extracting information for a system that guides pouch management.

Michiru Mizoguchi, Masatoshi Kayaki, Tomoki Yoshikawa, Miho Asano, Katsuhiko Onishi, Hiroshi Noborio
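
A minimal sketch of the histogram comparison described above: each point cloud is expressed relative to its center of gravity, the radial distances are histogrammed, and the normalized histograms are compared; the bin count and intersection measure are assumptions:

```python
import numpy as np

def radius_histogram(points, bins=32, r_max=60.0):
    """points: (N, 3) point cloud in mm -> normalized radial histogram."""
    centered = points - points.mean(axis=0)  # origin at the center of gravity
    r = np.linalg.norm(centered, axis=1)     # radial distance per point
    hist, _ = np.histogram(r, bins=bins, range=(0.0, r_max), density=True)
    return hist

def histogram_similarity(points_a, points_b):
    """Histogram intersection in [0, 1]; 1 means identical radial profiles."""
    ha, hb = radius_histogram(points_a), radius_histogram(points_b)
    return float(np.minimum(ha, hb).sum() / max(ha.sum(), 1e-9))
```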
Investigation of the Hashing Algorithm Extension of Depth Image Matching for Liver Surgery

We have developed several liver posture estimation methods for a liver surgical navigation system that supports surgeons who need very precise information about the vessels in the liver. These methods use 3D liver models scanned from patients and 2D images scanned by depth cameras to estimate the liver posture as accurately as possible. Since a new posture estimation method using a simple, high-speed image hashing algorithm was developed last year, we have been trying to improve the method's accuracy and applicability for real-time liver posture tracking. In this paper, we examine how deep learning methods can be used for liver posture estimation and tracking over the 2D images scanned from depth cameras. We study how a multi-layer perceptron neural network can learn and estimate the liver rotation expressed in quaternion form. A real-time surgical navigation system should be efficiently implementable by combining multiple estimation methods, including the deep learning method.

Satoshi Numata, Masanao Koeda, Katsuhiko Onishi, Kaoru Watanabe, Hiroshi Noborio
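
A minimal PyTorch sketch of the learning setup described above: an MLP regressing a unit quaternion from a flattened depth image, with a loss that is insensitive to the quaternion sign ambiguity. Sizes and the loss choice are assumptions:

```python
import torch
import torch.nn as nn

class PoseMLP(nn.Module):
    """MLP regressing a unit quaternion from a flattened depth image."""
    def __init__(self, n_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pixels, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 4),
        )

    def forward(self, depth_flat):  # depth_flat: (N, n_pixels)
        q = self.net(depth_flat)
        return q / q.norm(dim=-1, keepdim=True)  # project to unit quaternion

def quat_loss(q_pred, q_true):
    # |<q_pred, q_true>| handles the double cover (q and -q are one rotation).
    dot = (q_pred * q_true).sum(dim=-1).abs()
    return (1.0 - dot).mean()
```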
Study on the Image Overlay Approach to AR Navigation System for Transsphenoidal Surgery

In this study, we propose a system for assisting doctors with transsphenoidal surgery by understanding the positions of tumors during surgery. Transsphenoidal surgery requires examining endoscopic images to assess the situation depicted for surgery; determining the positions of tumors and organs is more difficult for transsphenoidal surgeries compared to other more general procedures. Under the proposed system, a three-dimensional (3D) model is created based on the patient's preoperative MRIs, and a superimposed image is displayed in real time. This system is expected to assist the surgeon in understanding the situation around the operating field. Markers are used to obtain the data necessary to create an image overlay. The markers are affixed to an operating table and to the end of an endoscope. The patient's head is held in place during the intraoperative period. The positions of features within the patient's cranium relative to the operating table marker are obtained through a camera installed on the operating table. The positions of tumors and organs are then estimated from the data obtained and the data from the 3D model created from the patient's MRIs. The relative position from the marker at the end of the endoscope to the tip of the endoscope is obtained as well, allowing the position of the endoscope tip to be estimated even if the endoscope has been inserted into the body's interior during the intraoperative period; the tip cannot be seen from the exterior. The relative position of the endoscope tip and the patient's tumor is calculated, and a 3D model created from the MRI image combined with the current endoscopic image is displayed. The optimum number of simultaneous recognition markers for improving the accuracy of the measurements of the endoscope's position and orientation was verified, and results of a trial run of the image overlay system conducted using a simplified model were reported.

Katsuhiko Onishi, Seiyu Fumiyama, Masahiro Nonaka, Masanao Koeda, Hiroshi Noborio
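
A minimal sketch of the pose chain described above: the endoscope tip position relative to the operating table follows from composing the tracked camera-to-marker transforms with the calibrated marker-to-tip offset, all as 4x4 homogeneous matrices. Names and conventions are assumptions:

```python
import numpy as np

def tip_in_table_frame(T_cam_table, T_cam_endo_marker, T_marker_tip):
    """All arguments: 4x4 homogeneous transforms from a marker tracker.
    Returns the endoscope tip position expressed in the table-marker frame."""
    T_table_cam = np.linalg.inv(T_cam_table)              # camera -> table frame
    T_table_tip = T_table_cam @ T_cam_endo_marker @ T_marker_tip
    return T_table_tip[:3, 3]                             # tip position (x, y, z)
```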
Evaluation of Depth-Depth-Matching Speed of Depth Image Generated from DICOM by GPGPU

We are developing a surgical support system to prevent surgical accidents in liver surgery. The system consists of three subsystems. First, the liver position and posture estimation system performs Depth-Depth-Matching between a real depth image and a virtual depth image to estimate the position and posture of the liver during surgery. The real depth image is the depth image of the real liver surface measured by the depth camera; the virtual depth image is the Z-buffer of the virtual liver generated from DICOM data captured preoperatively. Next, the surgical scalpel tip position estimation system uses an optical 3D position tracker to measure the position of a grid-pattern marker attached to the handle of the scalpel and estimates the tip position from the calibrated vector between the marker and the scalpel tip. Finally, the liver surgery simulator receives the estimated position and posture of the liver and the position of the scalpel tip, and calculates the distance between the scalpel tip and the blood vessels. In this paper, we propose high-speed Depth-Depth-Matching using GPGPU to estimate the liver position and posture from the real and virtual depth images. In the proposed method, a virtual depth image is generated from a 3D volume using GPGPU, and Depth-Depth-Matching is processed continuously to speed up the position and posture estimation. We compared the performance of the proposed method with the conventional method and confirmed its usefulness.

Daiki Yano, Masanao Koeda, Hiroshi Noborio, Katsuhiko Onishi
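
A minimal CPU sketch of the Depth-Depth-Matching score that the GPGPU implementation parallelizes: compare the real depth image against the virtual Z-buffer rendered at each candidate pose and keep the lowest-error pose; render_virtual_depth is a hypothetical renderer, not part of the paper's published API:

```python
import numpy as np

def matching_error(real_depth, virtual_depth):
    """Mean squared depth difference over pixels valid in both images."""
    valid = (real_depth > 0) & (virtual_depth > 0)  # ignore missing depth
    if not valid.any():
        return np.inf
    diff = real_depth[valid] - virtual_depth[valid]
    return float(np.mean(diff ** 2))

def best_pose(real_depth, candidate_poses, render_virtual_depth):
    """render_virtual_depth(pose) -> virtual Z-buffer for that liver pose."""
    return min(candidate_poses,
               key=lambda pose: matching_error(real_depth,
                                               render_virtual_depth(pose)))
```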
Backmatter
Metadata
Title
Human-Computer Interaction. Interaction Techniques and Novel Applications
Editor
Masaaki Kurosu
Copyright Year
2021
Electronic ISBN
978-3-030-78465-2
Print ISBN
978-3-030-78464-5
DOI
https://doi.org/10.1007/978-3-030-78465-2