
2021 | Book

HCI International 2021 - Late Breaking Posters

23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings, Part I


About this book

This two-volume set CCIS 1498 and CCIS 1499 contains the late-breaking posters presented during the 23rd International Conference on Human-Computer Interaction, HCII 2021, which was held virtually in July 2021.

A total of 1276 papers and 241 posters included in the 39 HCII 2021 proceedings volumes were carefully reviewed and selected from 5222 submissions. Additionally, 174 papers and 146 posters are included in the volumes of the proceedings published after the conference as “Late Breaking Work” (papers and posters).

The posters presented in these two volumes are organized in topical sections as follows: HCI Theory and Practice; UX Design and Research in Intelligent Environments; Interaction with Robots, Chatbots, and Agents; Virtual, Augmented, and Mixed Reality; Games and Gamification; HCI in Mobility, Transport and Aviation; Design for All and Assistive Technologies; Physiology, Affect and Cognition; HCI for Health and Wellbeing; HCI in Learning, Teaching, and Education; Culture and Computing; Social Computing; Design Case Studies; User Experience Studies.

Table of Contents


HCI Theory and Practice

For a New Protocol to Promote Empathy Towards Users of Communication Technologies

We propose an experimental protocol to promote empathy towards non-verbal people, based on training in communication with an eye-tracking device. Our framework includes an eye-tracker, a communication interface, and a questionnaire. The questionnaire is applied before and after the intervention, assessing participants' empathy levels at both stages of the experiment, and it extends a validated questionnaire (QCAE) for measuring empathy. Our results show that, while empathy levels seem to decrease in both the control and test groups, the decrease in the test group is not as significant as in the control group. The statistical power to distinguish between the scores of the two groups is 75%. While the QCAE and our extended questionnaire show a strong correlation (R² = 0.95), our extended questionnaire is more sensitive in distinguishing between the two groups than the standard QCAE, for which we obtain a power of 63%. Finally, we discuss limitations and future directions, such as extending the study to a larger sample and applying it in different school or working contexts.

Samip Bhurtel, Pedro G. Lind, Gustavo B. Moreno e Mello
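
The correlation reported in the abstract above can be sketched as follows; the paired scores here are made-up illustrative values (the paper's R² = 0.95 comes from its real questionnaire data), and the helper is a plain Pearson correlation.

```python
# Minimal sketch of comparing two empathy questionnaires via R^2.
# The score lists are hypothetical stand-ins for real participant data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired totals: standard QCAE vs. the extended questionnaire.
qcae     = [52, 61, 58, 70, 66, 49, 75, 63]
extended = [55, 64, 60, 74, 69, 50, 78, 66]

r = pearson_r(qcae, extended)
r_squared = r * r  # near 1.0 for these strongly related scores
```

A high R² between the two instruments, as in the study, indicates the extension preserves what the validated QCAE measures while adding sensitivity.
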
Unidentified Users of Design Documentation

When working with system development and design, one is urged to write design documentation. The design document should be constructed in a specific way and should include all necessary information. However, the users of the design document are not stated and seem to be secondary. The aim of this study is therefore to investigate who those users are. A literature review was conducted to answer the question. The literature showed that design documentation is a tool for communicating design and that it should include relatable information. The essential role of design documents within software projects was also discussed, and hence they are included in this study. The literature review revealed that the users are the system developers and the individuals involved in the production of the software. To investigate this further, a hypothesis was formed and an observation was then conducted, in which videos of system developers were analysed. The observation showed that system developers do not verbalize the importance of design documentation. Secondly, the study showed how unidentified the users are and that the discussion of who they are is limited. Furthermore, design documentation is a valuable tool for communication amongst designers, but if the documentation is not read, it is arguably a waste of valuable time. The results presented in this study therefore indicate that a definition of who the user of design documentation is, is required.

Agnes Cadier
Common Interactive Style Guide for Designers and Developers Across Projects

Advanced user interfaces continue to improve through better design, which in turn translates to more effective development and end use. The intent of this work is to provide design guidelines that can be used across all new applications. An interactive prototype toolkit called the User Interface Prototyping Toolkit (UIPT) was developed to allow sponsors, customers, end users, designers, engineers, and software developers to work together in an iterative fashion to achieve a tailored but guided user interface for the application of interest. Modern styles were applied with a consistent and common look and feel while adhering to solid principles of design, workflow, tasks, layout, coloring, size, animation, and user cues. The design required a general, repeatable structure for designers and developers, with the intent of allowing each new project to understand the overall design and incorporate features and functionality in accordance with style-guide principles. UIPT integrates multiple domains within the same software application, with the ability to efficiently transition an end user’s role from one display presentation to another. To date, several projects and user roles have been integrated into the UIPT. In support of this design philosophy, an interactive style guide is in continuous development to apply a common, reusable design across the multiple projects that utilize the UIPT-based interface. Both the UIPT application and the interactive style guide were built with Unity, a 2D and 3D game engine. Styles for size, color, layout, and other design factors are laid out in the style guide itself, which can be run to visualize and examine interactions. This is advantageous for new and experienced software developers because the guide itself contains the code and styles necessary to implement the various components of the user interface while supporting design consistency with reusable code for rapid deployment.

Bryan Croft, Jeffrey D. Clarkson, Eric Voncolln, Mike Nithaworn, Seana Rothman, Odalis Felix
(DT)2-Box – A Multi-sensory Approach to Support Design Thinking Teams

Creative innovations are of immense importance for companies in the modern working world as well as in research. Design thinking, as a solution approach for complex problems, has become an indispensable part of innovation development and is used in many industries for a wide variety of fields and technologies. However, the documentation of the process in such projects is often neglected despite its great importance for the transfer of information and the reflection on the process. As a result, errors occur in the implementation of the developed ideas, or further improvements to the process cannot take place. Software tools that support documentation are therefore desirable but so far exist only in small numbers. Likewise, no known software-based approaches exist so far for the joint reflection of Design Thinking teams. This paper therefore iteratively develops an IoT tool for supporting documentation in Design Thinking projects within the framework of a design-science investigation; the tool also offers assistance in method selection as well as in the joint reflection on the work carried out, and is particularly easy to use due to an alternative form of operation. The developed solution is evaluated with the support of experts from innovation research and tested for its practical suitability, in order to answer the question of to what extent software can support documentation, reflection, and method selection in Design Thinking projects. At the same time, it is shown how video recordings can be used to demonstrate the developed solutions in design science research digitally, even in times of contact restrictions.

Julien Hofer, Markus Watermeyer
The Ethic of “CODE”—To Pro Mortalism and Antisurvivalism from Antinatalism

Antinatalism is a philosophy that assigns a negative value to being born into this world. It holds that parents should not give birth, because life is suffering. In particular, David Benatar does not deny the continuation of life, but rather grounds antinatalism in the moral principle that life should not harm others. From this conclusion derive pro-mortalism and anti-survivalism. Pro-mortalism means that, if it is better not to be born, it is better not to continue existing after birth, and it admits suicide; I call the philosophy that admits homicide anti-survivalism. In this article, we take the antinatalism advocated by David Benatar as a premise that we must think about the value of our lives while we continue to produce and consume until new technology becomes available. The question of whether such existence is ethically justified is explained in relation to three arguments concerning the convergence of the Anthropocene: the first is the extinction of human beings, the second is the continuation of life by machines, and the last is the continuation of a quiet life. Finally, we consider the implications of endless antinatalism with respect to spiritual uploads and the metaphysics of personal self-identity around death, with a focus on the human spirit in eusociality.

Sachio Horie
Theory and Practice in UX Design

Identification of Discrepancies in the Development Process of User-Oriented HMI

In many production settings, work processes are becoming increasingly complex and are controlled, monitored, and analysed with the help of computers running appropriate software. The task of user-centred design is to present this wealth of information to the user in such a way that the relevant facts can be quickly grasped and analysed in order to react accordingly. A customised interface can increase effectiveness, employee satisfaction, and error prevention. Legislation has also responded to this increasing relevance by issuing the standard on human-centred design of interactive systems [1]. In practice, problems often arise in the implementation of theoretical concepts that could not be foreseen beforehand. For the successful completion of a project, these changes and events should be reacted to and the processes adapted if necessary. In this paper, the possible problems and deviations in the implementation of DIN EN ISO 9241-210 are pointed out. For this purpose, suitable methods for obtaining information in theory and practice are illustrated for each phase. It is also shown that the entire design process can develop a momentum of its own, as external influences and new information often lead to an adaptation and readjustment of the selected methods.

Svenja Knothe, Thomas Hofmann, Christian Blessmann
Using Verbatims as a Basis for Building a Customer Journey Map: A Case Study

The Customer Journey Map (CJM) is currently a widely used canvas in UX (User Experience) practice and design processes. Although it is widely discussed in both the academic and industrial domains, practitioners still have questions about how to model this diagram. Customer Journey Maps are typically generated from the Personas technique, which in turn is generally based on interviews or observations. On the other hand, many organizations employ a tool called NPS (Net Promoter Score), which generates both quantitative and qualitative data. As qualitative data, the tool obtains expressions from the customer about the service or product, called “verbatims”. These verbatims faithfully capture the events that took place when customers interacted with financial products, services, systems, or channels. In that sense, we present a case study where a different approach is employed, in collaborative sessions, to build a Customer Journey Map about customers and their user experience interacting with ATMs at a financial institution. By applying this approach, we could map the touchpoints by analyzing verbatims. In this way, verbatims could constitute a better source of information than interviews. The integration of verbatim analysis into the CJM process could scale effortlessly as the data gathered from customers grows, and it promotes the sharing of knowledge inside the organization and a culture of data-driven decision-making. In the end, we obtained crucial insights and pain points that could generate new opportunities, requirements, and even new projects for the channel development backlog. We shared the results with a multidisciplinary audience, who gave positive feedback and suggested applying this new approach to new initiatives and analyses.

Arturo Moquillaza, Fiorella Falconi, Joel Aguirre, Freddy Paz
Green Patterns of User Interface Design: A Guideline for Sustainable Design Practices

Despite aiming to lower carbon costs, the growing digital footprint is concerning because of the energy consumed by data centers strewn across the globe. The analysis and distribution of large quantities of data at these centers accounts significantly for the rapidly rising global energy usage. Over the last decade, demand for information services has increased manifold, giving rise to energy-intensive computation techniques such as Artificial Intelligence, Machine Learning, and Data Mining, to name a few. Surprisingly, a study shows otherwise: energy consumption at data centers rose by only 6% from 2010 to 2018 [1], due to the global push for sustainability and technological advancements in data management and cooling systems. This paper promotes the idea further by showing a new approach to efficient design practices, i.e. Green Patterns, in the software product development cycle. We correlated the design choices made in a User Interface (UI) with their contribution to energy cost, based on a study suggesting that a device's energy consumption is highly attributable to its display [2]. We measured the energy efficiency of various UI components using three factors: (a) page weight, (b) interactive time, and (c) sync time. In the second part of the study, we validated the proposed Green Patterns with users, so that usability is not compromised while reducing the digital carbon footprint.

Jitesh Nayak, Apurva Chandwadkar
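
The three-factor measurement described above can be sketched as a combined score. The component data, the normalisation, and the equal weighting below are illustrative assumptions, not the paper's actual methodology.

```python
# Hedged sketch: ranking UI components by an energy-cost score built from
# the abstract's three factors (page weight, interactive time, sync time).

components = {
    # name: (page_weight_kb, interactive_time_ms, sync_time_ms) -- made up
    "carousel":     (480,  900, 350),
    "static_image": (120,  150,  60),
    "video_banner": (2200, 1400, 700),
}

def energy_score(weight_kb, tti_ms, sync_ms, w=(1/3, 1/3, 1/3)):
    # Normalise each factor against the heaviest component so the two
    # scales (kB vs. ms) are comparable, then take a weighted sum.
    max_w = max(c[0] for c in components.values())
    max_t = max(c[1] for c in components.values())
    max_s = max(c[2] for c in components.values())
    return (w[0] * weight_kb / max_w
            + w[1] * tti_ms / max_t
            + w[2] * sync_ms / max_s)

# Lower score = "greener" component under this illustrative metric.
ranked = sorted(components, key=lambda n: energy_score(*components[n]))
```

A real Green Patterns study would measure these factors on device; the point here is only that the three factors reduce naturally to a comparable per-component score.
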
HCI Based Ethnography: A Possible Answer to Reduced Product Life

The reduced lifespan of Information Systems (IS) based products continues to trouble IS companies. We investigate whether there is evidence of short-term ethnographies in IS or Human-Computer Interaction (HCI) studies that include the researcher as a part of the concurrent engineering team. Our literature analysis reveals that organizations are making use of different forms of ethnography while addressing customer needs. We propose an HCI framework that can help researchers understand existing research focusing on shortened product life and, at the same time, appreciate the bridge between research and practice.

Maarif Sohail, Zehra Mohsin, Sehar Khaliq
Celebrating Design Thinking in Tech Education: The Data Science Education Case

Today, corporations are moving toward the adoption of Design Thinking techniques to develop products and services, putting the consumer at the heart of the development process. Tim Brown, president and CEO of IDEO, defines design thinking as “a human-centered approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success”. The application of design thinking has been witnessed as the road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to re-thinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure a “wow” effect for consumers. The ACM Task Force on Data Science programs states that “Data Scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability”. However, this definition hides the user behind the machine who works on data preparation, algorithm selection, and model interpretation. Thus, Data Science programs should include design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing new ways of framing computational thinking. In this poster, we describe the motivation behind injecting DT into Data Science programs, an example course, its learning objectives, and teaching modules.

Samar I. Swaid, Taima Z. Suid
Social Innovation and Design — Prototyping in the NICE2035 Future Living Labs

“Horizon 2020”, initiated by the European Union, advocates design-driven innovation. It is generally believed that design is an essential impetus for innovation, and naturally design methods and tools are applied in the field of innovation. One such field is social innovation, which refers to new ideas and solutions developed to cater to social needs. Evidence has shown that social innovation can be fully demonstrated through design, especially in systems thinking, prototype design, and visualization. However, people express concerns about the limitations of design in this field. In this paper, having discussed a specific case, the NICE2035 Future Living Lab, the author proposes innovative design methods and concepts for social innovation, with an emphasis on prototype design and infrastructure. Different from project design, the social innovation methods, implementation activities, and social connection network built here aim to help stakeholders establish long-term relationships and obtain design opportunities. Moreover, it is stressed that prototypes, as a design method and tool, should be open and displayed to more people, in order to spread knowledge of a sustainable lifestyle.

Jing Wang
On the Life Aesthetics of Packaging Design in the Context of Digital Economy

All the beautiful, satisfying things in life have their own internal aesthetic mechanism. The digital economy promotes a new pattern of integration and transformation between industrial digitalization and digital industrialization. In the digital economy, the life aesthetics of the integration of science and art advocates judging and optimizing the things in life through the organic unity of science and aesthetics. First, through literature reading and the collection and analysis of data on creative packaging works and smart packaging cases, we explore the differences between smart packaging and traditional packaging and summarize an innovative path for packaging design under the new economic model. Second, the semantic differential method is used to integrate and filter the public's emotional vocabulary in the new economy and to enhance the public's sense of experience; analyzing the process of beauty in cultural and creative packaging design from the perspective of life aesthetics, this paper discusses the integration of functional beauty and formal beauty. From the perspectives of life aesthetics and technology aesthetics, this paper examines the innovative design of all elements of packaging, making it possible to reflect on the connotation and explicit innovation of life aesthetics in packaging design against the background of the digital economy, so as to achieve the original intention and vision of making people's lives better. The theory and method are shown to be accurate and applicable by taking the packaging design of time-honored cakes as an example.

Yifei Zhu, Wei Yu

UX Design and Research in Intelligent Environments

Lego®-like Bricks to Go from the Real to the Virtual World

In this work, we prototype a system that allows Lego®-like bricks to be used physically to create a model in the real world that is then mapped and recreated in an existing Lego CAD software. This allows users to build their designs tactilely in the real world and have these designs translated into the virtual world automatically. The key benefit is that bricks are, naturally, a tactile building tool, but there is a desire to have virtual representations of brick-based models for all sorts of reasons, such as designing and creating instruction booklets, creating models that can be incorporated into virtual worlds (such as games), and simply sharing and archiving a design beyond the limits of a physical model. Our system shows that an embedded system that includes per-brick computing intelligence is a viable method to achieve the simple goal of translating a real brick model into a virtual model.

Alejandro Cabrerizo, Will Zeurcher, Thomas Wright, Peter Jamieson
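
The real-to-virtual mapping step can be sketched as follows: each physical brick reports its identity, size, and grid position, and the set is serialised into a form a CAD importer could consume. The record format and field names are assumptions for illustration, not the paper's actual protocol or the LDraw format real tools use.

```python
# Illustrative sketch: serialising bricks reported by embedded per-brick
# intelligence into a simple line-based text format, bottom layer first.

from dataclasses import dataclass

@dataclass(frozen=True)
class Brick:
    brick_id: int
    width: int    # studs
    depth: int    # studs
    x: int        # grid position reported by the brick's embedded MCU
    y: int
    z: int        # layer (0 = baseplate level)

def serialise(bricks):
    """One line per brick, sorted bottom-up so a CAD tool can rebuild layers in order."""
    ordered = sorted(bricks, key=lambda b: (b.z, b.y, b.x))
    return "\n".join(
        f"BRICK {b.brick_id} {b.width}x{b.depth} @ ({b.x},{b.y},{b.z})"
        for b in ordered
    )

model = [
    Brick(2, 2, 4, 0, 0, 1),   # second layer
    Brick(1, 2, 4, 0, 0, 0),   # baseplate layer
]
text = serialise(model)
```

A production pipeline would emit a real CAD format such as LDraw; this sketch only illustrates the per-brick record and the layer-ordered serialisation.
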
Systematic Literature Review of Nuclear Safety Systems in Small Modular Reactors

Safety in nuclear power plants is an important area in the realm of engineering safety [1]. Small Modular Reactors (SMRs) are an emerging form of nuclear technology with widespread application, notably in austere or remote communities. SMRs are defined as power plants that produce 300 MWe or less [2], designed with modular fabrication technology for ease of onsite construction. Historically, nuclear power plants have maximized large economies of scale and been constructed with ample resources for safety regulation concerns, such as access to reliable water tables. SMRs will be deployed to remote locations without the resources normally afforded to a large power plant, and so the safety systems of these new designs must be innovative and still be able to meet the stringent safety specifications set forth by regulatory bodies [3]. This is a very important aspect of SMR design and is critical for licensing and production. This paper outlines a systematic literature review of scientific papers concerned with nuclear safety systems of SMRs. To further expound upon this topic, literature analyses were completed using keyword searches for “nuclear safety systems” and “small modular reactor,” and then evaluating trends between articles. The databases used for this research were Google Scholar, Scopus, Web of Science, and Mendeley. The analyses were completed using the following literature review software: VOSViewer, Publish or Perish, MAXQDA and Vincinitas. The results are presented in the form of trend analyses, keyword cluster analysis, co-author cluster analysis, co-citation analysis, word clouds, and emergence analyses. The results show a strong correlation between the study of this material and the rising interest in green energy, as well as the particular safety systems necessary for the development of SMRs.

Tucker Densmore, Vincent G. Duffy
Bio-Spatial Study in the Urban Context: User Experience Analysis from New York, Preliminary Neurophysiological Analysis from Kuala Lumpur and Nairobi

Multimer is a new system that measures multimodal biosensor data to model how the built environment influences neurophysiological processes. This article presents participant feedback on user experience for a Multimer study in Manhattan south of Central Park, New York City, USA. The feedback was used to update and improve the entire Multimer system for use in other locales. This article also presents preliminary analyses for bio-spatial pilot studies in Kenya and Malaysia, conducted by Multimer's research team in partnership with the Sustainable Mobility Unit of the United Nations Human Settlements Programme (UN-Habitat). For all of these studies, participants used wearable sensors to record their electroencephalographic signals as they cycled or walked.

Arlene Ducao, Ilias Koen, Tania van Bergen, Yapah Berry, Scott Sheu, Tommy Mitchell, Landon Johnson
Speech Emotion Recognition Using Combined Multiple Pairwise Classifiers

In the current study, a novel approach for speech emotion recognition is proposed and evaluated. The proposed method is based on multiple pairwise classifiers for each emotion pair resulting in dimensionality and emotion ambiguity reduction. The method was evaluated using the state-of-the-art English IEMOCAP corpus and showed significantly higher accuracy compared to a conventional method.

Panikos Heracleous, Yasser Mohammad, Akio Yoneyama
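
The combination scheme named in the abstract above can be sketched as one binary classifier per emotion pair, combined by majority voting over all pairs. The per-pair decision functions below are stand-ins for models trained on IEMOCAP acoustic features; the evidence-score input format is an illustrative assumption.

```python
# Hedged sketch of combined pairwise (one-vs-one) emotion classification.

from itertools import combinations
from collections import Counter

EMOTIONS = ["angry", "happy", "sad", "neutral"]

def make_pair_classifier(a, b):
    # Stand-in for a trained binary model: compares per-emotion evidence
    # scores instead of running a real acoustic model.
    def decide(features):
        return a if features[a] >= features[b] else b
    return decide

pair_classifiers = [make_pair_classifier(a, b)
                    for a, b in combinations(EMOTIONS, 2)]

def predict(features):
    # Each pairwise classifier casts one vote; the emotion with the most
    # votes wins (6 classifiers for 4 emotions).
    votes = Counter(clf(features) for clf in pair_classifiers)
    return votes.most_common(1)[0][0]

# An utterance whose evidence leans towards "sad".
pred = predict({"angry": 0.2, "happy": 0.1, "sad": 0.9, "neutral": 0.4})
```

Restricting each classifier to a single emotion pair is what yields the dimensionality and ambiguity reduction the abstract describes: each model only has to separate two classes.
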
QFami: An Integrated Environment for Recommending Answerers on Campus

With the development of the Internet, people increasingly use search engines to obtain answers to their questions and quench their thirst for knowledge. Although search engines often provide some information quickly, they can fail to provide what people really need. Despite the ubiquitous availability of search engines, people still ask questions of other people to seek valuable local information and enrich their social experiences. Existing social applications such as Stack Overflow [1] and other Community Question Answering systems allow people to exchange knowledge efficiently. However, they mainly consider expertise when recommending answerers, which is not sufficient for supporting Q&A activities in real-world environments. In this work, we propose QFami, a novel integrated Q&A environment for physically based Q&A scenarios on campus. QFami incorporates interests, expertise, proximity, locations, and other contextual factors that can be inferred by using various sensors on mobile phones.

Xiangyuan Hu, Shin’ichi Konomi
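
One way to combine expertise with proximity, as the abstract describes, is a weighted score per candidate answerer. The candidate data, the exponential distance decay, and the weights below are illustrative assumptions, not QFami's actual model.

```python
# Hedged sketch of an answerer-ranking score combining topic expertise
# with physical proximity inferred from phone sensors.

import math

candidates = {
    # name: (expertise on the question's topic in [0, 1], distance in metres)
    "alice": (0.9, 400.0),
    "bob":   (0.6,  30.0),
    "carol": (0.3,  10.0),
}

def score(expertise, distance_m, alpha=0.5, scale_m=200.0):
    # Proximity decays exponentially with distance; alpha trades off
    # expertise against proximity.
    proximity = math.exp(-distance_m / scale_m)
    return alpha * expertise + (1 - alpha) * proximity

ranking = sorted(candidates,
                 key=lambda n: score(*candidates[n]),
                 reverse=True)
```

With these numbers the nearby moderately expert candidate outranks both the distant expert and the very close novice, which is the kind of trade-off a purely expertise-based recommender cannot make.
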
DoAR: An Augmented Reality Based Door Security Prototype Application

The advent of a massive technological paradigm shift and the subsequent interest in augmented reality (AR) paved the way for designers to design, develop, and deploy new use cases for AR in the service industry. In this paper, we investigate the use of augmented reality as an option for providing entry into secured buildings by comparing it against traditional radio-frequency identification (RFID) cards. 51 participants from various backgrounds were recruited to enter a secured building using an AR-based app and an RFID card. The results indicate that the AR-based app had greater acceptance among participants in terms of security and ease of use. 40 participants thought the AR-based app was more secure than RFID cards, and 34 participants felt the application was easier to use compared to RFID cards. 32 participants indicated that they would prefer to use the AR-based app on a daily basis. The results came up short in favor of RFID-based systems only in terms of faster access, where RFID systems outperformed AR-based systems by a margin of 3 persons. The results indicate that the system is a suitable option, but further research is needed to deploy it in real-world settings.

Muhammad Usama Islam, Beenish Chaudhry
Machine Learning-Based Font Recognition and Substitution Method for Electronic Publishing

This study proposes a font recognition algorithm based on a deep convolutional neural network and a font substitution algorithm based on texture and grayscale features. The experiments show that the proposed font recognition method can effectively extract font features with a high recognition rate, without prior knowledge of the text content and with good versatility. The substitution effect of the proposed font replacement method better satisfies the subjective visual perception of the human eye and is easy to extend. The research results can be used to improve publication quality, ensure the best presentation when content appears on different platforms, facilitate font retrieval, and effectively protect font copyright.

Ning Li, Huan Zhao, Xuhong Liu
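
The substitution side of the abstract above amounts to a nearest-neighbour search in a font feature space: pick the available font whose feature vector is closest to the missing font's. The four-bin "grayscale histograms" and the L1 distance here are illustrative stand-ins for the paper's texture and grayscale features.

```python
# Minimal sketch of feature-based font substitution.
# Feature vectors are hypothetical normalised grayscale histograms.

def l1_distance(a, b):
    """Sum of absolute per-bin differences between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

available_fonts = {
    "serif_a": [0.50, 0.25, 0.15, 0.10],
    "sans_b":  [0.30, 0.30, 0.25, 0.15],
    "mono_c":  [0.20, 0.20, 0.30, 0.30],
}

missing_font_features = [0.48, 0.27, 0.14, 0.11]

# Substitute = available font with the smallest feature distance.
substitute = min(available_fonts,
                 key=lambda name: l1_distance(available_fonts[name],
                                              missing_font_features))
```

In a real e-publishing pipeline the features would be extracted from rendered glyph images, and the distance tuned so the chosen substitute preserves the page's visual texture.
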
Collaborative Explainable AI: A Non-algorithmic Approach to Generating Explanations of AI

An important subdomain in research on Human-Artificial Intelligence interaction is Explainable AI (XAI). XAI attempts to improve human understanding and trust in machine intelligence and automation by providing users with visualizations and other information that explain decisions, actions, and plans. XAI approaches have primarily used algorithmic approaches designed to generate explanations automatically, but an alternate route that may augment these systems is to take advantage of the fact that user understanding of AI systems often develops through self-explanation [1]. Users engage in self-explanation to piece together different sources of information and develop a clearer understanding, but these self-explanations are often lost if not shared with others. We demonstrate how this self-explanation can be shared collaboratively via a system we call collaborative XAI (CXAI), akin to a Social Q&A platform [2] such as StackExchange. We describe the system and evaluate how it supports various kinds of explanations.

Tauseef Ibne Mamun, Robert R. Hoffman, Shane T. Mueller
Toothbrush Force Measurement and 3D Visualization

In recent years, a variety of oral care products have been developed. In this research, we propose a toothbrushing system that can detect brushing force with a force-sensitive sensor. The results of the experiments show that the membrane force sensors attached to the side of the brush have a higher force detection rate than the acceleration sensor.

Kasumi Sakuma, Haicui Li, Lei Jing
A Study on the Creativity of Algorithm Art Using Artificial Intelligence

Algorithmic art, otherwise known as generative art, is gaining popularity as a new medium for aesthetic creation using computer-generated automation. It is also seen as a potentially powerful platform for enhancing the understanding of AI. Algorithmic art can be interpreted in a straightforward manner: the translation of naturalistic elements within our society through algorithms is something that speaks truths to us and makes us question things. Not only does generative art reflect specific naturalistic elements of society and our environment, it is also a method of understanding how AI systems function and how they process visual perceptions of our world. In this paper, I show that algorithmic art has existed for much longer than most people know. Algorithmic art has proven that the process of creating art and perceiving reality with machines is extraordinarily parallel to the creation of particular historical forms of contemporary art and to human perception.

Ryan Seo
An Approach to Monitoring and Guiding Manual Assembly Processes

With the dawn of Industry 4.0, companies are constantly looking for ways to digitize their workflows in order to quantify, monitor, and improve work in industrial settings. While some processes, such as those using computer numerical control machinery, are either easy to embed sensors into or already have sensors embedded that provide information, there are still core processes in industrial settings from which there is no easy way to retrieve data. One such process is the manual assembly process. In addition to the difficulty of retrieving data from assembly processes, assemblies that rely on human operation with simple tools rather than machining centers are harder to immerse in Industry 4.0 practices due to their primarily offline nature. To address this issue, we propose an approach, and describe the implementation of the corresponding system, that is capable of providing information to assemblers based on data received during an active process. Specifically, the system informs the assembler of the next steps in the assembly process based on what the platform determines has already been done. To accomplish this, a convolutional neural network is used to analyze images from a camera overseeing the physical assembly process. In this way, there is a live digital record of the assembly process taking place, as well as some, albeit basic, interaction between the assembler and the overarching system monitoring the process.

Benjamin Standfield, Denis Gračanin
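
The guidance step described above reduces to a lookup: once the vision model classifies the current assembly state, a mapping determines the instruction to show next. The step names and the assumed linear process below are illustrative, not the paper's actual assembly.

```python
# Sketch of mapping a CNN-detected assembly state to the next instruction.
# A real system would get `detected_state` from the camera's classifier.

INSTRUCTIONS = {
    "base_plate_placed": "Attach the bracket to the base plate.",
    "bracket_attached":  "Insert and tighten the four screws.",
    "screws_inserted":   "Fit the cover onto the assembly.",
    "cover_fitted":      "Assembly complete.",
}

def next_instruction(detected_state):
    """Map the detected state to the instruction for the next step."""
    if detected_state not in INSTRUCTIONS:
        return "State not recognised; please check the camera view."
    return INSTRUCTIONS[detected_state]

msg = next_instruction("bracket_attached")
```

Keeping the guidance logic as an explicit state-to-instruction table separates the hard perception problem (the CNN) from the easy process-knowledge problem, and the sequence of looked-up states doubles as the live digital record the abstract mentions.
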
Deep Learning Methods as a Detection Tools for Forest Fire Decision Making Process Fire Prevention in Indonesia

This research examines the collaboration between agencies in policy-making based on hotspot monitoring from satellites. Valid data regarding the number of hotspots from satellites is needed in decision-making because it provides information used to control forest and land fires in Indonesia. For instance, the Ministry of Forestry uses data from the NOAA-18 satellite for analysis, while the BMKG uses data from Aqua/Terra. However, the data generated by each satellite differ in the number of hotspots. Therefore, this research aims to determine the collaboration between the Ministry of Forestry and the BMKG in the use of satellite data by decision-makers to determine disaster alert status. The research uses a qualitative approach to analyze secondary data from two popular media sources, collected using the NVivo 12 Plus application. The results showed that the agencies involved in fire prevention lack collaboration due to institutional designs that lead to a lack of communication and unclear roles for each institution during the decision-making process.

Dia Meirina Suri, Achmad Nurmandi
Intelligent Music Lamp Design Based on Arduino

With the rapid development of science and technology, people’s lives are becoming more and more intelligent. As a necessity of home life, desk lamps have long been a focus of smart home research. Current research on smart lamps mostly concentrates on automatically adjusting the light to environmental changes, improving the energy efficiency of desk lamps, and enriching their auxiliary functions; research on home entertainment lamps is lacking. This article proposes a design scheme for an intelligent music lamp based on the Arduino UNO MCU. In addition to the colored lamp itself, the device integrates a music player and a Bluetooth speaker that can be used to play music. We also designed an Android app, Yin. Users can use the app on their mobile phones to change the color of the light and control music playback. Practice has shown that this design is rich in functions, convenient and interesting, combines technical and artistic features, and can be widely used in daily life.

Yuanlu Wang, Xiaofang Li
Exploring Drag-and-Drop User Interfaces for Programming Drone Flights

Drones are commonly seen in today’s society; however, applications for drone swarms are still relatively new and emerging. As these applications grow in popularity, it becomes important to research new methods of controlling drone swarms that provide a better user experience for pilots and safer flights. In this paper, we present the design of a drag-and-drop user interface that allows both novice and experienced pilots to control drone swarms. Among its advantages, this type of interface can be easy to use and provide flexibility across flight modes and platforms, constant visual feedback, and accurate control. The presented user-interface design is based on the results of a focus group study conducted with seven participants divided into three sessions. Discussions and common characteristics found in participants’ drawings during the focus group sessions informed the design of the high-fidelity wireframe presented in this work. Finally, the benefits a drag-and-drop interface provides to human-drone interaction and future research directions are discussed.

Joshua Webb, Dante Tezza
Research on the Logical Levels and Roles of Human Interaction with Intelligent Creatures Under the Trend of Human-Computer Intelligence Integration

At present, the combination of computers and humans in the field of artificial intelligence is growing ever closer, not only at the physical level but also at the psychological level. Such a combination of computer and human resembles the birth of a new life form, containing both the characteristics of natural life and those of non-life. In this paper, the interactive roles generated when completing tasks under “human-computer intelligent fusion” are divided into three categories: “intelligent creatures”, biological humans, and smart devices. These three types of roles alternately produce three different interactive relationships: between “intelligent creatures” and “intelligent creatures”, between biological humans and “intelligent creatures”, and between biological humans and smart devices. After dividing the interactive relationships along the horizontal dimension, this article uses Donald Norman’s “Three Levels of Design” to divide the interactions vertically, and then uses his “Seven Stages of Action” to refine the design elements required at each of the three interaction levels. Norman’s “Three Levels of Design” and “Seven Stages of Action” have been verified to help guide interaction design and human action design, and provide a useful framework for design activities. By expounding the interaction characteristics of each role and level, we discuss the effectiveness of this new form of interaction in more detail, providing a usability reference for the coming era of human-machine intelligent integration.

Wei Yu, Xiaoju Wang
An AR-Enabled See-Through System for Vision Blind Areas

Manual assembly accounts for a high proportion of industrial work. However, in many industrial scenarios, manual assembly in Vision Blind Areas (VBAs) is time-consuming and challenging due to the lack of necessary visual information. This study presented a see-through Augmented Reality (AR) system to solve these problems during manual assembly in vision blind areas. The system enabled users to see the inner components of VBAs through the surface of mechanical products. The human hand and the mechanical part in a VBA were tracked and rendered in an AR HMD. We developed a prototype system and conducted a user study to evaluate the system usability and users’ performance and workload. The results indicated that this system was well integrated and easy to use. Moreover, participants working with this system had a lower workload and improved performance.

Shaohua Zhang, Weiping He, Shuxia Wang, Shuo Feng, Zhenghang Hou, Yupeng Hu
IMGDS - Intelligent Multi-dimensional Generative Design System for Industrial SCADA

Through the design and implementation of IMGDS in SCADA, we are able to create a more user-friendly information display system for industry. We offer a comprehensive, high-quality set of pre-designed industrial components to improve conventional industrial design. The system can switch intelligently between a monitoring mode and a key-information mode. Meanwhile, to meet industrial needs, a multi-dimensional display with a 2D/3D switching mode is developed. A theme-switching mode is also provided to serve branding needs, nighttime operation, and other purposes.

Wei Zhao, Ruihang Tian, Nan Zhao, Jiachun Du, Hanyue Duan

Interaction with Robots, Chatbots, and Agents

Storytelling Robots for Training of Emotion Recognition in Children with Autism: Opinions from Experts

Social robots are being increasingly used in the therapy of children with autism spectrum disorder (ASD). However, robot interaction is often designed by HRI researchers who are not fully familiar with the cognitive challenges faced by children with autism. This study aimed to validate a social robot interaction designed for emotion recognition training for children with autism by seeking opinions from ASD educators and experts. A total of 26 participants (13 ASD experts and 13 non-experts) filled out a survey in which they watched videos of six emotional gestures performed by a NAO robot. The emotional gestures were prepared with and without situational context presented in the form of storytelling by the robot. Participants first identified the robot’s emotion in each gesture and then evaluated the feasibility of gesture recognition for children with ASD. The results showed that, for almost all emotions, the addition of context through storytelling significantly increased the feasibility of gesture recognition: gestures were considered not feasible for children with ASD when storytelling was missing, and in general, experts gave significantly lower feasibility scores to robot gestures than non-experts did. Our findings suggest that the creation of context plays an important role in the design of robot gestures and can make the training of social skills in children with ASD more effective. Additionally, the observed difference between the two groups’ evaluations suggests that social robot interventions should be validated by professionals who are knowledgeable about the social and cognitive difficulties experienced by these children.

Maryam Alimardani, Lisa Neve, Anouk Verkaart
A Study on the Usability Evaluation of Teaching Pendant for Manipulator of Collaborative Robot

Nowadays, a collaborative robot, or cobot, is a robot that can learn multiple tasks and assist alongside human workers. The teaching pendant that controls the collaborative robot therefore plays a key role. However, there is a gap between beginners and skilled engineers in operating a cobot and its teaching pendant proficiently: programming for teaching requires line-by-line coding knowledge, and it takes time to understand the instructions of existing teaching pendants. A teaching pendant controlling a cobot requires intuitive interfaces, not only for ease of use but also for modifying existing execution programs. However, there is no standard or guideline for an intuitive teaching pendant, and the usability of current teaching pendant interfaces is rather poor. This paper presents the results of a usability evaluation covering usability evaluation items and measurement indicators, based on performing several teaching tasks. We recruited 30 participants consisting of beginners, intermediates, and advanced users (F = 10, M = 20, 34.75 ± 2.50 years); half of the beginners were engineering majors and the rest non-engineering majors. In the experiment, an Indy 7 robot (Neuromeka) and its teaching pendant, Conty, were used, and each participant performed 30 min of basic and advanced tasks. Before the experiment, the coordinator explained the experimental method to the participants; after the experiment and a 10-min break, an interview, a questionnaire, and the System Usability Scale (SUS) were administered. The average SUS score was 57.86, which indicates that the teaching pendant is difficult to use and overly complicated. Analysis of the results showed that the functions of the teaching pendant were complex and that participants needed help. In future work, we plan to propose an improved teaching pendant reflecting these usability results.
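The SUS figure reported in this abstract (57.86) comes from the standard, publicly documented System Usability Scale scoring procedure: ten items rated 1–5, with odd items contributing (rating − 1) and even items (5 − rating), the sum scaled by 2.5 to a 0–100 range. A minimal sketch of that standard computation:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring: 10 items rated 1-5.
    Odd-numbered items contribute (rating - 1), even-numbered items
    contribute (5 - rating); the sum is scaled by 2.5 to 0-100."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

Scores around 68 are conventionally treated as average usability, which is why the 57.86 reported here reads as below par.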

Jeyoun Dong, Wookyong Kwon, Dongyeop Kang, Seung Woo Nam
The Design and Evaluation of a Chatbot for Human Resources

Technological innovations in artificial intelligence and machine learning enable business operators to engage with their customers 24/7 through chatbots. Many customers expect around-the-clock support, which puts a strain on human resources; augmenting human resources with a chatbot can reduce costs for an organization and increase customer satisfaction. This work presents the HR Chatbot: a chatbot that answers general questions about human resource topics (e.g., payroll, benefits) for a private university. The research involves a collaboration between computer scientists, user experience researchers, and human resource administration. This work addresses two research questions: What are employees at a private university looking for from a chatbot for human resources? And what are the appropriate methods to evaluate and measure the success of a chatbot for human resources? The HR Chatbot uses IBM Watson Assistant services, and an initial prototype was designed from a document of 31 frequently asked questions. Three rounds of user testing were conducted with employees of the university. The initial tests revealed that the chatbot was perceived as useful, but many users were dissatisfied with the responses, specifically the lack of responses. Errors in the chatbot were classified into categories, the most common being that the question was outside the chatbot’s content scope. Data from the initial studies thus informed the scope of the chatbot: the number of unique questions grew to 157 and the total number of questions increased to 463. The HR Chatbot has 90% accuracy and an average System Usability Scale (SUS) score of 69.5, surpassing the benchmark score. Following the initial tests, the HR Chatbot was deployed on the human resources website. This work describes how a chatbot was created, evaluated, and deployed online. We hope that it inspires and informs others to explore similar use cases with chatbot technologies.

Jaimie Drozdal, Albert Chang, Will Fahey, Nikhilas Murthy, Lehar Mogilisetty, Jody Sunray, Curtis Powell, Hui Su
Relationship Between Eating and Chatting During Mealtimes with a Robot

Eating with someone makes mealtimes more enjoyable and enriches our lives. However, lifestyle changes and the current COVID-19 pandemic have forced many people to frequently eat alone. Communication robots can be good mealtime partners: people need not worry about matching their mealtime schedules with robots, and robots present no risk of disease transmission. Chatting is an important component of mealtime interaction. Thus, we developed a chatting system that can respond with natural timing and investigated the relationship between eating and talking when eating with a robot. We combined the strengths of speech-content recognition and volume recognition. Conversation systems based only on speech-content recognition suffer long response lags because they rely on complex technologies. By using volume recognition, which is faster than speech-content recognition, to detect human speech, we aimed to reduce this lag with filler responses, such as “I see,” issued before the speech-content recognition finished. Using this system, we conducted an experiment analyzing the relationship between utterances and eating behaviors from recorded videos and questionnaire answers of 25 participants. The results suggest that the recognition of picking-up and eating motions could support the recognition of human utterances but was not necessary for deciding when robots should start talking.
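The timing idea in this abstract — use cheap volume recognition to decide when to emit a filler while the slower speech-content recognizer is still working — can be sketched as a simple RMS threshold over audio frames. All numbers here (the threshold, the silence-frame count, the frame contents) are illustrative assumptions, not values from the paper.

```python
import math

VOLUME_THRESHOLD = 0.1  # assumed RMS level that counts as speech

def rms(frame):
    """Root-mean-square volume of one audio frame (list of samples)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def should_emit_filler(frames, silence_frames=3):
    """Return True once the speaker has fallen silent for
    `silence_frames` consecutive frames after speaking, i.e. the
    moment a fast filler like 'I see' can bridge the ASR lag."""
    spoke = False
    quiet = 0
    for frame in frames:
        if rms(frame) >= VOLUME_THRESHOLD:
            spoke = True
            quiet = 0
        elif spoke:
            quiet += 1
            if quiet >= silence_frames:
                return True
    return False
```

The design choice mirrors the abstract: volume tracking is orders of magnitude cheaper than full speech-content recognition, so the filler can be triggered while the recognizer is still transcribing.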

Ayaka Fujii, Kei Okada, Masayuki Inaba
When in Doubt, Agree with the Robot? Effects of Team Size and Agent Teammate Influence on Team Decision-Making in a Gambling Task

Human-machine teaming is expected to provide substantive benefits to team performance; however, introduction of machine agents will also impact teamwork. Agents are likely to exert substantial influence on team dynamics, even if they possess only limited abilities to engage in teamwork processes. This influence may be mitigated by team size and experience with the agent. The purpose of this experiment was to investigate the influence of an agent on team processes in a team consensus gambling task. Teams were either two or three humans and a machine agent. Participants completed fifty rounds of a gambling task, similar to the game roulette. In each round, team members entered their belief about what the next round outcome would be, a proposed wager, and how confident they were. The machine agent also made a suggestion regarding outcome and wager, but its accuracy was fairly low. The human team members then had to come to a consensus regarding outcome and wager. Overall, the agent exerted significant influence on team decision making, wagering, and confidence. Contrary to initial predictions, team size had only a modest effect, mostly on confidence ratings. Experience with the agent also did not have much effect on the agent’s influence, even as the team was able to observe that the agent’s accuracy was low. These results suggest that machine agents are likely to exert significant influence on team processes, even when they possess limited abilities to engage in teamwork.

Gregory J. Funke, Michael T. Tolston, Brent Miller, Margaret A. Bowers, August Capiola
Modeling Salesclerks’ Utterances in Bespoke Scenes and Evaluating Them Using a Communication Robot

A paradigm shift is taking place, from the era of common off-the-shelf products to that of personalized products. In this study, we developed a communication robot that could improve customers’ satisfaction in bespoke scenes, a sales method for personalized products. First, we extracted a model of the salesclerks’ utterances that would be useful for improving satisfaction in bespoke tailoring, modeling the utterances by their content. Next, we designed a bespoke origami task involving communication with a robot that operated on the salesclerks’ utterance model. We then analyzed how the robot’s utterances evoked customers’ emotions and improved satisfaction. As a result, we revealed that utterances that encouraged customers’ decisions improved customer satisfaction.

Fumiya Kobayashi, Masashi Sugimoto, Saizo Aoyagi, Michiya Yamamoto, Noriko Nagata
User Satisfaction with an AI-Enabled Customer Relationship Management Chatbot

Chatbots’ ability to carry out focused, result-oriented online conversations with human end-users impacts user experience and user satisfaction. Using chatbots as key identifiable examples utilized by Electronic Commerce (E-Commerce) firms in Customer Relationship Management (CRM), this study offers a user satisfaction model in the context of Artificial Intelligence (AI)-enabled CRM in E-Commerce. The model is based on Expectation Confirmation Theory (ECT) and Uncertainty Reduction Theory (URT) within the chatbot context. This model will allow us to investigate whether chatbots can provide both businesses and consumers the opportunity to complete the journey from normal through abnormal to the new normal in situations similar to the COVID-19 pandemic.

Maarif Sohail, Zehra Mohsin, Sehar Khaliq
Evaluation of a NUI Interface for an Explosives Deactivator Robotic Arm to Improve the User Experience

TEDAX agents perform explosives-handling tasks that require good assistance in handling EOD robotic arms. The interface used for handling explosive devices is relevant for explosive-disposal agents because it can ensure greater immersion and performance in their operations. Generally, robotic arms are manipulated with joysticks, keyboards, or buttons that are not very intuitive. NUI interfaces are intuitive and assist in robotic arm manipulation. In this study, we propose to verify the feasibility of a NUI interface for manipulating EOD robotic arms. The degree of assistance provided by the interfaces is compared using the NASA-TLX method with TEDAX agents of the UDEX-AQP (Unidad de Desactivación de Explosivos de Arequipa). The compared interfaces are a commercial MK-2 robot from Allen Vanguard and a NUI interface based on specular imitation. The tests show that the evaluated NUI interface can be applied in explosives-disposal interventions.

Denilson Vilcapaza Goyzueta, Joseph Guevara Mamani, Erasmo Sulla Espinoza, Elvis Supo Colquehuanca, Yuri Silva Vidal, Pablo Pari Pinto
Attitudes Towards Human-Robot Collaboration and the Impact of the COVID-19 Pandemic

It is expected that human-robot collaboration will increase in the future. Some people are already experiencing this in their working life, but others remain skeptical about it. The COVID-19 pandemic has brought new challenges to the world’s population and has a strong impact on our everyday working life. The question arises whether the perceived involvement in the current situation as well as the occupational field influence attitudes towards human-robot collaboration. Overall, 54 men, 45 women, and 1 non-binary person (N = 100) aged between 18 and 71 years (M = 29.87, SD = 14.00) participated in an exploratory online study. The results show that participants’ attitudes towards the use of collaborative robots in the three categories assembly, logistics, and cleaning were rather positive. Furthermore, assembly and logistics tasks were assessed as significantly more conceivable for human-robot collaboration than cleaning tasks. Interestingly, participants who were more concerned about the COVID-19 pandemic assessed the use of collaborative robots significantly more positively overall than other participants did. Attitude differences due to participants’ occupational fields did not reach the level of significance. In addition, participants described different functions in which they could imagine collaborative robots in the three categories assembly, logistics, and cleaning. The results of the presented exploratory study shall help to gain more insight into this important future field.

Verena Wagner-Hartl, Kevin Pohling, Marc Rössler, Simon Strobel, Simone Maag
Older Adults’ Voice Search through the Human-Engaged Computing Perspective

Human-Engaged Computing (HEC) is a framework that addresses “synergized interaction” sustaining both humans and computers in the right balance, a relationship that consciously honors human inner capabilities over device creativity. Due to the growing interest in and demand for voice search among older adults, it is critical to research how to engage older adults with voice search to improve their health and wellbeing. This paper presents two case studies to discuss approaches and thoughts on applying HEC to current voice search systems: in particular, how HEC can engage older adults in interaction with voice search systems and how we can measure older adults’ engagement with such systems.

Xiaojun (Jenny) Yuan, Xiangshi Ren

Virtual, Augmented, and Mixed Reality

Research on Projection Interaction Based on Gesture Recognition

With the continuous emergence and vigorous development of new technologies in the information age, human-computer interaction is constantly changing. People are increasingly connected with the virtual world, so it is necessary to study more natural ways of human-computer interaction. Projection interaction is one of the most important interaction methods. The purpose of this paper is to study how to overcome the misoperation problem caused by foreign objects (interferers) in a projection interaction environment (Fig. 1 c). The adopted method (named FDGR) uses a fingertip detection algorithm based on user habits in image processing together with a gesture recognition algorithm based on Leap Motion to achieve accurate and convenient natural human-computer interaction. Through user experiments, the experience of completing a fixed assembly task in the projection (P), gesture (G), and FDGR (PG) modes was compared, and the data were processed and analyzed. The experimental results showed that FDGR could effectively overcome the misoperation caused by foreign objects (interferers) and the non-intuitiveness of simple gestures, and effectively improved the user’s operating experience.

Zhiwei Cao, Weiping He, Shuxia Wang, Jie Zhang, Bingzhao Wei, Jianghong Li
The Effects of Social Proneness and Avatar Primes on Prosocial Behavior in Virtual and Real Worlds
Yu-chen Hsu, Siao-wei Huang, Hsuan-de Huang
VR-Based Interface Enabling Ad-Hoc Individualization of Information Layer Presentation

Graphical user interfaces created for scientific prototypes are often designed to support only a specific and well-defined use case. They typically use two-dimensional overlay buttons and panels in the operator’s view to cover the needed functionality. For potentially unpredictable and more complex tasks, such interfaces often fall short of the ability to scale properly with the larger amount of information that the user must process. Simply transferring this approach to more complex use cases likely introduces visual clutter and leads to unnecessarily complicated interface navigation that reduces accessibility and potentially overwhelms users. In this paper, we present a possible solution to this problem. In our proposed concept, information layers can be accessed and displayed by placing an augmentation glass in front of the virtual camera. Depending on the placement of the glass, the viewing area can cover only parts of the view or the entire scene, which also makes it possible to use multiple glasses side by side. Furthermore, augmentation glasses can be placed into the virtual environment for collaborative work. With this, our approach is flexible and can be adapted very quickly to changing demands.

Luka Jacke, Michael Maurus, Elsa Andrea Kirchner
Alleviate the Cybersickness in VR Teleoperation by Constructing the Reference Space in the Human-Machine Interface

The introduction of virtual reality into teleoperation systems can enhance the three-dimensional, immersive sense of visual feedback, but the serious cybersickness it causes urgently needs to be solved. Scholars have proposed many hardware- and software-based methods to alleviate cybersickness, but these increase the user’s mental burden and physical exertion to some extent. Inspired by the rest frame hypothesis (RFH), this research proposes a method to alleviate cybersickness by rendering a virtual reference space in the virtual environment. This method builds a three-dimensional reference space from rendered planes to help users establish, within the virtual environment, the stable grounded feeling of the real environment, thereby alleviating cybersickness. The experimental results show that rendering the reference space in a virtual teleoperation environment can significantly alleviate cybersickness. Specifically, the total cybersickness score (TS) of participants in the virtual environment with a reference space was significantly lower than in the virtual environment without one (p = 0.023*), a decrease of 9%. The SSQ-D score of participants in the virtual environment with a reference space was also significantly lower than without it (p < 0.001***), a reduction of 19.7%.
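The TS and SSQ-D scores reported in this abstract follow the standard Simulator Sickness Questionnaire scoring of Kennedy et al.: 16 symptoms rated 0–3 are summed into Nausea, Oculomotor, and Disorientation clusters and multiplied by fixed weights. A sketch of that standard computation (the example raw sums are illustrative, not data from the study):

```python
# Standard SSQ scoring weights (Kennedy et al., 1993). Each of the
# 16 symptoms is rated 0-3 and assigned to one or more of the
# Nausea (N), Oculomotor (O), and Disorientation (D) clusters.
SSQ_WEIGHTS = {"N": 9.54, "O": 7.58, "D": 13.92, "TS": 3.74}

def ssq_scores(n_raw, o_raw, d_raw):
    """Compute the weighted subscale scores and the total score (TS)
    from the raw cluster sums."""
    return {
        "N": n_raw * SSQ_WEIGHTS["N"],
        "O": o_raw * SSQ_WEIGHTS["O"],
        "D": d_raw * SSQ_WEIGHTS["D"],
        "TS": (n_raw + o_raw + d_raw) * SSQ_WEIGHTS["TS"],
    }
```

The SSQ-D score singled out in the abstract corresponds to the `"D"` entry here, the Disorientation subscale most associated with vection-driven cybersickness.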

Weiwei Jia, Xiaoling Li, Yueyang Shi, Shuai Zheng, Long Wang, Zhangyi Chen, Lixia Zhang
Translating Virtual Reality Research into Practice as a Way to Combat Misinformation: The DOVE Website

There are several barriers to research translation from academia to the broader HCI/UX community, specifically for the design of virtual reality applications. Because evidence-based VR research is inaccessible to industry practitioners, freely available blog-style media on unmoderated platforms like Medium are more accessible, leading to the spread of misinformation. The Design of Virtual Environments (DOVE) website attempts to address this challenge by offering peer-reviewed, unbiased VR research, translating it for the layperson, and opening it up to contribution, synthesis, and discussion through forums. This paper describes the initial user-centered design process for the DOVE website, which used informal expert interviews, competitive analysis, and heuristic review to redesign the site navigation, translation content, and incentivized forms for the submission of research. When completed, the DOVE website will aid the translation of AR/VR research into practice.

Chidinma U. Kalu, Stephen B. Gilbert, Jonathan W. Kelly, Melynda Hoover
Software Usability Evaluation for Augmented Reality Through User Tests

Augmented Reality (AR) is becoming increasingly prominent in the market and in society because it is a technology that provides new forms of interaction, and thus new experiences, to users. However, with the advancement of AR interfaces, methods for assessing the usability of traditional (2D) software need to adapt to remain effective when applied to software for 3D environments. This research presents an experimental study in which tests were carried out with users to measure the usability of software for Augmented Reality glasses. Moreover, we model a testing methodology for conducting user-based tests and, based on our experiments, propose improvements to software testing for evaluating 3D interfaces.

Guto Kawakami, Aasim Khurshid, Mikhail R. Gadelha
Virtual Reality to Mixed Reality Graphic Conversion in Unity
Preliminary Guidelines and Graphic User Interface

To date in the defence industry, virtual reality (VR) headsets are used primarily for training and simulation, while mixed reality (MR) headsets can be used in military operations. However, technological hardware advancements that blend VR and MR headset capabilities will result in increasing convergence of VR and MR applications. Accordingly, our primary objective in the current work was to present guidelines for Unity developers to convert graphical VR content to MR content in the Unity game engine. Guidelines herein address how to change camera settings to adapt from VR to MR applications, how to convert graphical user interfaces (GUIs) from VR to MR, and how to place graphical objects in MR environments. Another important objective of this work was to describe a user-controlled GUI we developed that allows real-time, progressive conversion of graphics from full VR to various levels of MR in a MR headset. This GUI provides end-users flexibility to customize their environment for gaming, training, and operational applications along an extended reality spectrum. In our future work, we are developing a tool that automates the VR to MR graphic conversion process. This tool, the guidelines, and the GUI will help researchers compare different levels of graphic content displayed in the scene and investigate the user-centered challenges that arise. It will also help Unity developers in defence, gaming, and commercial industries generate a scene in VR and quickly convert it to MR, saving the cost associated with generating separate MR scenes from scratch for multiple applications.

Ramy Kirollos, Martin Harriott
A Study on User Interface Design Based on Geo-Infographic and Augmented Reality Technology

The use of augmented reality applications based on location information has increased with the recent development of smart devices and technological advances. Augmented reality is a technology that displays information on virtual objects in the real world and improves the quality of the user experience by providing information based on the user’s location. Unlike an existing two-dimensional graphical user interface (GUI), the user interface (UI) in augmented reality requires constant interaction according to the user’s location and environment. In augmented reality, it is essential to configure the user interface so that appropriate information can be recognized efficiently and easily according to the user’s situation. In this paper, we suggest a method to improve usability and user experience by designing a UI based on geo-infographics. Geo-infographics are infographics created using location-based information that combine elements such as diagrams, images, and storytelling with information communication techniques to deliver messages quickly and easily. A UI designed with geo-infographics provides an efficient means for users to intuitively understand content and to acquire the information they want easily and quickly through interactions with images and diagrams that match the content’s purpose and intent. We produced a prototype for Beonhwa-ro Street in Seosan City, Korea: we introduced the art store on Beonhwa-ro Street and created an AR user interface based on geo-infographics in the Seosan Munhwaro application.

Heehyeon Park
Comparing the Impact of State Versus Trait Factors on Memory Performance in a Virtual Reality Flight Simulator

Situation Awareness (SA) is an important predictor of critical incidents in the aviation domain. Virtual Reality (VR) simulators provide a safe setting for training pilot SA. Despite the necessity of integrating auditory information for SA, it is unknown whether this integration is impacted by state and trait variables. The present work investigated the utility of a novel state-versus-trait framework in predicting SA, based on auditory information, in a VR flight environment. It was expected that VR-induced states would account for most of the variance found in SA. Using structural equation modeling, causal models were developed to quantify the relationship of VR-state, non-VR-state, and trait variables to SA during VR flight. Together, these variables predicted approximately two-thirds of the variability in SA. VR flight simulation is increasingly integrated into military, commercial, and general aviation. Thus, VR flight training protocols and assessments should consider both the state and trait factors examined in this study when recruiting or training new pilots.

Anya Pejemsky, Kathleen Van Benthem, Chris M. Herdman
Co-exploring the Design Space of Emotional AR Visualizations

Designing for emotional expression has become a popular topic of study in HCI due to advances in affective computing technologies. With the increasing use of video conferencing and of video in social media for different needs such as leisure, work, and social communication, video filters, AR effects, and holograms are also becoming popular. In this paper, we propose a framework for emotion visualization in natural user interface technologies such as augmented or mixed reality. The framework was developed based on our analysis of the visualizations produced during a series of emotion visualization workshops.

Sinem Şemsioğlu, Asım Evren Yantaç
Exploring an Immersive User Interface in Virtual Reality Storytelling

Virtual reality provides a highly immersive experience as the player’s actions interact with the virtual world in real time, and feedback is immediately reflected. We explored a device that could enhance immersion by inducing a natural interaction between the player and the virtual world through VR storytelling. In this study, we mainly concentrated on utilizing affordance in the diegetic UI, which constitutes the virtual world, to increase immersion by inducing natural interaction for the player. To use affordance in VR storytelling, we classified physical affordance, cognitive affordance, sensory affordance, and functional affordance as roles that support diegetic UI. We suggest cognitive affordance and physical affordance to recognize and execute the need for interaction necessary to progress the story. We also suggest sensory affordance and functional affordance to detect the target smoothly and help the player’s intentional sequence of actions from a functional point of view. Finally, we suggest that the player’s immersion can be improved by supporting affordances so that the story can progress through the diegetic UI and natural interaction with the player.

Gapyuel Seo
Developing Spatial Visualization Skills with Virtual Reality and Hand Tracking

Spatial visualization skills are the cognitive skills required to mentally comprehend and manipulate 2D and 3D shapes. Spatial visualization skills are recognized as a crucial part of STEM education. Moreover, these skills have been linked to both students’ capacity for self-monitored learning and GPA in STEM programs. Also, many STEM fields have reported a correlation between spatial visualization skills and the level of academic success of students. Unfortunately, many students have significantly underdeveloped spatial visualization skills. Students traditionally learn spatial visualization skills via the manipulation of 3D objects and drawings. Because Virtual Reality (VR) facilitates the manipulation of 3D virtual objects, it is an effective medium to teach spatial visualization skills. In addition, it has been found that students are more motivated to learn and perform better when taught in an immersive VR environment. Several studies have presented promising results on leveraging VR to teach spatial visualization skills. However, in most of these studies, the users are not able to naturally interact with the 3D virtual objects. Additionally, many of the current VR applications fail to implement spatial presence for the user, which would lead to more effective instruction. The level of immersivity a VR application offers can have a direct impact on the users’ “first-person” experience and engagement. In light of this, a VR application integrated with hand tracking technology, designed to develop the spatial visualization skills of STEM students, is presented. The inclusion of a user’s hand in the virtual environment could increase the users’ sense of spatial presence and first-person experience. In addition, this could act as an intuitive input method for beginners. The objective of this work is to introduce this VR application as well as future work on how to leverage Reinforcement Learning to automatically generate new 3D virtual objects.

Liam Stewart, Christian Lopez
The Effect of Avatar Embodiment on Self-presence and User Experience for Sensory Control Virtual Reality System

The aim of this study is to explore the factors of virtual character embodiment, self-presence, and user experience for a somatosensory control virtual reality (VR) system. We would like to understand the influence of avatar embodiment on self-presence and user experience. Based on a literature review, we constructed hypotheses from a systematic view of emotional design theory. To efficiently assess the extent of avatar embodiment, we make use of the questionnaire of Gonzalez-Franco and Peck (Gonzalez-Franco, M., and Peck, T.C. Avatar Embodiment. Towards a Standardized Questionnaire. Front. Robot. AI, 2018, 5:74.) to assess the embodiment level of subjects. The experimental design comprises two types of tasks to understand the effect of avatar embodiment on self-presence and user experience: a non-verbal behaviour task, such as dance practice, and a verbal behaviour task, such as singing practice.

Huey-Min Sun
Presenting a Sense of Self-motion by Transforming the Rendering Area Based on the Movement of the User’s Viewpoint

Computer graphics is a core technology that can produce realistic images. However, it is difficult to provide a sense of movement and immersion using only visual images. In this study, we propose to enhance the sense of self-motion by deforming the rendering area, exploiting the vection illusion, which induces a sense of self-motion from images alone. In general, the body leans backward while accelerating forward, and the field of view becomes relatively wide. Therefore, enlarging the drawing area should have a similar effect when the user's viewpoint in VR accelerates forward; if the user moves backward, the drawing area should be reduced. We constructed a driving simulation environment to demonstrate the proposed method. The deformation of the drawing area caused VR motion sickness because the expected and the actual deformation differed. It was also confirmed that deforming the rendering area enhances the sense of immersion.

Tomoya Yamashita, Wataru Hashimoto, Satoshi Nishiguchi, Yasuharu Mizutani
3D User Interface in Virtual Reality

Three-dimensional user interfaces (3D UIs) allow users to interact with virtual objects, environments, or information in physical or virtual space. With the development of virtual reality technology, numerous VR products are rushing to occupy the market. However, the formulation of interface design specifications and the enhancement of the user experience are often ignored: more than one-third of the users in our market research interrupted their experience because they did not understand how to operate the program. The 3D UI is fundamental to interaction in virtual space. Thus, this article introduces a basic overview of virtual reality, focusing on analyzing the main design differences between traditional and three-dimensional interfaces. The specific objective of this study is to propose construction strategies for three-dimensional interfaces, mainly: 1) building a natural, harmonious interaction relationship between human and machine; and 2) driving interface experience optimization through mental models. In particular, this research tries to establish a paradigm for theoretical research on 3D user interfaces, providing theoretical support for future design.

Gu Yue
Manual Preliminary Coarse Alignment of 3D Point Clouds in Virtual Reality

The alignment of 3D point clouds consists of coarse alignment and precise alignment. Preliminary coarse alignment must be performed for point clouds with a significant initial pose difference before time-consuming precise alignment. However, this procedure is normally carried out manually on 2D interfaces, which leads to a partial perception of the 3D point clouds; this biased understanding may affect operation efficiency and alignment accuracy. In this paper, we developed a VR-based prototype for manual preliminary coarse alignment of point clouds. A user study was conducted to compare efficiency, accuracy, and usability in a controlled alignment task with both the 2D interface and the developed system. The task was graded into three levels based on the complexity of the matched point clouds (simple, complex, and incomplete). The results indicated that the prototype system was effective and useful for supporting the preliminary coarse alignment task; it displayed outstanding performance for the coarse alignment of complex and incomplete point clouds.

Xiaotian Zhang, Weiping He, Shuxia Wang

Games and Gamification

Agrihood: A Motivational Digital System for Sustainable Urban Environments

Extreme industrialization and globalization have turned cities into the most voracious consumers of materials, and they are overwhelmingly the source of carbon emissions through both direct and embodied energy consumption. Newly created cities and the urbanization process in rural areas replicate a lifestyle based on consumerism and the linear economy, causing destructive social and economic impact while compromising the ecology of the planet. To reduce this phenomenon, we need to re-imagine cities and the ways they operate, with the perspective of making them locally productive and globally connected. The purpose of this contribution is to make citizens more aware of their consumption, ecological footprint, and visible and invisible fluxes, so as to suggest a new trend in the urban context. We propose a method to plan the city of tomorrow in a dynamic way, where an active participatory process and gamification techniques are the core pillars of our vision. We analyze the issues of a pilot city (Trento) and report one of the possible outcomes: Agrihood. The provided solution shows how a temporary physical space and digital tools can be integrated and interoperate to drive a more sustainable urban environment through citizen engagement and participation. If an Agrihood network is created for the whole city, the system begins to have major impacts on it: new green spaces that become real lungs for the city, new interactions between neighborhoods, and new production and economic savings for each individual family.

Antonio Bucchiarone, Giulia Bertoldo, Sara Favargiotti
A Study on the Integration Method of Sports Practice and Video Games

It is difficult for recreational players to keep practicing sports, even though they know practice is important for improving their technique and getting more enjoyment. Therefore, we propose utilizing the concept of “toolification of games”, which enables players to perceive the effect of practicing while playing a game. Few papers have focused on sports training while playing a video game, but some sports players benefit from training strategies similar to playing particular video games, which suggests the possibility of using a video game for sports training. The proposed method will contribute to making monotonous training more enjoyable by “improving sports skill while playing a game.” In this study, we focused on combining badminton training with playing Tetris. When practicing badminton, the player is often required to hit the shuttlecock to any part of the court at will. This way of practicing resembles a general strategy used when playing Tetris: the player attempts to distribute Tetriminos (the block units in Tetris) to every row without bias. This paper describes experimental results from badminton training using Tetris compared with training using conventional visual feedback methods.

Sakuto Hoshi, Kazutaka Kurihara, Sho Sakurai, Koichi Hirota, Takuya Nojima
Development of a Board Game Using Mixed Reality to Support Communication

Icebreaking has been attracting attention as a tool for enhancing communication during a first encounter. Previous research showed that collaborative tabletop games are effective for icebreaking; however, little research discusses the factors of these games that influence this enhancement. In this research, we developed a tabletop game in which a piece on a board is manipulated by inclining the board with four levers, navigating the piece along a predefined path. Our hypothesis is that communication may be enhanced by counter-intuitive behavior of the piece, such as climbing up a slope against the laws of physics on Earth. Though the players manipulate real levers, what they see is a virtual board and piece rendered through an MR device (HoloLens 2). We implemented two behaviors of the piece: a normal gravity behavior, where the piece goes down the slope, and an anti-gravity behavior, where the piece climbs up the slope. To evaluate the effect of the counter-intuitive behavior on the enhancement of communication, we conducted an experiment in which pairs of players played the game under the two conditions. The results of a five-point Likert-scale questionnaire demonstrated that the counter-intuitive behavior had a positive effect on the freshness and interestingness of the game, while it had no effect on the difficulty of cooperation. It was also confirmed that verbal communication between the players increased by 6.7% on average, which partially supports our hypothesis.

Shozo Ogawa, Kodai Ito, Ryota Horie, Mitsunori Tada
The Creative Design-Engineer Divide: Modular Architecture and Workflow UX

There are competing priorities between creative freedom and the need for robust, stable software frameworks to facilitate the rapid implementation of creative ideas in game development. This may result in a disparity between system and user requirements. Qualitative data extracted from seminars at the Game Developers Conference informs the design of several interviews with veteran game-system designers to explore this phenomenon. A survey of modular software plug-ins from the Unity Asset Store then validates the interview findings and explores the benefits of modular software architectures. Findings indicate that modifications to the native user experience (UX) design of Unity and plug-ins that reengineer for different workflows are most popular. The most popular workflows provide for data, asset, and project management. Discussion reflects on how modular architecture can alleviate points of failure within a game engine’s architecture whilst providing customized usability for different user needs.

Brian Packer, Simeon Keates, Grahame Baker
Training of Drone Pilots for Children with Virtual Reality Environments Under Gamification Approach

Today, drones have become one of the technological products most desired by children and adolescents. Unlike console or board games, learning to pilot a drone lets them work on their problem-solving skills. Many parents look for activities to share with their children in their free time; however, in big cities this can be tricky. Learning to pilot drones in virtual reality environments can be a hobby that is fun for young and old, can be shared as a family, and can also improve learning capacity, skills, and knowledge. It is here that gamification and virtual reality environments can be used to generate drone training scenarios for children and adolescents in a fun and safe setting. This work presents a design model for virtual reality environments based on gamification, together with the design of the user task associated with the flight information guides. A case study is presented in which a proposed virtual reality environment is deployed for the training of recreational drone pilots, and the preliminary results are reported.

Cristian Trujillo-Espinoza, Héctor Cardona-Reyes, José Eder Guzman-Mendoza, Klinge Orlando Villalba-Condori, Dennis Arias-Chávez
The Interaction Design of AR Game Based on Hook Model for Children's Environmental Habit Formation

Human beings are facing the crisis of a deteriorating ecological environment. Strengthening the cultivation of children's environmental awareness and helping them develop the habit of an environmentally friendly life are important parts of ecological improvement and protection. This study investigates and analyzes the current state of children's environmental awareness education and finds that its limitations are the main reason why it is so difficult for children to form environmental awareness and habits. The purpose of this study is to conduct an interaction design based on AR technology to enhance children's environmental protection awareness and further enable them to form eco-friendly habits in their daily lives. Based on the Hook model, the study designs AR game mechanics that take children's psychological characteristics into account so as to trigger children to take action. The game's interaction mode adopts the Hook model: the study attempts to build an ecosphere management model in which players use an AR scanning function based on image recognition technology to obtain virtual models of plants and animals and build their own unique ecosphere. Players can experience the actions of understanding and managing their own ecosystem.

Qitong Xie, Wei Yu
Conflicts: A Game that Simulates Cognitive Dissonance in Decision Making

This research aims to develop a game that accurately simulates situations that cause cognitive dissonance in the player. Cognitive dissonance is a psychological concept involving situations of conflicting attitudes, beliefs, or behaviors that result in feelings of mental discomfort. The game is a visual novel with choice-based decisions that influence the ending of the story. The player must make choices that satisfy either their family, for their happiness, or their boss, for money to spend on their family. To assess and measure the player's cognitive dissonance, the investigators used a modified scale derived from Sweeney, Hausknecht, and Soutar's research, which uses three dimensions to measure dissonance after a major decision. The questions took the form of a 7-point Likert scale, were integrated within the game itself, and were presented after the participant made a major decision. The data were averaged for analysis. Results showed that the game developed was only able to simulate average levels of cognitive dissonance.

Morgan Spencer Yao, John Casey Bandiola, John Michael Vince Lim, Jonathan Casano
Development of ‘School Nocturnble’: A Sensitive Game with Eye Trackers

This paper presents “School Nocturnble,” a new type of eye-tracking game, and explains its development process. Recently, the game industry has been actively applying various new technologies to games, because gamers who want more realistic and novel experiences have reacted positively to the industry's attempts. One area the industry tries to innovate through new technology is the gaming controller, and eye-tracking technology is one such innovation. “School Nocturnble” is a 2D side-scrolling puzzle adventure game that uses an eye tracker as an assisting game controller. A single player controls the protagonist with a keyboard while also controlling an assist character with the eye tracker, solving puzzles and participating in turn-based battles to save the magic school “School Nocturnble” from its curse. The game was developed with Unity 2019, Visual Studio 2019 (C#), and Tobii Unity SDK version 4; it can be played on a Windows PC and requires the Tobii Eye Tracker 5. The development of “School Nocturnble” focuses on providing new game mechanics for gamers who prefer varied experiences and content. Finally, it is a meaningful attempt in both the gaming and eye-tracking fields to use the eye tracker as an assisting game controller, since it is not yet as commonly used as its potential warrants.

Subeen Yoo, Dain Kim, Seonyeong Park, JungJo Na

HCI in Mobility, Transport and Aviation

Disruptive Technology in the Transportation Sector (Case in Indonesia)

Online transport regulatory policies are critical to the success of this new type of transport. Tariff setting is a problem for online transportation users and drivers, who consider it unfair, and the zoning of operational areas underlines the gap between conventional and online drivers. This study examines the responses of online transportation users and drivers to tariff and operational-area zoning policies. The method used is qualitative analysis: the research uses NVivo 12 Plus software to analyze qualitative data, presenting cross-tab and visual analyses. Using NVivo involves five stages: data collection, data import, data coding, data classification, and data presentation. The data processed by NVivo were then subjected to qualitative analysis; the data source was a Twitter data set. The findings indicate that tariff setting has not provided a solution to the gap between users and drivers of online transportation, and that the zoning of online transportation operations does not yet have a solid legal umbrella for enforcement, so it still causes polemics between conventional and online transportation. The limitation of this research is that it only discusses tariffs and the red zone for online transportation. A recommendation for further research is to group online transportation problems across all urban areas in Indonesia.

Pahmi Amri, Achmad Nurmandi, Dyah Mutiarin
Collaborative Workspace – Concept Design and Proof of Concept of an Interactive Visual System to Support Collaborative Decision-Making for Total Airport Management

The future of aviation should be safe, efficient, measurable, collaborative, fair, and transparent. New concepts like Collaborative Decision Making and Performance Based Management at airports address these challenges, demanding human-centered workspace approaches. Centralized and remote solutions have been taken into account, and technical realization has been addressed. Conventional widget-based solutions, as well as gamification ideas, are considered for the design of the future workplace in a multi-stakeholder environment at airports. The work results in possibilities to present complex data sets in a uniform and clear interface design that meets the needs of the operators. Design elements were used to ensure that users experience the most pleasant working atmosphere and are able to perform their activities effectively and in a goal-oriented manner.

Mandra Bensmann, Alicia Lampe, Thomas Hofmann, Steffen Loth
From a Drones Point of View

Drones* are on every horizon. They are an essential part of military defense, telemedicine, agriculture, construction, security, firefighting, search and rescue operations, NASA space exploration, and Hollywood film production. Drones are referred to by many different names. Whether called an Uninhabited Aircraft System (UAS), Unmanned Combat Air Vehicle (UCAV), or Remotely Piloted Vehicle (RPV), the reference is to an aircraft piloted from the ground. The importance of psychomotor (stick-and-rudder) abilities in drone operations is diminishing. Considerations for autonomous flight success include the operator's willingness to rely on a computer for decisions. This may be a particularly important factor for UAS war-fighting efficacy [1]. *Drones, UAS, UAV, and RPV are synonymous terms used interchangeably.

D. L. Dolgin, D. Van Der Like, J. London, C. Holdman
Desirable Backrest Angles in Automated Cars

This study was conducted to explore the effects of NDRTs (non-driving related tasks) on driving posture. As a reference for seating posture, the seat-back angle was the examined measure. The objective was to determine whether certain NDRTs, like reading and eating, result in specific NDPs (non-driving postures). Another aspect of this study was to compare the NDPs of drivers performing the same NDRTs with a steering wheel and pedals present and without them, since in future highly automated driving (HAD) vehicles there will be no more need for steering wheels and pedals. Therefore, the seat angles of 30 participants performing ten NDRTs in a modular driving simulator were examined. All NDRTs were performed twice by each participant, once with the steering wheel and pedals present and once without. The seat angles were collected and analysed using a two-way factorial repeated-measures ANOVA, followed by pairwise t-tests to explore the effects of the NDRTs and of the presence or absence of the steering wheel and pedals. The results showed that the absence of a steering wheel and pedals did not have a significant impact on the seat angle, whereas the NDRTs had a strong influence on the seating angle; the NDRT “relaxing” resulted in the highest seat-back angles and therefore had the strongest influence among all tasks.
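The analysis pipeline described in this abstract (two within-subject factors, followed by pairwise comparisons) can be sketched as follows. This is a minimal illustration with simulated seat-angle data, not the authors' data or code; the factor names, task labels, and angle values are assumptions.

```python
# Sketch: 2-way repeated-measures ANOVA (NDRT x steering-wheel presence)
# on simulated seat-back angles, followed by a pairwise t-test.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy import stats

rng = np.random.default_rng(0)
rows = []
for subject in range(30):                       # 30 participants, as in the study
    for task in ["reading", "eating", "relaxing"]:
        for steering in ["present", "absent"]:
            base = 35 if task == "relaxing" else 25   # assume "relaxing" reclines more
            rows.append({"subject": subject, "ndrt": task, "steering": steering,
                         "angle": base + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with both within-subject factors
anova = AnovaRM(df, depvar="angle", subject="subject",
                within=["ndrt", "steering"]).fit()
print(anova)

# Follow-up pairwise t-test, e.g. "relaxing" vs "reading" (collapsed over steering)
relax = df[df.ndrt == "relaxing"].groupby("subject")["angle"].mean()
read = df[df.ndrt == "reading"].groupby("subject")["angle"].mean()
t, p = stats.ttest_rel(relax, read)
print(f"relaxing vs reading: t={t:.2f}, p={p:.4g}")
```

With simulated data built this way, the NDRT main effect is large while the steering factor is null, mirroring the pattern the abstract reports.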

Martin Fleischer, Nikko Wendel
Identifying Mobility Pattern of Specific User Types Based on Mobility Data

To better understand users and their information demands, it is useful to divide them into user groups that can be assigned characteristics and mobility preferences; with these parameters, the individual user can be better addressed. In this work a user model was created for the commuter and validated with mobility data. Based on this model, an analysis tool for mobility data was developed, designed to identify commuter routes in the dataset. The tool was tested using daily mobility data collected by students in 2018 using the app “MobiDiary”. The results show that filtering trips by the criteria “trip purpose” and “start time” can be a first approach to identifying commuter trips; however, a more precise filtering of commuter routes is much more complex. The general findings indicate that a model trained on the labeled data set, in which participants provided trip purposes, needs to be aware of more parameters to be able to identify commuter trips from unlabeled trip data alone.
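The first-pass filter described in this abstract, keeping trips by "trip purpose" and "start time", can be sketched as below. The field names, the time window, and the sample records are illustrative assumptions, not taken from the MobiDiary dataset.

```python
# Sketch: first-pass commuter-trip filter using purpose and start time.
from datetime import time

# Hypothetical trip records with the two filter criteria
trips = [
    {"purpose": "work",    "start": time(7, 45)},
    {"purpose": "leisure", "start": time(7, 50)},
    {"purpose": "work",    "start": time(18, 10)},
]

def is_commute_candidate(trip, earliest=time(6, 0), latest=time(9, 30)):
    """First approximation: work purpose inside an assumed morning window."""
    return trip["purpose"] == "work" and earliest <= trip["start"] <= latest

candidates = [t for t in trips if is_commute_candidate(t)]
print(len(candidates))  # only the morning work trip survives the filter
```

As the abstract notes, such a filter is only a first approximation: a robust classifier would need further parameters (e.g. route regularity across days) to work on unlabeled trips.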

Tobias Gartner, Waldemar Titov, Thomas Schlegel
How to Find My Ride? Results of an HCI Expert Workshop for AR-Aided Navigation

In order to increase acceptance of automated mobility-on-demand (AMoD), it is essential to provide high usability along the whole user journey. The user's challenges of getting to flexible pick-up locations and identifying the booked shuttle need to be addressed from a user-centered perspective. A workshop was conducted with HCI experts to create user-centered smartphone interface design solutions and elicit means of augmented reality (AR) for three scenarios: 1) navigating to the pick-up location, 2) identifying the pick-up location, and 3) identifying the SAV. Subsequently, fundamental AR information elements that provide users with high usability for overcoming these challenges were identified and visualized. The results of the workshop serve as a starting point for iteratively developing AR-aided user interfaces for virtual ride access points that cover the AMoD user journey.

Fabian Hub, Michael Oehl
Smart Mobility: How Jakarta’s Developing Sustainable Transportation to Connect the Community

This paper aims to analyze the implementation of Trans-Jakarta (Bus Rapid Transit) in realizing smart, sustainable transportation. Mobility in Jakarta's strategic area is very high, with community productivity reaching 71%; another factor causing Jakarta's density is the number of private vehicles. Intelligent transportation management deploys APS to ICT and infrastructure. The research uses a qualitative approach, analyzing the amount of infrastructure and treating growth as a phenomenon. Based on the analysis, the increase in the number of Jakarta BRT users rests on improved infrastructure and integrated planning. The number of BRT Jakarta users was 102,950,384 in 2015, 123,706,857 in 2016, 144,859,912 in 2017, 178,565,827 in 2018, and 265,160,290 in 2019. The procurement of BRT Jakarta infrastructure encourages an increase in BRT users, as do restrictions on private transportation and the separation of routes between BRT and private transportation, with a route range from 280.5 km² to 438.8 km², encouraging people to switch to the BRT. The increasing number of BRT users shows a correlation between social sustainability and smart mobility, which influence each other when the infrastructure is adequate, so that the community can choose which route to take.
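The ridership trend reported in this abstract can be made concrete by computing year-over-year growth from the stated figures (digit grouping of the 2016 and 2017 values reconstructed from the source):

```python
# Year-over-year growth of Trans-Jakarta BRT ridership,
# using the annual figures reported in the abstract.
riders = {2015: 102_950_384, 2016: 123_706_857, 2017: 144_859_912,
          2018: 178_565_827, 2019: 265_160_290}

growth = {year: (riders[year] - riders[year - 1]) / riders[year - 1]
          for year in sorted(riders) if year - 1 in riders}
for year, g in growth.items():
    print(f"{year}: {g:+.1%}")
```

Growth is positive every year, with the largest jump (roughly 48%) from 2018 to 2019, consistent with the abstract's claim that improved infrastructure drove ridership up.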

Mohammad Jafar Loilatu, Dyah Mutiarin, Achmad Nurmandi, Tri Sulistyaningsih, Salahudin
Analysis of the Daily Mobility Behavior Before and After the Corona Virus Pandemic – A Field Study

In 2020, the world saw the global spread of the coronavirus. Various measures were undertaken to slow the spread so that national healthcare systems would be able to treat infected people. These included lockdowns and restrictions on previously common activities of public life, such as visiting relatives, going to cinemas and theaters, or even commuting to work or school. All of these activities made people mobile and caused traffic. In our research, we analyze to what extent the outbreak of the COVID-19 pandemic and the countermeasures introduced in Germany influenced the mobility behavior of students at the Karlsruhe University of Applied Sciences. In our mobility behavior analysis we compare mobility data collected in October 2018, before the coronavirus pandemic, with data from October 2020, collected after the outbreak of SARS-CoV-2. In both measuring periods, students were asked to record their daily mobility over a two-week period with our mobility evaluation tool “MobiDiary”.

Waldemar Titov, Thomas Schlegel
Research on Interaction Design Promote Aesthetic Changes in Car Styling Under the Background of Intelligent Driving

With the development of digitalization and networking of automobiles, the focus of car styling has gradually shifted from the study of three-dimensional modeling to the design of Driving & Riding Experience centered on human-automobile interaction. The motor drive greatly simplifies the internal structure of the car, saves more space, and brings more styling possibilities. At the same time, intelligent driving has changed the entire driving mode, and the new relationship between people and the car has brought unprecedented changes to the space layout of the car. All of these have had a huge and long-term impact on the aesthetics of the current car design and are changing the methods of the entire car design and people’s perception of the aesthetics of the car. From the perspective of interaction design, this article discusses the changes and trends in the aesthetics of car styling in the context of intelligent driving from several aspects.

Mangmang Zhang
HCI International 2021 - Late Breaking Posters
Prof. Constantine Stephanidis
Dr. Margherita Antona
Dr. Stavroula Ntoa