
2021 | Book

Virtual, Augmented and Mixed Reality

13th International Conference, VAMR 2021, Held as Part of the 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings


About this book

This book constitutes the refereed proceedings of the 13th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2021, held virtually as part of the 23rd HCI International Conference, HCII 2021, in July 2021.

The total of 1276 papers and 241 posters included in the 39 HCII 2021 proceedings volumes was carefully reviewed and selected from 5222 submissions.

The 47 papers included in this volume were organized in topical sections as follows: designing and evaluating VAMR environments; multimodal and natural interaction in VAMR; head-mounted displays and VR glasses; VAMR applications in design, the industry and the military; and VAMR in learning and culture.

Table of Contents

Frontmatter

Designing and Evaluating VAMR Environments

Frontmatter
Narrative Cognition in Mixed Reality Systems: Towards an Empirical Framework

In this paper, we propose an interdisciplinary theoretical and empirical framework to investigate the particular faculties related to human “narrative cognition” in general, and in relation to MRT in particular. To contextualize our approach, we briefly review the cognitive turn in narratology, as well as the state of the art in different domains that have undertaken psychophysiological studies which either characterize aspects relevant to narrative cognition or investigate mixed reality experiences. The idea is to bring together knowledge and insights from narratology, different branches of semiotics and the cognitive sciences, with empirical strategies that bridge the gap between first-person phenomenological approaches and psychophysiological and behavioural methods. We propose a rationale for combining tools and techniques from MRT/VR/AR, interactive digital narratives and storytelling, with a suite of integrated psychophysiological methods (such as EEG, HR, GSR and eye tracking) and phenomenological-subjective approaches.

Luis Emilio Bruni, Hossein Dini, Aline Simonetti
Exploratory Study on the Use of Augmentation for Behavioural Control in Shared Spaces

Shared spaces are regulation-free, mixed-traffic environments supporting social interactions between pedestrians, cyclists and vehicles. Even when these spaces are designed to foster safety through reduced traffic speeds, unforeseen collisions and priority conflicts remain an open question. While AR can be used to realise virtual pedestrian lanes and traffic signals, the change in pedestrian motion dynamics under such approaches needs to be understood. This work presents an exploratory study evaluating how the speed and path of pedestrians are affected when an augmented reality based virtual traffic light interface is used to control collisions in pedestrian motion. To achieve this objective we analyse motion information from controlled experiments, replicating pedestrian motion on a lane supported by a stop-and-go interface and including scenarios such as confronting a crossing pedestrian. Our statistical and quantitative analysis gives some early insights on pedestrian control using body-worn AR systems.

Vinu Kamalasanan, Frederik Schewe, Monika Sester, Mark Vollrath
Pose Estimation and Video Annotation Approaches for Understanding Individual and Team Interaction During Augmented Reality-Enabled Mission Planning

Two video analysis approaches (pose estimation and manual annotation) were applied to video recordings of two-person teams performing a mission planning task in a shared augmented reality (AR) environment. The analysis approaches calculated the distance relations between team members and annotated observed behaviors during the collaborative task. The 2D pose estimation algorithm lacked scene depth processing; therefore, we found some inconsistencies with the manual annotation. Although integration of the two analysis approaches was not possible, each approach by itself produced several insights on team behavior. The manual annotation analysis found four common team behaviors as well as behavior variations unique to particular teams and temporal situations. Comparing a behavior-based time on task percentage indicated behavior-type connections and some possible exclusions. The pose estimation analysis found the majority of the teams moved around the 3D scene at a similar distance apart on average, with similar variation in fluctuation around a common distance range between team members. Outlying team behavior was detected by both analysis approaches and included: periods of very low distance relations, infrequent but very high distance relation spikes, significant task time spent adjusting the HoloLens device while wearing it, and exceptionally long task time with gaps in pose estimation data processing.

Sue Kase, Vincent Perry, Heather Roy, Katherine Cox, Simon Su
GazeXR: A General Eye-Tracking System Enabling Invariable Gaze Data in Virtual Environment

Controlling and standardizing experiments is imperative for quantitative research methods. With the increasing availability of low-cost eye-tracking devices, gaze data are considered an important user input for quantitative analysis in many social science research areas, especially in combination with virtual reality (VR) and augmented reality (AR) technologies. This poses new challenges in providing a common, default interface for gaze data. This paper proposes GazeXR, which focuses on designing a general eye-tracking system that interfaces two eye-tracking devices and creates a hardware-independent virtual environment. We apply GazeXR to an in-class teaching experience analysis use case, using external eye-tracking hardware to collect gaze data for gaze track analysis.

Chris Lenart, Yuxin Yang, Zhiqing Gu, Cheng-Chang Lu, Karl Kosko, Richard Ferdig, Qiang Guan
SpatialViewer: A Remote Work Sharing Tool that Considers Intimacy Among Workers

Due to the influence of the new coronavirus disease (COVID-19), teleworking has been expanding rapidly. Although existing interactive remote working systems are convenient, they do not allow users to adjust their spatial distance to team members at will, and they ignore the discomfort caused by different levels of intimacy. To solve this issue, we propose a telework support system using spatial augmented reality technology. This system calibrates the space in which videos are projected with real space and adjusts the spatial distance between users by changing the position of projections. Users can switch the projection position of the video using hand-wave gestures. We also synchronize audio according to distance to further emphasize the sense of space within the remote interaction: the distance between projection position and user is inversely proportional to the audio volume. We conducted a telework experiment and a questionnaire survey to evaluate our system. The results show that the system enables users to adjust distance according to intimacy and thus improve the users’ comfort.

Sicheng Li, Yudai Makioka, Kyousuke Kobayashi, Haoran Xie, Kentaro Takashima
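The SpatialViewer abstract above states that the distance between projection position and user is inversely proportional to audio volume. A minimal sketch of one such attenuation rule, assuming an illustrative function name and reference distance that are not from the authors' system, might look like:

```python
# Sketch (not the authors' code) of distance-based audio scaling, where
# volume falls off in inverse proportion to the projection-to-user distance.

def audio_volume(distance_m: float, ref_distance_m: float = 1.0,
                 max_volume: float = 1.0) -> float:
    """Return a playback volume in [0, max_volume] that falls off as 1/distance."""
    # Clamp so a projection closer than the reference distance is not amplified.
    d = max(distance_m, ref_distance_m)
    return max_volume * ref_distance_m / d

print(audio_volume(1.0))  # 1.0 at the reference distance
print(audio_volume(4.0))  # 0.25 at four times the distance
```

The clamp at the reference distance is one common way to keep nearby sources from exceeding full volume; the actual mapping used by SpatialViewer may differ.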
A Review of Virtual Therapists in Anxiety and Phobias Alleviating Applications

As more and more people suffer from anxiety disorders, new means of clinical and therapeutic intervention are required. For instance, virtual agents, or embodied conversational agents, enriched with human-like appearance and verbal and non-verbal behavior have emerged in recent years. They simulate a real-world therapist, communicate with the patient and convey motivating messages. This paper presents an overview of virtual therapists that have been applied to relieving anxiety and phobia symptoms. It discusses aspects such as whether the agents have been used for home-based or clinic-based therapy, their physical appearance, verbal and non-verbal behavior, attitude towards the patient, efficacy, and results obtained in clinical trials. The field is characterized by a wide variety of models, ideas and study designs. The virtual agents, although many of them are in the pilot phase, have been appreciated by patients as a new and modern complement to traditional therapy, which is highly promising in the context of developing new and attractive modalities for relieving emotional distress.

Oana Mitruț, Alin Moldoveanu, Livia Petrescu, Cătălin Petrescu, Florica Moldoveanu
Designing Limitless Path in Virtual Reality Environment

Walking in a virtual environment is a bounded task. It is challenging for a subject to navigate a large virtual environment designed within a limited physical space. External hardware support may be required to achieve this in a confined physical area without compromising navigation and virtual scene rendering quality. This paper proposes an algorithmic approach that lets a subject navigate a limitless virtual environment within a limited physical space, with no external hardware support apart from the regular Head-Mounted Display (HMD) itself. As part of our work, we developed a Virtual Art Gallery as a use case to validate our algorithm. We conducted a simple user study to gather feedback from the participants and evaluate the ease of locomotion of the application. The results showed that our algorithm could generate limitless paths for our use case under predefined conditions and can be extended to other use cases.

Raghav Mittal, Sai Anirudh Karre, Y. Raghu Reddy
A Comparative Study of Conversational Proxemics for Virtual Agents

This paper explores proxemics—interpersonal distances—in conversations with virtual agents in virtual reality. While the real-world proxemics of human-human interaction have been well studied, the virtual-world proxemics of human-agent interaction are less well understood. We review research related to proxemics in virtual reality, noting that previous research has not addressed proxemics with actual conversation, describe an empirical methodology for addressing our research questions, and present our results. The study used a repeated-measures, within-subjects design and had 23 participants. In the study, participants approached and conversed with a virtual agent in three conditions: no crowd, small crowd, and large crowd. Each participant's distance from the agent with whom they conversed was recorded at 60 frames/second by VAIF's proxemics tool. Our results suggest that humans in a virtual world tend to position themselves closer to virtual agents than they would relative to humans in the physical world. However, the presence of other virtual agents did not appear to cause participants to change their proxemics.

David Novick, Aaron E. Rodriguez
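The abstract above describes logging the participant-agent distance once per frame at 60 frames/second. A hypothetical sketch of such per-frame sampling and summarizing (not VAIF's actual tool; coordinates, units, and the ground-plane projection are illustrative assumptions) could be:

```python
import math

# Hypothetical sketch of per-frame proxemics logging: sample the
# participant-agent distance in the ground plane once per frame at
# 60 fps, then summarize with the mean over the recording.

def horizontal_distance(p, a):
    """Ground-plane distance between participant p and agent a, both (x, y, z)."""
    return math.hypot(p[0] - a[0], p[2] - a[2])

# One second of simulated 60 fps samples of a participant standing still.
agent = (0.0, 0.0, 0.0)
frames = [(1.2, 1.7, 0.9)] * 60
samples = [horizontal_distance(p, agent) for p in frames]
mean_distance = sum(samples) / len(samples)
print(round(mean_distance, 2))  # 1.5
```

Dropping the vertical (y) component is a common choice for proxemics, since interpersonal distance is usually defined in the walking plane rather than in full 3D.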
Real-Time Data Analytics of COVID Pandemic Using Virtual Reality

Visualizing data effectively is critical for the discovery process in the age of big data. We are exploring the use of immersive virtual reality platforms for scientific data visualization for the COVID-19 pandemic. We are interested in finding ways to better understand, perceive and interact with multidimensional data in the field of cognition technology and human-computer interaction. Immersive visualization leads to a better understanding and perception of relationships in the data. This paper presents a tool for immersive data visualization based on the Unity development platform. The tool is capable of visualizing real-time COVID pandemic data for the fifty states of the USA. Immersion provides a better understanding of the data than traditional desktop visualization tools and leads to more human-centric situational awareness insights. This research effort aims to identify how graphical objects like charts and bar graphs depicted in virtual reality tools, developed in accordance with an analyst's mental model, can enhance an analyst's situation awareness. Our results also suggest that users feel more satisfied when using immersive virtual reality data visualization tools and thus demonstrate the potential of immersive data analytics.

Sharad Sharma, Sri Teja Bodempudi, Aishwarya Reehl
Design Considerations for Interacting and Navigating with 2 Dimensional and 3 Dimensional Medical Images in Virtual, Augmented and Mixed Reality Medical Applications

The extended realities, including virtual, augmented, and mixed realities (VAMR), have recently experienced significant hardware improvement, resulting in an expansion in medical applications. These applications can be classified by the target end user (for instance, as patient-centric, physician-centric, or both) or by use case (for instance, educational tools, diagnostic tools, therapeutic tools, or some combination). When developing medical applications in VAMR, careful consideration of both the target end user and the use case must heavily influence design decisions, particularly the methods and tools for interaction and navigation. Medical imaging consists of both 2-dimensional and 3-dimensional modalities, which impacts design, interaction, and navigation. Additionally, medical applications need to comply with regulatory considerations, which will also influence interaction and design choices. In this manuscript, the authors explore these considerations using three VAMR tools being developed for cardiac electrophysiology procedures.

Jennifer N. Avari Silva, Michael K. Southworth, Christopher M. Andrews, Mary Beth Privitera, Alexander B. Henry, Jonathan R. Silva
Virtual Reality Sickness Evaluation in Exergames for Older Hypertensive Patients: A Comparative Study of Training Methods in a Virtual Environment

Virtual Reality (VR) sickness is a problem exclusive to immersive VR, and older hypertensive patients are a particularly vulnerable group that could be more sensitive to this phenomenon. Therefore, the aim of this study was to determine whether there are differences in VR sickness symptoms between a strength endurance-based and an endurance-based VR exergame. Participants over 65 years old with diagnosed essential hypertension were included. An assessment of cognitive status and an assessment of risk of falling were performed as inclusion criteria. All participants tested two VR exergames (strength endurance and endurance training) on two visits. The Simulator Sickness Questionnaire (SSQ) was administered after the task-based application. The endurance exergame tended to have higher scores on the SSQ scales than the strength endurance exergame; the increased movement in space could be a cause for this. Significant differences were shown only for the “oculomotor” scale. Descriptively, more symptoms were found in women, in participants without VR experience, and in participants over 76 years of age. There were no relevant self-reported cases of VR sickness in the sample. For the target group of older hypertensive patients, an exergame structured as frontal personal training, requiring less movement through the room, could cause less VR sickness and be more appealing.

Oskar Stamm, Susan Vorwerg
Consistency in Multi-device Service Including VR: A Case Study

Nowadays, we have entered a world of multi-device experiences. This poses a significant challenge because applications must be deployed to different platforms, preferably with a consistent UX on each targeted device. Consistency provides users with a stable framework in similar contexts and helps to improve multi-device usability. Many previous studies focus on maintaining consistency among traditional platforms like smartphone apps and websites. However, with the rapid development of VR, it is crucial to add it to the spectrum, because the design for VR differs considerably from that for traditional 2D screens. Therefore, this paper proposes a series of design principles to ensure consistency across multiple devices including HMD-VR, along with four dimensions of consistency worth considering. We use Virtual Experiment of Film Language, a multi-device serious game, as an example. Twelve participants were recruited to experience the VR and WebGL versions of the game to spot inconsistency issues. After that, the game was modified according to the survey results and evaluated by the participants again. The evaluation results showed that consistency was improved. We propose three consistency design principles based on our findings. They can help multi-device applications improve consistency across devices so as to enhance inter-usability and user experience.

Tian Xie, Zhifeng Jin, Zhejun Liu, Entang He

Multimodal and Natural Interaction in VAMR

Frontmatter
The Effect of Body-Based Haptic Feedback on Player Experience During VR Gaming

As the interest in virtual reality (VR) technology as a game console has rapidly grown over the past few years, many new technologies are also being developed to further enhance the VR gaming experience. One of these technologies is haptic feedback vests, which are beginning to hit the market with claims of elevating the player experience (PX). Since the use of haptic vests during gameplay is still in its early stages, their influence on the PX during VR gaming is understudied. Accordingly, the current study investigated the effect of providing body-based haptic feedback on players’ sense of presence and overall PX during VR gaming. Participants played a VR-based rhythm game both with and without a haptic feedback vest and provided self-reported ratings of their sense of presence and PX. Results revealed that there was no significant difference between playing with a haptic vest and without a haptic vest in terms of players’ sense of presence and PX during the gameplay. As a whole, these results indicate that providing body-based haptic feedback using a haptic vest may not always increase sense of presence and PX levels when playing a rhythm game in VR.

Michael Carroll, Caglar Yildirim
User Defined Walking-In-Place Gestures for Intuitive Locomotion in Virtual Reality

Locomotion is one of the fundamental interactions in virtual reality (VR). As a simple yet naturalistic way to enable VR locomotion, walking-in-place (WIP) techniques have been actively developed. Even though various WIP gestures have been proposed, they were adopted or designed from the perspective of developers, not users. This limits the benefits of WIP, as unnatural gestures may result in a higher cognitive load to learn and memorize, worse presence, and increased sensory conflict. Therefore, this study elicited natural WIP gestures for forward, sideways, and backward walking directions from users. Twenty participants experienced the movement while wearing a VR headset and elicited WIP gestures for 8 walking directions. The grouping results showed that Turn body + Stepping-in-place (SIP) and Step one foot + SIP/Rock/Stay formed four promising WIP gesture sets for VR locomotion. A comparison between elicited and existing gestures revealed that elicited gestures have the potential to outperform existing ones, as they are easier to perform, less fatiguing, and yield higher presence. The generated WIP gesture sets could be used in gesture-based VR applications to provide a better user experience and greater movement options in VR locomotion.

Woojoo Kim, Eunsik Shin, Shuping Xiong
Exploring Human-to-Human Telepresence and the Use of Vibro-Tactile Commands to Guide Human Streamers

Human-to-human telepresence is rising to mainstream use, and there is opportunity to provide rich experiences through novel interactions. While previous systems are geared towards situations where two users are previously acquainted, or provide channels for verbal communication, our work focuses on situations where audio is not desirable or available, by incorporating vibro-tactile commands into a telepresence setup. We present results from a lab-based study regarding a human-to-human telepresence system which enables one person to remotely control another through these vibro-tactile cues. We conducted a study with 8 participants to solicit their feedback when acting as a Streamer, 8 additional participants to solicit feedback as a Viewer, and 30 bystanders, through surveys and debriefing sessions. While our participants generally found the application favorable, we did find mixed feelings towards vibro-tactile devices, and much room for improvement for the whole interaction. We discuss the implications of our findings and provide design guidelines for future telepresence developers.

Kevin P. Pfeil, Katelynn A. Kapalo, Seng Lee Koh, Pamela Wisniewski, Joseph J. LaViola Jr.
Pseudo-haptic Perception in Smartphones Graphical Interfaces: A Case Study

Human-computer interaction is a characteristic that strongly influences the user experience in computer systems, especially Virtual Reality and Augmented Reality. The ability to perform tasks using various human sensory channels (e.g., vision, hearing and touch) can increase the efficiency of these systems. The term pseudo-haptic is used to describe haptic effects (for example, stiffness and viscosity) perceived in touch interaction without actuators. Such effects are generated by visual changes that can improve the user experience. Pseudo-haptic interaction can be created on devices, such as smartphones, with graphical interfaces and touch screens. This paper presents an experiment that uses six types of materials (real and virtual) to check the perception and measure the level of perception of users in relation to the pseudo-haptic effect of stiffness, when the task of pressing the material is performed. A comparison of the perception of each participant in relation to virtual materials was also performed when the effect is applied alone and when it is combined with the device’s vibration motor. The results showed that the pseudo-haptic effects are perceived by the participants and in most materials the level of stiffness is similar to that of real materials. The use of the vibration feature combined with the pseudo-haptic approach can mitigate the differences in perception between real and virtual materials.

Edmilson Domaredzki Verona, Beatriz Regina Brum, Claiton de Oliveira, Silvio Ricardo Rodrigues Sanches, Cléber Gimenez Corrêa
A Research on Sensing Localization and Orientation of Objects in VR with Facial Vibrotactile Display

Tactile display technology has been widely shown to be effective for human-computer interaction. In the various quantitative methods used to evaluate VR user experience (such as presence, immersion, and usability), multi-sensory factors account for a significant proportion. Therefore, the integration of VR HMDs and tactile displays is a likely direction of VR application and innovation. The BIP (Break in Presence) phenomenon affects the user's spatial awareness when entering or leaving VR environments. We extracted orientation and localization tasks to study the influence of facial vibrotactile display on these tasks. Related research has mainly focused on body regions such as the torso, limbs, and head. We chose the face region, and to carry out the experiment a VR-based wearable prototype, “VibroMask”, was built to implement facial vibrotactile perception. First, behavioral data on subjects' discrimination of vibrotactile information were collected to obtain an appropriate display paradigm. We found that the discrimination accuracy of low-frequency vibration was higher under a loose wearing status, and that the delay offset of one-point vibration could better suit the orientation task. Second, the effect of facial vibrotactile display on localization and orientation discrimination tasks for objects in VR was examined. Finally, subjects' feedback was collected through an open-ended questionnaire; users gave a higher subjective evaluation of the VR experience with facial vibrotactile display.

Ke Wang, Yi-Hsuan Li, Chun-Chen Hsu, Jiabei Jiang, Yan Liu, Zirui Zhao, Wei Yue, Lu Yao
HaptMR: Smart Haptic Feedback for Mixed Reality Based on Computer Vision Semantic

This paper focuses on tactile feedback based on semantic analysis using deep learning algorithms on a mobile Mixed Reality (MR) device, a system we call HaptMR. In this way, we improve MR's immersive experience and achieve better interaction between the user and real/virtual objects. Based on the second-generation Mixed Reality device HoloLens 2 (HL2), we build a haptic feedback system that utilizes the hand tracking of HL2 and fine haptic modules on the hands. Furthermore, we adapt the deep learning model Inception V3 to recognize the rigidity of objects. According to the scene's semantic analysis, when users make gestures or actions, their hands receive force feedback similar to a real haptic sense. We conduct a within-subject user study to test the feasibility and usability of HaptMR. In the user study, we design two tasks, covering hand tracking and spatial awareness, and then evaluate the objective interaction experience (interaction accuracy, algorithm accuracy, temporal efficiency) and the subjective MR experience (intuitiveness, engagement, satisfaction). After visualizing the results and analyzing the user study, we conclude that the HaptMR system improves the immersive experience in MR. With HaptMR, we can counter the sense of inauthenticity produced by MR. HaptMR could support applications in industrial usage, spatial anchors, virtual barriers, and 3D semantic interpretation, and serve as a foundation for other implementations.

Yueze Zhang, Ruoxin Liang, Zhanglei Sun, Maximilian Koch
Position Estimation of Occluded Fingertip Based on Image of Dorsal Hand from RGB Camera

The development of Virtual Reality (VR) and the popularization of motion capture enable a user's hands and fingers to interact directly with virtual objects in a VR environment without holding any equipment. However, an optical motion capture device cannot track the user's fingers when they are occluded from the device by the hand. Self-occlusion hinders correct estimation of the fingertips, preventing natural interaction. This paper proposes a method for estimating the 3D positions of occluded fingertips using an image of the dorsal hand, which a head-mounted camera can capture. The method employs deep neural networks to estimate the 3D coordinates of the fingertips from an image of the dorsal hand. This paper describes the processing pipeline used to build training datasets for the deep neural networks. Hand segmentation is used to preprocess the image, and coordinate alignment is applied to transform coordinates between two camera coordinate systems. The collected dataset includes data with different subject hand types, different interaction methods, and different lighting conditions. The paper then evaluates the effectiveness of the proposed method on the collected dataset. A model based on ResNet-18 had the smallest average 3D error, 7.19 mm, among the three models compared. The same model structure was used to build a baseline model that estimates 3D coordinates from an image of the palm. An evaluation of the models shows that the difference between the errors of the palm and dorsal-hand models is smaller than 1.5 mm. The results show that the method can effectively predict a fingertip position with low error from an image of the dorsal hand even when the fingertip is occluded from the camera.

Zheng Zhao, Takeshi Umezawa, Noritaka Osawa
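The abstract above reports an average 3D error of 7.19 mm. A small sketch of the standard metric behind such a figure, the mean Euclidean distance between predicted and ground-truth fingertip positions (the function and sample points are illustrative, not the authors' evaluation code), might be:

```python
import math

# Illustrative sketch of the mean 3D error metric: the average Euclidean
# distance between predicted and ground-truth 3D points, in the units of
# the inputs (e.g. millimetres).

def mean_3d_error(pred, gt):
    """Average Euclidean distance over paired (x, y, z) points."""
    assert len(pred) == len(gt) and pred, "need equal, non-empty point lists"
    total = sum(math.dist(p, g) for p, g in zip(pred, gt))
    return total / len(pred)

pred = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
gt = [(3.0, 4.0, 0.0), (10.0, 0.0, 5.0)]
print(mean_3d_error(pred, gt))  # (5 + 5) / 2 = 5.0
```

Averaging per-point Euclidean distances in this way treats every fingertip equally; per-finger or per-condition breakdowns would simply partition the point pairs before averaging.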

Head-Mounted Displays and VR Glasses

Frontmatter
Usability and User Experience of Interactions on VR-PC, HoloLens 2, VR Cardboard and AR Smartphone in a Biomedical Application

The Augmented (AR), Mixed (MR) and Virtual Reality (VR) technologies, here subsumed as “xR”, have matured in recent years. The research focus of this paper is to study interaction on four different xR platforms to improve medical applications in the future. Often a medical application could be created as either a VR or an AR scenario, and which xR platform should be used in a specific use case may depend on multiple factors. Therefore, it is interesting to know whether there are any differences in usability or user experience between them. There are various manipulations that can be performed on a virtual object in xR. The major interaction possibilities explored here are: addressing, grasping, moving and releasing objects. We use a specific virtual scenario to assess user experience and usability of the aforementioned interactions on the four xR platforms. We compare differences and commonalities in the way the interactions are achieved on all platforms. A study with twenty-one participants gives insight into three main aspects: First, even though the four platforms have very different interaction capabilities, all can be suitable for biomedical training tasks. Second, all four platforms result in high usability and user experience scores. Third, the HoloLens 2 was the most attractive device, but its usability and user experience remained lower in some aspects than even the very simple VR Cardboard.

Manisha Suresh Balani, Johannes Tümler
Simulation of the Field of View in AR and VR Headsets

Technical parameters of today’s Optical See-Through Head Mounted Displays (OST-HMDs) do not fully match the industrial requirements yet: Especially the small field of view (FoV) of current OST-HMDs is a hindrance for industrial use. The FoV is a technical parameter the user is always confronted with while the immersive experience of Augmented Reality takes place: It defines the extent of the observable augmented world where virtual objects can be perceived. This experience is limited by the augmented objects being cut off at the screen boundaries. This paper describes a scientific approach to simulate the FoV of OST-HMDs with the help of AR and VR devices. It aims at enabling tests and validation of necessary FoV-specifications of HMDs. Therefore, a study to simulate different FoVs and evaluate the necessary FoV size for manual two-handed automotive assembly tasks is presented. Results show significant differences in ratings between AR and VR but nearly no differences between the participant groups “AR/VR experts” and “assembly line workers”.

Sarah Brauns, Johannes Tümler
Exploring Perspective Switching in Immersive VR for Learning First Aid in Lower Secondary Education

There is increased interest in how immersive learning can be used to support educational practices. Immersive virtual reality (IVR) with head-mounted displays (HMDs) is a technology that provides immersive and smart learning environments where learners can interact and participate in an entirely virtual world. Based on the literature and lessons learned from an ongoing research project, we identify potential benefits and challenges of incorporating perspective switching in an immersive VR application. The VR application will be used to teach and stimulate learning of first aid in lower secondary education. The overall goal is to develop a user-centered VR application through Participatory Design (PD). The main contribution of this paper is a conceptual framework and guidelines for the process of developing IVR in which users can switch between a first-person perspective and a third-person perspective.

Tone Lise Dahl, Olve Storlykken, Bård H. Røssehaug
Beyond Visible Light: User and Societal Impacts of Egocentric Multispectral Vision

Multi-spectral imagery is becoming popular for a wide range of application fields from agriculture to healthcare, mainly owing to advances in consumer sensor and display technologies. Modern augmented reality (AR) head-mounted displays already combine a multitude of sensors and are well suited for integration with additional sensors, such as cameras capturing information from different parts of the electromagnetic spectrum. In this paper, we describe a novel multi-spectral vision prototype based on the Microsoft HoloLens 1, which we extended with two thermal infrared (IR) cameras and two ultraviolet (UV) cameras. We performed an exploratory experiment in which participants wore the prototype for an extended period of time and assessed its potential to augment our daily activities. Our report covers a discussion of qualitative insights on personal and societal uses of such novel multi-spectral vision systems, including their applicability during the COVID-19 pandemic.

Austin Erickson, Kangsoo Kim, Gerd Bruder, Gregory F. Welch
No One is Superman: 3-D Safety Margin Profiles When Using Head-Up Display (HUD) for Takeoff in Low Visibility and High Crosswind Conditions

The role of crosswinds in the relationship between crew workload and size of safety margin during low visibility takeoffs using HUD was examined. 3-dimensional (3-D) safety margin profiles for different low visibility takeoff conditions and guidance types were generated utilizing the raw and standardized scores for the six NASA-TLX subscales, with and without the effect of crosswinds. The results underlined the critical role for aviation safety of a) building an evocative shared mental model within the pilots’ community of the multi-faceted nature of safety margin; and b) having a clear understanding of the complex interplay between the many factors that affect its shape and size.

Daniela Kratchounova, Inchul Choi, Theodore Mofle, Larry Miller, Jeremy Hesselroth, Scott Stevenson, Mark Humphreys
Robust Camera Motion Estimation for Point-of-View Video Stabilization

Point-of-view videos recorded by augmented reality glasses contain jitter because they are acquired during users' actions in varying environments. Applying video stabilization to such videos is difficult due to the sensitivity of conventional keypoint-based motion estimation to environmental conditions: it is prone to tracking failure in low-textured or dark environments. To overcome this limitation, we propose a neural network-based motion estimation method for video stabilization. Our network predicts frame-to-frame motion with high accuracy by focusing on global camera motion, while ignoring local motion caused by moving objects. Motion prediction takes only up to 10 ms, so we achieve real-time stabilization on modern smartphone hardware. We demonstrate that our method outperforms keypoint-based motion estimation and that the quality of the estimated motion is good enough for video stabilization. Our network is trainable without ground truth and easily scalable to large datasets.

Wonwoo Lee, Byeongwook Yoo, Deokho Kim, Jaewoong Lee, Sunghoon Yim, Taehyuk Kwon, Gunill Lee, Jiwon Jeong
Rendering Tree Roots Outdoors: A Comparison Between Optical See Through Glasses and Smartphone Modules for Underground Augmented Reality Visualization

In this paper we propose augmented reality (AR) modes for showing virtual tree roots in nature. The main question that we want to answer is: "How is it possible to convincingly represent virtual 3D models of tree roots as being located underground in the forest soil using AR technology?". We present different rendering and occlusion modes and how they were implemented on two different types of hardware. We describe two user studies that we performed to compare the AR visualization for optical see-through glasses (HoloLens head-mounted display) and for mobile devices (smartphone). We specifically focus on depth perception and situated visualization (the merging of real and virtual environments and actions). After discussing the experiences collected during outdoor user tests and the results of a questionnaire, we give some directions for the future use of AR in nature and as part of environmental education settings. The central result of the study is that while supporting depth perception with additional depth cues, specifically occlusion, is very beneficial in the mobile device setting, this support does not change depth perception significantly in the stereo HMD setting.

Gergana Lilligreen, Philipp Marsenger, Alexander Wiebel
Using Head-Mounted Displays for Virtual Reality: Investigating Subjective Reactions to Eye-Tracking Scenarios

Virtual reality head-mounted displays (HMDs) have recently incorporated eye-tracking into hardware design. For the present study, the Varjo VR-1 and HTC Vive Pro Eye HMDs were used for three eye-tracking scenarios, while immersion, simulator sickness, and visual discomfort questionnaires measured subjective reactions. There is a lack of research comparing HMDs equipped with eye-tracking capabilities in terms of the subjective measures employed. In this study, the HMDs were investigated using between-subjects and within-subjects designs. For the between-subjects design, the independent variable was the HMD (i.e., Varjo VR-1 or HTC Vive Pro Eye) and the dependent variables were the immersion, simulator sickness, and visual discomfort measurements. For the within-subjects design, simulator sickness and visual discomfort were evaluated across scenarios. Forty participants were asked to detect and confirm the same target stimulus in each of the three eye-tracking scenarios of increasing ecological complexity. Non-parametric statistics were used for the results, given non-normal data. Since between-groups differences were not found for immersion, simulator sickness, or visual discomfort, recommendations are provided for updating the survey measures. Within-groups differences for both simulator sickness and visual discomfort are explained with plausible causes and reduction methods. Correlation results indicate overlap between visual discomfort and simulator sickness; immersion lacked significant correlations with the other measures. Overall, survey updates and individual differences (e.g., ocular aspects) should be considered for future research and development.

Crystal Maraj, Jonathan Hurter, Joseph Pruitt
Omnidirectional Flick View

We propose a novel interface that naturally and dynamically augments the practical horizontal field-of-view of a head-mounted display (HMD). Instead of showing the whole omnidirectional view, we achieve dynamic field-of-view (FoV) augmentation by combining two methods: (i) Neck-yaw Boosting, which magnifies the movement of the view relative to neck yawing, and (ii) Dynamic FoV Boosting, a slight translation between local and global views. A neck-flicking motion expands the FoV and extends the direction of neck yawing. Using these methods together greatly reduces the burden of turning the head while reducing optical flow to suppress VR sickness.

Ryota Suzuki, Tomomi Sato, Kenji Iwata, Yutaka Satoh

VAMR Applications in Design, the Industry and the Military

Frontmatter
Virtual Fieldwork: Designing Augmented Reality Applications Using Virtual Reality Worlds

AR technology continues to develop and is expected to be used in a much wider range of fields. However, existing head-mounted displays for AR are still inadequate for use in daily life. Therefore, we focused on using VR to develop AR services and explored the possibility of virtual fieldwork. By comparing fieldwork using AR with virtual fieldwork using VR, we revealed the feasibility and limitations of virtual fieldwork. We showed that virtual fieldwork is effective in an indoor environment where no other people are present. By using virtual fieldwork, researchers can obtain qualitative results more easily than with traditional fieldwork. On the other hand, our results suggest that it is difficult to realize virtual fieldwork in a large environment, such as outdoors, and in the presence of others. In particular, participants do not perceive virtual fieldwork as fieldwork in the real world because the experience of walking in a large space is difficult to reproduce in conventional VR.

Kota Gushima, Tatsuo Nakajima
Contextually Adaptive Multimodal Mixed Reality Interfaces for Dismounted Operator Teaming with Unmanned System Swarms

US dismounted Special Operations Forces operating in near-threat environments must maintain their Situation Awareness, survivability, and lethality to maximize their effectiveness on mission. As small unmanned aerial systems and unmanned ground vehicles become more readily available as organic assets at the individual Operator and team level, these Operators must make the decision to divert attention and resources to the control of these assets using touchscreens or controllers to benefit from their support capabilities. This paper provides an end-to-end overview of a solution development process that started with a broad and future-looking capabilities exploration to address this issue of unmanned system control at the dismounted Operator level, and narrowed to a fieldable solution offering immediate value to Special Operation Forces. An overview of this user-centric design process is presented along with lessons learned for others developing complex human-machine interface solutions for dynamic and high-risk environments.

Michael Jenkins, Richard Stone, Brodey Lajoie, David Alfonso, Andrew Rosenblatt, Caroline Kingsley, Les Bird, David Cipoletta, Sean Kelly
Virtual Solutions for Gathering Consumer Feedback on Food: A Literature Review and Analysis

Addressing consumer needs is key to success in new product development. Due to COVID-19, however, gathering feedback on food products has become challenging. Our preliminary research on the food industry revealed that the socially distanced lifestyle has deprived food practitioners of in-person testing platforms, inspiring our research questions. Although a myriad of virtual methods for food testing have been reported in the past two decades, the literature does not provide systematic assessment of their applicability. Therefore, in this review of 108 papers, we delineate the landscape of virtual technologies for food testing and present their practical implications. From our analysis, VR emerged as a promising tool, yet it was not being used by practitioners. Other technologies (e.g. flavor simulators) were too preliminary to be adopted in industry. Furthermore, the types of technologies were fragmented, leaving much room for cross-tech integration. Future research goals to address the gaps are discussed.

Summer D. Jung, Sahej Claire, Julie Fukunaga, Joaquin Garcia, Soh Kim
Modernizing Aircraft Inspection: Conceptual Design of an Augmented Reality Inspection Support Tool

Aircraft maintenance is critical to the Navy's fleet readiness; however, ongoing delays in completing maintenance tasks and increases in maintenance-related mishaps highlight a need for improvement in the fleet's maintenance processes. Central to correcting the current maintenance shortfall is improving the training and support of maintenance personnel, as many transitioning experts are currently being replaced by junior, inexperienced maintainers. Augmented, virtual, and mixed reality (AR/VR/MR) – known collectively as extended reality (XR) – present a promising avenue by which to fill these skill and knowledge gaps. The present paper details the conceptual design of an AR-based operational support tool targeting point-of-need support during nondestructive inspection of aircraft, referred to as Augmented Reality Technician Inspection for Surface Anomalies and Noncompliance (ARTISAN). ARTISAN is an AR tool that provides step-wise information for inspection procedures and overlays AI predictions regarding the location of anomalies to enhance operational support. It also allows for the real-time, continuous capture of identified anomalies; using an AR head-worn device (HWD), maintainers are able to take first-person point-of-view media and geo-spatially tag surface anomalies, reducing ambiguity associated with anomaly detection and ultimately increasing readiness. The current article describes how ARTISAN was conceptualized, from initial solution to placement of augmented content to finalized system architecture. Finally, the article concludes with a discussion on how ARTISAN was optimized for the end-user through iterative user testing via remote platforms with Naval stakeholders, SMEs, and relevant end users.

Clay D. Killingsworth, Charis K. Horner, Stacey A. Sanchez, Victoria L. Claypoole
The Potential of Augmented Reality for Remote Support

This paper addresses the question of whether an AR-enabled remote service based on standard hardware offers benefits for remote support. To answer this question, a lab experiment was conducted with 66 participants to examine the current potential of AR for field service tasks. The authors compare the productivity of a pure video-based remote support system with the productivity of an AR-enabled remote support system. The AR system offers real-time video sharing and, additionally, virtual objects that can be used by the remote experts as "cues" to guide the field engineer. Third, a system consisting of a head-mounted display with cues was tested. The results of the lab experiment show that the guided engineers could clearly benefit from AR-based remote support. Nevertheless, the influence of both the engineer's prior skillset and the remote expert's ability to instruct is greater than that of the deployed tool.

Stefan Kohn, Moritz Schaub
A Review of Distributed VR Co-design Systems

This paper presents a systematic literature review to identify the challenges and best practices of distributed VR co-design systems, in order to guide the design of such systems. It was found that VR, due to its intuitive format, is useful at co-design review meetings since it allows everyone, not just experts, to provide input, resulting in additional practical insights from project stakeholders. It also allows the identification of design deficiencies that were not observable using traditional methods and that were overlooked by the traditional panel of expert reviewers. Our review also indicates that VR should complement, not replace, traditional 2D CAD drawings. Our review, however, shows there is little conclusive evidence about the features and functions that should be included in the design of such systems, and that more quantitative studies are needed to support the findings from qualitative studies.

Jean-François Lapointe, Norman G. Vinson, Keiko Katsuragawa, Bruno Emond
The Mobile Office: A Mobile AR System for Productivity Applications in Industrial Environments

This article proposes the Mobile Office, an AR-based system for productivity applications in industrial environments. It focuses on addressing workers' needs both while they move through complex environments and when they want to do productive work similar to a workstation setting. For this purpose, the Mobile Office relies on a bi-modal approach (mobile and static modes) to address both scenarios while considering critical aspects such as UI, ergonomics, interaction, safety, and cognitive user abilities. Its development relies first on a systematic literature review for establishing guidelines and system aspects. AR-UI mockups are then presented that incorporate the literature and the authors' expertise in this area. Three hardware configurations are also presented for the implementation of the Mobile Office. The Mobile Office was validated through a survey of potential users, who provided feedback on their perceptions of the proposed system. The survey results revealed agreement on critical aspects of particular importance for effective productivity support in the Mobile Office's target scenarios. We incorporated these critical aspects and suggest future research directions for the Mobile Office's further development.

Daniel Antonio Linares Garcia, Poorvesh Dongre, Nazila Roofigari-Esfahan, Doug A. Bowman
Virtual Reality Compensatory Aid for Improved Weapon Splash-Zone Awareness

Military personnel often have very little time to make battlefield decisions. These decisions can include the identification of weapon splash-zones and of the objects in and near those zones. With the advances made in commercially available virtual reality technologies, it is possible to develop a compensatory aid to help with this decision-making process. This paper presents a virtual reality system that was designed to be such an aid. This system was tested in a user study where participants had to identify whether objects were in a splash-zone, where those objects were relative to that splash-zone, and what the objects were. This user study found that the system was well-received by participants, as measured by high system usability survey responses. Most participants were able to identify objects with high accuracy and speed. Additionally, throughout the user study participant performance improved to near-perfect accuracy, indicating that the system was quickly learned by participants. These positive results imply that this system may be able to serve as a viable, easy-to-learn aid for splash-zone-related decision making.

Richi Rodriguez, Domenick Mifsud, Chris Wickens, Adam S. Williams, Kathrine Tarre, Peter Crane, Francisco R. Ortega
Mixed Reality Visualization of Friendly vs Hostile Decision Dynamics

We present an investigation using mixed reality technology to visualize decision-making dynamics for a Friendly vs Hostile wargame in a Multi-Domain Operation environment. The requirement of penetrate and dis-integrate phases under Multi-Domain Operations aligns well with the advantages of Artificial Intelligence/Machine Learning because of 1) the very short planning timeframe for decision-making, 2) the requirement of simultaneous planning for multiple operations, and 3) the interdependence of operations. In our decision dynamics research, we propose to advance the art and science of wargaming by leveraging brain science to extend the use of Artificial Intelligence/Machine Learning algorithms, and by using mixed reality technology to visualize complex battlespace scenarios requiring a better understanding of the dynamics of a complex decision-making process.

Simon Su, Sue Kase, Chou Hung, J. Zach Hare, B. Christopher Rinderspacher, Charles Amburn
Doing Versus Observing: Virtual Reality and 360-Degree Video for Training Manufacturing Tasks

Virtual reality is increasingly used for training workers in manufacturing processes in a low-risk, realistic practice environment. Elements of the manufacturing process may be time-consuming or costly to implement accurately in simulation. The use of 360-degree video, on its own or combined with interactive elements, can bridge the gap between affordable VR capabilities and requirements for effective training. This paper discusses the use of 360-degree video to create "learning by observing" and "learning by doing" VR training modules. The "learning by observing" module addresses work task training entirely with 360-degree videos embedded in an interactive training environment. In the "learning by doing" module, a hybrid approach successfully leverages 360-degree video to show complex task processes while implementing fully interactive elements where it was cost-effective to do so. Participants suggest "learning by observing" is effective while "learning by doing" is more engaging.

Emily S. Wall, Daniel Carruth, Nicholas Harvel

VAMR in Learning and Culture

Frontmatter
Design and Research on the Virtual Simulation Teaching Platform of Shanghai Jade Carving Techniques Based on Unity 3D Technology

As one of the four major schools of Chinese jade carving, Shanghai style jade carving is a manifestation of Shanghai's unique regional culture and Shanghai style cultural spirit, and it has extremely high cultural and commercial value. However, the teaching of jade carving suffers from problems such as high teaching cost, a lack of teachers, and limited time and place. In order to solve these problems, we designed a virtual teaching platform that combines Shanghai style jade carving skills with virtual simulation technology. We used Rhino and 3ds Max to model the virtual environment. Based on the Unity 3D development engine, we used the C# scripting language to realize the human-computer interaction functions of the system, and completed the design of teaching cognition, training of carving techniques, selection of carving tools, and teaching assessment. In addition, the research elaborates in detail on the technical route, system framework, and implementation process of the virtual teaching system. Fifteen students of jade carving skills tested this system, and interviews with them verified its reliability and effectiveness. The system utilizes the immersive, interactive, and imaginative qualities of virtual technology. Therefore, it can effectively reduce the cost of jade carving training and improve learning efficiency, so that jade carving learners are not limited by space, time, or materials. Moreover, it plays an important role in promoting the application of virtual reality technology in education and training.

Beibei Dong, Shangshi Pan, RongRong Fu
IME: An MVC Framework for Military Training VR Simulators

In the military context, the traditional training methods based on field exercises have high costs, logistics complexity, spatial-temporal constraints, and safety risks. VR-based simulation environments can help address these drawbacks while serving as a platform for supplementing current training approaches, providing more convenience, accessibility, and flexibility. This paper identifies and discusses aspects that are crucial to a military training VR application and presents the first Brazilian Army VR framework (IME-VR), which seeks to facilitate the development and reuse of solutions for common issues across different military training VR applications. The usage of the framework was validated by the development of the first Brazilian Army VR simulator for Army Artillery observer training (SAOA). In addition, this case study showed that the framework support provided savings of approximately one third in application development effort (measured in person-months).

Romullo Girardi, Jauvane C. de Oliveira
Extended Reality, Pedagogy, and Career Readiness: A Review of Literature

Recently, there has been a significant spike in the level of ideation with, and deployment of, extended reality (XR) tools and applications in many aspects of the digital workplace. It is also projected that acceptance and use of XR technology to improve work performance will continue to grow in the coming decade. However, there has not been a robust level of adoption and implementation of XR technology, including augmented reality (AR), mixed reality (MR), and virtual reality (VR), within academic institutions, training organizations, government agencies, business entities, and community or professional associations. This paper examines the current literature to determine how XR and related technologies have been explored, evaluated, or used in educational and training activities. As part of the literature review, we paid special attention to how XR tools and applications are being deployed to increase the work and career readiness, performance, and resiliency of students, adult learners, and working professionals. Results from the study showed that XR applications are being used, often at pilot-testing levels, in disciplines such as medicine, nursing, and engineering. The data also show that many academic institutions and training organizations have yet to develop concrete plans for wholesale use and adoption of XR technologies to support teaching and learning activities.

Patrick Guilbaud, T. Christa Guilbaud, Dane Jennings
Development of an AR Training Construction System Using Embedded Information in a Real Environment

In recent years, emergency drills have been held in important facilities such as nuclear facilities. The problem with these drills is that the number of emergency scenarios is limited. In this study, we developed an augmented reality training system that allows users to easily create scenarios by simply assembling and placing blocks representing the components of the scenario in the real-world environment. The system is capable of simulating various scenarios involving the interaction between the trainee and the real-world environment. A subjective experiment was conducted at a nuclear facility to evaluate the usability of the system and the flexibility of scenario creation. The results showed that users can construct an AR training environment more easily and in a shorter time than with conventional text-based programming.

Yuki Harazono, Taichi Tamura, Yusuke Omoto, Hirotake Ishii, Hiroshi Shimoda, Yoshiaki Tanaka, Yoshiyuki Takahashi
LibrARy – Enriching the Cultural Physical Spaces with Collaborative AR Content

In the last decade, technology has been rapidly evolving, making humankind more dependent on digital devices than ever. We rely more and more on digitalization, and we are excited to welcome emergent technologies like Augmented Reality (AR) into our everyday activities as they show great potential to enhance our standard of life. AR applications are becoming more popular every day and represent the future of human-computer interaction, revolutionizing the way information is presented and transforming the surroundings into one broad user interface. This paper aims to present the concept of LibrARy – an AR system that intends to create innovative collaborative environments where multiple users can interact simultaneously with computer-generated content. Its main objective is to improve cooperation by enabling co-located users to access a shared space populated with 3D virtual objects and to enrich collaboration between them through augmented interactions. LibrARy will be developed as part of the Lib2Life research project, which addresses a topic of great importance – revitalizing cultural spaces in the context of recent technology advancements. In light of the research carried out for the Lib2Life project, a need for a more complex collaborative augmented environment has been identified. Therefore, we are now proposing LibrARy – an innovative component which has the potential to enhance cooperation in multi-participant settings and provide unique interactive experiences. This paper will present in more detail the concept and the functionalities of LibrARy. It will then analyze possible use case scenarios and lastly discuss further development steps.

Andreea-Carmen Ifrim, Florica Moldoveanu, Alin Moldoveanu, Alexandru Grădinaru
Supporting Embodied and Remote Collaboration in Shared Virtual Environments

The COVID-19 pandemic has had a tremendous impact on businesses, educational institutions, and other organizations that require in-person gatherings. Physical gatherings such as conferences, classes, and other social activities have been greatly reduced in favor of virtual meetings on Zoom, Webex or similar video-conferencing platforms. However, video-conferencing is quite limited in its ability to create meeting spaces that capture the authentic feel of a real-world meeting. Without the aid of body language cues, meeting participants have a harder time paying attention and keeping themselves engaged in virtual meetings. Video-conferencing, as it currently stands, falls short of providing a familiar environment that fosters personal connection between meeting participants. This paper explores an alternative approach to virtual meetings through the use of extended reality (XR) and embodied interactions. We present an application that leverages the full-body tracking capabilities of the Azure Kinect and the immersive affordances of XR to create more vibrant and engaging remote meeting environments.

Mark Manuel, Poorvesh Dongre, Abdulaziz Alhamadani, Denis Gračanin
A Survey on Applications of Augmented, Mixed and Virtual Reality for Nature and Environment

Augmented, virtual and mixed reality (AR/VR/MR) are technologies of great potential due to the engaging and enriching experiences they are capable of providing. However, the possibilities that AR/VR/MR offer in the area of environmental applications are not yet widely explored. In this paper we present the outcome of a survey meant to discover and classify existing AR/VR/MR applications that can benefit the environment or increase awareness on environmental issues. We performed an exhaustive search over several online publication access platforms and past proceedings of major conferences in the fields of AR/VR/MR. Identified relevant papers were filtered based on novelty, technical soundness, impact and topic relevance, and classified into different categories. Referring to the selected papers, we discuss how the applications of each category are contributing to environmental protection and awareness. We further analyze these approaches as well as possible future directions in the scope of existing and upcoming AR/VR/MR enabling technologies.

Jason Rambach, Gergana Lilligreen, Alexander Schäfer, Ramya Bankanal, Alexander Wiebel, Didier Stricker
Flexible Low-Cost Digital Puppet System

Puppet-based systems have been developed to help children engage in storytelling and pretend play in much prior literature. Many different approaches have been proposed to implement such puppet-based storytelling systems, and new storytelling systems are still routinely published, indicating the continued interest in the topic across domains like child-computer interaction, learning technologies, and the broader HCI community. This paper first presents a detailed review of the different approaches that have been used for puppet-based storytelling system implementations, and then proposes a flexible low-cost approach to puppet-based storytelling system implementation that uses a combination of vision- and sensor-based tracking. We contribute a framework that will help the community to make sense of the myriad of puppet-based storytelling system implementation approaches in the literature, and discuss results from a perception study that evaluated the performance of the system output using our proposed implementation approach.

Nanjie Rao, Sharon Lynn Chu, Ranger Chenore
Mixed Reality Technology Capabilities for Combat-Casualty Handoff Training

Patient handoffs are a common yet frequently error-prone occurrence, particularly in complex or challenging battlefield situations. Specific protocols exist to help simplify and reinforce the conveying of necessary information during a combat-casualty handoff, and training can both reinforce correct behavior and protocol usage while providing relatively safe initial exposure to many of the complexities and variabilities of real handoff situations, before a patient's life is at stake. Here we discuss a variety of mixed reality capabilities and training contexts that can manipulate many of these handoff complexities in a controlled manner. We finally discuss some future human-subject user study design considerations, including aspects of handoff training, evaluation or improvement of a specific handoff protocol, and how the same technology could be leveraged for operational use.

Ryan Schubert, Gerd Bruder, Alyssa Tanaka, Francisco Guido-Sanz, Gregory F. Welch
Backmatter
Metadata
Title
Virtual, Augmented and Mixed Reality
Edited by
Jessie Y. C. Chen
Gino Fragomeni
Copyright year
2021
Electronic ISBN
978-3-030-77599-5
Print ISBN
978-3-030-77598-8
DOI
https://doi.org/10.1007/978-3-030-77599-5