2022 | Book

Virtual, Augmented and Mixed Reality: Design and Development

14th International Conference, VAMR 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings, Part I


About this book

This two-volume set LNCS 13317 and 13318 constitutes the thoroughly refereed proceedings of the 14th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2022, held virtually as part of the 24th HCI International Conference, HCII 2022, in June/July 2022. The total of 1276 papers and 241 posters included in the 39 HCII 2022 proceedings volumes was carefully reviewed and selected from 5222 submissions.
The 56 papers included in this 2-volume set were organized in topical sections as follows: Developing VAMR Environments; Evaluating VAMR environments; Gesture-based, haptic and multimodal interaction in VAMR; Social, emotional, psychological and persuasive aspects in VAMR; VAMR in learning, education and culture; VAMR in aviation; Industrial applications of VAMR.
The first volume focuses on topics related to developing and evaluating VAMR environments, gesture-based, haptic and multimodal interaction in VAMR, as well as social, emotional, psychological and persuasive aspects in VAMR, while the second focuses on topics related to VAMR in learning, education and culture, VAMR in aviation, and industrial applications of VAMR.

Table of Contents

Frontmatter

Developing VAMR Environments

Frontmatter
Integration of Augmented, Virtual and Mixed Reality with Building Information Modeling: A Systematic Review

Thanks to the digital revolution, the construction industry has seen a recognizable evolution, with the world heading towards modern construction based on the use of Building Information Modeling (BIM). This evolution has been marked by the integration of this paradigm with immersive technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). Over the last few years, such integrations have started to emerge. This paper proposes a Systematic Literature Review (SLR) of recent studies on the integration of BIM with immersive environments using VR/AR/MR technologies. Four electronic databases were searched for eligible studies, namely Google Scholar, ACM Digital Library, IEEE Xplore, and ScienceDirect. From an initial cohort of 239 studies, 28 were retained for analysis. The main findings of this review concern the stages of the project life cycle in which the immersive technologies are being implemented and the approaches/techniques used to integrate BIM with the three immersive technologies, along with current limitations and perspectives.

Ahlem Assila, Amira Dhouib, Ziad Monla, Mourad Zghal
Visualization of Macroscopic Structure of Ultra-high Performance Concrete Based on X-ray Computed Tomography Using Immersive Environments

Ultra-high performance concrete (UHPC) is a cementitious composite material which uses steel fibers, cement, silica fume, fly ash, water, and admixtures to provide better structural performance and durability compared to conventional concrete. UHPC is an attractive novel material because of its higher compressive strength, higher tensile capacity, and ultralow permeability. Currently in the United States, UHPC is batched in smaller quantities on-site under very strict quality control by representatives from the producer. This has significantly increased the cost (more than 15 times higher) of poured UHPC compared to conventional concrete. It has not been studied whether substituting the representatives from a commercial producer with trained concrete technicians from public and other entities would actually introduce substantial defects in the structure of poured UHPC. As with conventional concrete, the mechanical properties of UHPC, a heterogeneous material, are expected to differ across structural scales (microscopic, mesoscopic, and macroscopic). Past research has shown that defects that exist on smaller scales can dictate the performance of UHPC over time. However, macroscopic structural analysis may be the most effective method to capture the defects and uncertainties due to the quality of on-site workmanship. This research focuses on X-ray computed tomography (XCT) of macroscopic structures. XCT is an effective analysis tool for structural components (e.g., beams, columns, walls, slabs) in civil and critical infrastructure. This paper presents a detailed overview outlining how the macroscopic structure of concrete characterized by XCT can be visualized in an immersive environment using virtual reality while capturing and recording the details of a scanned object for both current and future analysis.
The high-resolution two-dimensional (2-D) tomographic slices, and 3-D virtual reconstructions of the 2-D slices with subsequent visualization, can represent a spatially accurate and qualitatively informative rendering of the internal structure of UHPC components poured by individuals who are not necessarily representatives of a commercial producer. Results from this research are expected to reduce the cost of UHPC by modifying the guidelines for on-site pours. This can contribute to wider adoption of UHPC in future projects.

Rajiv Khadka, Mahesh Acharya, Daniel LaBrier, Mustafa Mashal
Photographic Composition Guide for Photo Acquisition on Augmented Reality Glasses

Capturing meaningful moments with cameras is a major application of Augmented Reality Glasses (AR Glasses). Taking photos with the head-mounted cameras of AR Glasses brings a new photo acquisition experience to users, because AR Glasses lack the viewfinders of conventional cameras. Users may have difficulty figuring out the region the head-mounted camera will capture. To address this issue, we propose a photographic composition guide for AR Glasses. The proposed method analyzes video streams from the camera and automatically determines the image region that has a high aesthetic quality score. Photos taken from the recommended position result in a better photographic composition. Our method achieved a score of 4.03 in a Mean Opinion Score (MOS) test, demonstrating that its recommendations correspond to human expectations of the aesthetic quality of photos.

Wonwoo Lee, Jaewoong Lee, Deokho Kim, Gunill Lee, Byeongwook Yoo, Hwangpil Park, Sunghoon Yim, Taehyuk Kwon, Jiwon Jeong
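The abstract above does not specify the aesthetic scoring model. As a toy stand-in, candidate crop regions can be ranked with a rule-of-thirds heuristic; the function names, crop format, and scoring rule below are illustrative assumptions, not the paper's method:

```python
def rule_of_thirds_score(subject_xy, crop):
    """Toy stand-in for a learned aesthetic score: rate a candidate
    crop (x, y, w, h) by how close the subject lies to the nearest
    rule-of-thirds power point. Returns a value in [0, 1]."""
    x, y, w, h = crop
    sx, sy = subject_xy
    # The four power points at 1/3 and 2/3 of the crop's width/height.
    points = [(x + w * fx, y + h * fy) for fx in (1/3, 2/3) for fy in (1/3, 2/3)]
    d = min(((sx - px) ** 2 + (sy - py) ** 2) ** 0.5 for px, py in points)
    diag = (w ** 2 + h ** 2) ** 0.5
    return max(0.0, 1.0 - d / diag)

def best_crop(subject_xy, candidate_crops):
    """Recommend the candidate region with the highest score, mirroring
    the guide's 'recommended position' step."""
    return max(candidate_crops, key=lambda c: rule_of_thirds_score(subject_xy, c))
```

In the actual system, a learned aesthetic model would replace the heuristic, and candidates would be crops of the live camera stream.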
Method to Create a Metaverse Using Smartphone Data

With the development of internet technology, several IT companies and users have become interested in virtual worlds, called metaverses. However, one of the main problems for a metaverse is the amount of resources required to develop it. To reduce the burden of high computing power and other related resources, we propose a method that uses mobile phone functions and data to generate a personal virtual space, as there is still a research gap in this area. In this study, we propose a method to intuitively generate a personal virtual space using smartphone data: a new type of metaverse application built from the photo data saved on a smartphone. We hypothesized that using the new metaverse application induces more happiness and excitement than using the smartphone gallery application to view memorable photos. To evaluate the new metaverse application, we measured the emotional responses of users and compared the two applications. The results indicate that using the new metaverse application results in higher happiness and excitement.

Daehee Park, Jeong Min Kim, Jingi Jung, Saemi Choi
Development of Standards for Production of Immersive 360 Motion Graphics, Based on 360 Monoscopic Videos: Layers of Information and Development of Content

Virtual reality and immersive technologies are currently in full development. One of the most widely used formats in the medium is 360 linear video, which is proliferating thanks to the 360 cameras available to users today. The forms of production in this new medium, including filming and post-production, have been transformed in many of their technical procedures. But what about computer-generated graphics, such as motion graphics? The creation of linear video content with the motion graphics technique, although increasingly common, requires specific procedures and techniques that differ from formats that do not fall into the 360 category, immersive or otherwise. In this paper, we aim to establish a series of mechanisms and standards, based on the knowledge gained from experience with filmed 360-degree videos, to help facilitate the development of motion graphics proposals, also considering the parameters of usability in virtual reality.

Jose Luis Rubio-Tamayo, Manuel Gertrudix, Mario Barro
Multi-user Multi-platform xR Collaboration: System and Evaluation

Virtual technologies (AR/VR/MR, subsumed as xR) are used in many commercial applications, such as automotive development, medical training, architectural planning, teaching, and more. Usually, existing software products offer either a VR, AR, or 2D monitor experience. This limitation can be a hindrance. Consider a simple application example: users at a university join an xR teaching session in a mechanical engineering lecture. They bring their own xR device, join the session, and experience the lecture with xR support. But users may ask themselves: does the choice of my own xR device limit my learning success? In order to investigate multi-platform xR experiences, a software framework was developed and is presented here. It allows one shared xR experience for users of AR smartphones, AR/MR glasses, and VR PCs. The aim is to use this framework to study differences between the platforms and to enable research towards better quality multi-user multi-platform xR experiences. We present results of a first study that made use of our framework. We compared user experience, perceived usefulness, and perceived ease of use between three different xR device types in a multi-user experience. Results are presented and discussed.

Johannes Tümler, Alp Toprak, Baixuan Yan
Using Multi-modal Machine Learning for User Behavior Prediction in Simulated Smart Home for Extended Reality

We propose a multi-modal approach to manipulating smart home devices in a smart home environment simulated in virtual reality. Our approach seeks to determine the user's intent in the form of the user's target smart home device and the desired action for that device to perform. We do this by examining information from two main modalities, the spoken utterance and spatial information (such as gestures, positions, hand interactions, etc.), along with additional information such as the device's state. Since the information contained in the user's utterance and the spatial information can be disjoint or complementary to one another, we process the two sources of information in parallel using multiple machine learning models to determine intent. The results of these models are ensembled to produce our final prediction. Aside from the proposed approach, we also present our prototype and discuss our initial findings.

Powen Yao, Yu Hou, Yuan He, Da Cheng, Huanpu Hu, Michael Zyda
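The ensembling step described in the abstract above can be sketched as weighted soft voting over per-intent probability distributions from an utterance model and a spatial model. This is a hypothetical illustration; the intent labels, weights, and combination rule are assumptions, not taken from the paper:

```python
def ensemble_intents(utterance_probs, spatial_probs, weights=(0.5, 0.5)):
    """Combine per-intent probabilities from two modality models by
    weighted averaging (soft voting). Dict keys are intent labels;
    an intent missing from one model contributes 0 for that model."""
    intents = set(utterance_probs) | set(spatial_probs)
    w_u, w_s = weights
    combined = {
        intent: w_u * utterance_probs.get(intent, 0.0)
              + w_s * spatial_probs.get(intent, 0.0)
        for intent in intents
    }
    # Renormalize so the combined scores form a probability distribution.
    total = sum(combined.values())
    return {k: v / total for k, v in combined.items()}

def predict_intent(utterance_probs, spatial_probs):
    """Final prediction: the intent with the highest ensembled score."""
    combined = ensemble_intents(utterance_probs, spatial_probs)
    return max(combined, key=combined.get)
```

The real system would obtain `utterance_probs` from a speech/NLU model and `spatial_probs` from a model over gesture and position features.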
Virtual Equipment System: Toward Bag of Holding and Other Extradimensional Storage in Extended Reality

The term ‘storage’ in real life typically refers to a container that occupies physical space, such as a room, closet, or warehouse. Our interactions with the items stored within are constrained by real-world physics. However, Virtual Reality allows us to ignore certain rules. Inspired by the Bag of Holding from Dungeons & Dragons and other entertainment media, we envision a storage system which can store more items than the physical space the container occupies by linking it to another dimension. Users can interact with the stored items as they would with containers in VR, or physically enter the storage. We refer to this approach as ‘Extradimensional Storage’. During our design and implementation of Extradimensional Storage, we identified five core components of a generic storage system: storage space, container, access, stored items, and interactor. By altering the properties associated with the core components, we are able to implement Extradimensional Storage. We further applied the five core components to reinterpret the inventory taxonomy proposed by Cmentowski et al. Thus, our contributions include a general framework for storage, an implementation of a specialized version known as Extradimensional Storage, additions to the inventory taxonomy, and an account of how properties of the core storage components can be utilized in different scenarios.

Powen Yao, Zhankai Ye, Michael Zyda
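The five core components named in the abstract above can be modeled as a small data structure, where Extradimensional Storage is simply a storage space whose capacity exceeds its container's physical volume. The attribute names and volume units below are illustrative assumptions; the abstract does not specify them:

```python
from dataclasses import dataclass, field

@dataclass
class Storage:
    """Minimal model of the five components: storage space (capacity),
    container (physical volume), access, stored items, and the
    interactor implied by the store() call."""
    container_volume: float          # physical space the container occupies
    storage_capacity: float          # how much the storage space can hold
    access: str = "opening"          # how the interactor reaches the space
    items: list = field(default_factory=list)

    @property
    def is_extradimensional(self) -> bool:
        # A Bag of Holding stores more than its container could.
        return self.storage_capacity > self.container_volume

    def store(self, item: str, volume: float) -> bool:
        """Store an item if the storage space still has room."""
        used = sum(v for _, v in self.items)
        if used + volume <= self.storage_capacity:
            self.items.append((item, volume))
            return True
        return False
```

Varying these properties recovers both ordinary containers (capacity equal to volume) and the paper's extradimensional case.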
Virtual Equipment System: First Evaluation of Egocentric Virtual Equipment for Sensory Settings

The Virtual Equipment System is a system in extended reality that provides the user with equipment slots and equipment that serve as an interface for further interactions. The equipment slots are storage locations for equipment that are associated with the user spatially (egocentrically). Due to the virtual nature of the system, these egocentric equipment slots do not have to be attached to the user's body; they can instead belong in the user's personal, peripersonal, or extrapersonal space, which greatly expands the potential space for storage. Virtual Equipment are virtual objects that fulfill specific roles or functions. Their look and feel provide cues for how to interact with them as well as the potential functionalities associated with them. In this paper, we present our first results from an experimental evaluation of the Virtual Equipment System. We compare different interaction techniques available in our Virtual Equipment System with the standard technique for adjusting audio volume, and we examine the effect of having the same Virtual Equipment in different egocentric equipment slots located in the three different spaces.

Powen Yao, Shitong Shen, Michael Zyda

Evaluating VAMR Environments

Frontmatter
Effect of Personality Traits and Stressor Inducers on Users’ Cognitive Load During Interactions with VR Environments

In this paper, HCI-based design criteria focusing on managing the cognitive load of users during their interaction with Virtual Reality (VR) based training environments are presented. The design criteria explored in the paper help lay a foundation for the creation of human-centric VR environments to train users in two healthcare domains. The first domain is orthopedic surgery, and the second is related to the Covid-19 pandemic. The HCI-based perspective presented in the paper investigates criteria such as personality traits and stress inducers and their impact on cognitive load. The paper delineates the implementation of the VR-based environments and a set of attributes that guide and influence the content of the environments. The testing and assessment strategy is described, and results are included which provide insights into the impact of such HCI-based criteria on participants' acquisition of skills and knowledge during interactions with the VR environments.

Aaron Cecil-Xavier, Avinash Gupta, Shelia Kennison, Miguel Pirela-Cruz
The Development and Validation of an Augmented and Mixed Reality Usability Heuristic Checklist

Augmented Reality (AR) and Mixed Reality (MR) are emerging technologies that are becoming increasingly popular. Because these technologies are so new, there is a lack of standards and consistency in application and hardware design. This can make them difficult to learn and frustrating for users. One way to standardize design and enhance the usability of a product is through the use of heuristic evaluations. General heuristics, such as Nielsen's 10 or Shneiderman's 8 usability heuristics, have been used to evaluate these technologies. These heuristics are a useful starting point because they bring attention to many crucial aspects of the usability of a product. However, additional aspects that could alter the user experience may not be assessed due to the uniqueness of AR and MR. There are very few validated AR and MR heuristics in the literature for practitioners to use. The purpose of this study was to create and validate a heuristic checklist that can be used to assess and inform design changes that influence the user experience of an AR or MR application and/or device. We followed an established and comprehensive 8-stage methodology developed by Quiñones, Rusu, & Rusu to create and validate our AR and MR usability heuristic checklist [4]. This included a search and summary of the current literature, formally defining heuristics based on this literature search, and validation through heuristic evaluations, expert reviews, and user testing. Our final revised heuristic checklist included 11 heuristics and 94 checklist items that encompass usability aspects of AR and MR technologies.

Jessyca L. Derby, Barbara S. Chaparro
A Vibrotactile Reaction Time Task to Measure Cognitive Performance in Virtual and Real Environments

Cognitive load is an important concept for understanding people's performance in processing information. Several methods can be applied to assess cognitive load. As a performance-based measure, reaction time (RT) tasks can be used. Compared to physiological measures such as electroencephalography, RT tasks can be easily implemented and can be used as an alternative to subjective questionnaires like the NASA-TLX. In this paper, we present two evaluation studies of a vibrotactile wearable for RT tasks. The first study evaluates its potential for Choice Reaction Time (CRT) tasks to compare real and virtual settings; the second study uses a simple RT task to evaluate cognitive effort for two different VR locomotion techniques while working on tasks in VR. The system is based on a vibrotactile wearable for the cues/stimuli and is suited for VR settings as well as real environments. We argue that such systems allow comparing cognitive performance between real and virtual tasks, and we discuss the limitations of the system.

Markus Jelonek, Lukas Trost, Thomas Herrmann
Assessing User Experience of Text Readability with Eye Tracking in Virtual Reality

Virtual Reality (VR) technology is mostly used in gaming, videos, engineering applications, and training simulators. One thing shared among all of them is the necessity to display text. The text reading experience is not always in focus for VR systems because of limited hardware capabilities, lack of standardization, user interface (UI) design flaws, and the physical design of Head-Mounted Displays (HMDs). In this paper, key UI design variables that can improve the text reading user experience in VR were researched. Four important factors for reading in VR applications were selected: 1) type of canvas (flat/curved), 2) contrast of the virtual scene (light/dark), 3) number of columns in the layout (1/2/3 columns), and 4) text distance from the subject (1.5 m/6.5 m). For the user study, a VR app for the Oculus Quest was developed, enabling the possibility to display text while varying these features important for readability in VR. The experiment revealed parameters that are important for the text reading experience in VR. Specifically, subjects performed very well when the text was at a 6.5-meter distance from the subject with a font size of 22 pt, on a flat canvas with a one-column layout. When it comes to physiological variables, measurements behaved similarly across conditions, as all of the selected parameters were in line with the design guidelines. Therefore, the selection of final settings should be oriented more towards user experience and preferences.

Tanja Kojic, Maurizio Vergari, Sebastian Möller, Jan-Niklas Voigt-Antons
Virtual Reality is Better Than Desktop for Training a Spatial Knowledge Task, but Not for Everyone

Advances in virtual reality (VR) technology have resulted in the ability to explore high-resolution immersive environments, which seem particularly useful for training spatial knowledge tasks. However, empirical research on the effectiveness of training in VR, including for spatial knowledge-based tasks, has yielded mixed results. One potential explanation for this discrepancy is that key individual characteristics may account for differences in who benefits most from VR-based training. Previous research has suggested that immersive VR imposes high cognitive load on learners and thus impedes learning, but the amount of cognitive load experienced may be dependent on an individual’s video-game experience (VGE). Therefore, the goal of this experiment was to explore the effects of VGE on learning in VR versus a desktop-based training environment, since VGE has been demonstrated to affect performance in previous spatial navigation studies in virtual environments. In this experiment, 62 participants trained in a virtual scavenger hunt task to learn the locations of different equipment in a submarine’s machinery room. After training, participants’ spatial knowledge was assessed in a drawing task of the room’s layout. The results showed no differences overall for experimental condition (i.e., Desktop or VR) or VGE, but there was a significant interaction between these two variables. The high-VGE participants in the VR condition outperformed low-VGE participants in both the Desktop and VR conditions. This suggests that VR may be particularly useful for training experienced gamers, but both VR and Desktop seem to be equally effective for less experienced gamers in a spatial task.

Matthew D. Marraffino, Cheryl I. Johnson, Allison E. Garibaldi
Is Off-the-Shelf VR Software Ready for Medical Teaching?

Over the last decade, the use of computerized three-dimensional (3D) models in a virtual environment has become widespread to enhance medical teaching and learning, particularly in anatomy. Technologies such as Virtual Reality (VR) offer great potential to enhance medical training processes. The research focus of this paper is to investigate the effectiveness of using predesigned off-the-shelf VR software to teach medical knowledge in comparison with a conventional teaching method. Also investigated was the degree of satisfaction associated with using VR to learn. The teaching example focuses on the anatomy of the human heart because it is one of the most challenging topics to teach and comprehend due to its complex three-dimensional nature. A randomized controlled study was conducted with forty participants, equally distributed into two groups: 20 in the control (non-VR) group and 20 in the experimental (VR) group. Two learning methods were used to study the heart: the non-VR group used a PowerPoint presentation, whereas the VR group used immersive VR with off-the-shelf software. This study gives insight into three main aspects: First, there was a significant difference in anatomy knowledge between the two groups. Second, the VR group found the learning experience to be significantly more engaging, enjoyable, and useful. Third, non-customizable predesigned software can be suitable and effective for medical training tasks and applications.

Angela Odame, Johannes Tümler
Ease of Use and Preferences Across Virtual Reality Displays

Head mounted displays have become popular, but it is uncertain whether the interactive quality of these systems is sufficient for educational and training applications. This work is a longitudinal study into a variety of VR systems, which examines interface restrictions, ease of use, and user preferences with an emphasis on educational settings. Four different systems were examined with a range of interaction elements. Certain interactions failed in some systems and users did not necessarily prefer the highest-end systems. Overlapping interaction elements were also discovered, which may direct future work in later interaction test suites.

Lisa Rebenitsch, Delaina Engle, Gabrielle Strouse, Isaac Egermier, Manasi Paste, Morgan Vagts
Objective Quantification of Circular Vection in Immersive Environments

Human interaction in computer environments should be conducive, with minimal cybersickness. One relevant phenomenon is vection, in which subjects undergo an illusory perception of self-motion in response to a visual stimulus. The present research quantifies this perceptual parameter. An optokinetic drum (OKD) is used to induce circular vection in virtual reality (VR), and the inertial measurement unit (IMU) in a head-mounted display (HMD) is used to track head rotation about the x, y, and z axes. The study quantifies vection in terms of the vection index (VI), which depends on the ratio of the angular velocity of the HMD to the angular velocity of the OKD. There is a significant difference from the resting state to higher angular speeds in the clockwise (CW) as well as the anticlockwise (ACW) direction (p < 0.05). Also, circular vection along the y-axis imparts motion along the x and z axes. The magnitude of vection increases with speed in the CW and ACW directions up to the optimum speed of the OKD. Vection is absent at very low and very high OKD speeds. Most participants experience self-motion in an angular velocity range of 30–97°/s in both CW and ACW directions. The vection in the ACW direction compensates for the vection in the CW direction about the x, y, and z axes.

Debadutta Subudhi, P. Balaji, Manivannan Muniyandi
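Since the abstract above defines the vection index (VI) through the ratio of HMD angular velocity to OKD angular velocity, the core computation can be sketched directly. This is a minimal reading of that definition; the zero-speed handling and use of absolute values are assumptions, not the paper's exact formula:

```python
def vection_index(omega_hmd: float, omega_okd: float) -> float:
    """Vection index as the ratio of head (HMD) angular velocity to
    drum (OKD) angular velocity, both in deg/s. Sign encodes CW vs ACW
    rotation, so magnitudes are compared. Returns 0.0 at rest to avoid
    division by zero."""
    if omega_okd == 0:
        return 0.0
    return abs(omega_hmd) / abs(omega_okd)
```

In practice `omega_hmd` would come from the HMD's IMU gyroscope stream and `omega_okd` from the configured drum speed of the VR stimulus.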
Are You There? A Study on Measuring Presence in Immersive Virtual Reality

With the recent development of virtual reality (VR) technologies, we can create virtual environments that exceed the constraints of the real environment. A concept called “presence” is one of the most important elements of a VR experience. To provide more engaging VR experiences, it is important to measure how much presence users feel during them. However, when presence is measured by a questionnaire inside or outside the VR, the subjects' sense of presence in the virtual environment is disrupted by the transition from the VR experience to the questionnaire. Therefore, in this study, we propose a method to integrate questionnaires into VR experiences. We conduct two comparative experiments between the proposed method and the existing method to validate its effectiveness. From the results of the two experiments, although we cannot determine the optimal design for the proposed method and cannot claim that it measures presence more accurately, we confirmed its partial effectiveness.

Reiya Tamaki, Tatsuo Nakajima

Gesture-Based, Haptic and Multimodal Interaction in VAMR

Frontmatter
Tabletop 3D Digital Map Interaction with Virtual Reality Handheld Controllers

Immersive technologies, such as virtual reality, enable users to view and evaluate three-dimensional content, e.g., geographic data. Besides navigating this data at a life-size scale, a tabletop display offers a better overview of a larger area. This paper presents six different techniques to interact with immersive digital map table displays, i.e., panning, rotating, and zooming the map and indicating a position. The implemented interaction methods were evaluated in a user study with 12 participants. The results show that using a virtual laser pointer in combination with the buttons and joystick on a controller yields the best results regarding interaction time, workload, and user preference. The user study also shows that interaction methods should be customizable so that users can adapt them to their abilities. However, the proposed virtual laser pointer technique achieves a good balance between physical and cognitive effort and yields good results for users with varying experience levels.

Adrian H. Hoppe, Florian van de Camp, Rainer Stiefelhagen
Hand Gesture Recognition for User Interaction in Augmented Reality (AR) Experience

Augmented Reality (AR) has gained a lot of attention in the recent past. Arguably, the most important factor that can make AR a household gadget is its interaction with the user. This leads to two possible interaction methodologies: (i) using an extra device for interaction; (ii) using human hands for interaction. The former is probably the easier method, but it may increase the cost of the AR device, limiting its target users. Therefore, hand gestures are a feasible and efficient mode of interaction. However, for accurate and pleasant interaction, the AR device should be capable of hand gesture understanding. In this direction, we propose a hand gesture classification method based on Convolutional Neural Networks (CNNs) that takes advantage of pre-trained network weights for faster and more efficient training, which also helps improve the quality of gesture classification. Moreover, the proposed approach takes advantage of hand detection for background elimination and efficient gesture recognition. The proposed approach is evaluated on the hand gesture classification task for three datasets that differ in terms of the number of data samples, number of gestures, and data quality. The obtained results show that our method outperforms state-of-the-art methods in most of the experimental cases.

Aasim Khurshid, Ricardo Grunitzki, Roberto Giordano Estrada Leyva, Fabiano Marinho, Bruno Matthaus Maia Souto Orlando
Natural 3D Object Manipulation for Interactive Laparoscopic Augmented Reality Registration

Due to the growing focus on minimally invasive surgery, there is increasing interest in intraoperative software support. For example, augmented reality can be used to provide additional information. Accurate registration is required for effective support. In this work, we present a manual registration method that aims at mimicking natural manipulation of 3D objects using tracked surgical instruments. This method is compared to a point-based registration method in a simulated laparoscopic environment. Both registration methods serve as an initial alignment step prior to surface-based registration refinement. For the evaluation, we conducted a user study with 12 participants. The registration methods were compared in terms of registration accuracy, registration duration, and subjective usability feedback. No significant differences could be found with respect to the previously mentioned criteria between the manual and the point-based registration methods. Thus, the manual registration did not outperform the reference method. However, we found that our method offers qualitative advantages, which may make it more suitable for some application scenarios. Furthermore, we identified possible approaches for improvement, which should be investigated in the future to strengthen possible advantages of our registration method.

Tonia Mielke, Fabian Joeres, Christian Hansen
Generating Hand Posture and Motion Dataset for Hand Pose Estimation in Egocentric View

Hand interaction is one of the main input modalities for augmented reality glasses. Vision-based approaches using deep learning have been applied to hand tracking and have shown good results. To train a deep neural network, a large dataset of hand information is required. However, obtaining real hand data is laborious due to the large number of annotations required and the lack of diversity in skin tones, lighting conditions, and backgrounds. In this paper, we propose a method to generate a synthetic hand dataset that includes diverse human and environmental parameters. By applying the constraints of a human hand, we can obtain realistic hand poses for the dataset. We also generate dynamic hand animations which can be used for hand gesture recognition.

Hwangpil Park, Deokho Kim, Sunghoon Yim, Taehyuk Kwon, Jiwon Jeong, Wonwoo Lee, Jaewoong Lee, Byeongwook Yoo, Gunill Lee
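Constraint-based pose sampling of the kind the abstract above describes can be sketched as drawing each joint angle uniformly inside an anatomical limit, so every generated pose stays plausible. The joint names and limit values below are illustrative placeholders, not the paper's actual constraints:

```python
import random

# Illustrative flexion limits (degrees) for one finger's three joints;
# real anatomical constraints would come from biomechanics literature.
JOINT_LIMITS = {"mcp": (0.0, 90.0), "pip": (0.0, 110.0), "dip": (0.0, 80.0)}

def sample_finger_pose(rng=random):
    """Sample one finger pose with each joint angle drawn uniformly
    inside its limit, keeping the pose anatomically plausible."""
    return {joint: rng.uniform(lo, hi) for joint, (lo, hi) in JOINT_LIMITS.items()}

def sample_hand_dataset(n, seed=0):
    """Generate n synthetic hand poses (5 fingers each), seeded for
    reproducibility. A full pipeline would also randomize skin tone,
    lighting, and background before rendering."""
    rng = random.Random(seed)
    return [[sample_finger_pose(rng) for _ in range(5)] for _ in range(n)]
```

Interpolating between sampled poses over time would give the dynamic hand animations the abstract mentions for gesture recognition.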
Real-Time Bimanual Interaction Across Virtual Workspaces

This work investigates bimanual interaction modalities for interaction between a virtual personal workspace and a virtual shared workspace in virtual reality (VR). In VR social platforms, personal and shared workspaces are commonly used to support virtual presentations, remote collaboration, and data sharing, and they demand reliable, intuitive, low-fatigue freehand gestures for prolonged use during a virtual meeting. The interaction modalities in this work are asymmetric hand gestures created from bimanual groupings of freehand gestures, including pointing, holding, and grabbing, which are known to be elemental and essential for interaction in VR. The design and implementation of the bimanual gestures follow clear gestural metaphors to create connection and empathy with the hand motions the user performs. We conducted a user study to understand the advantages and drawbacks of three types of bimanual gestures as well as their suitability for cross-workspace interaction in VR, which we hope will be valuable in assisting the design of future VR social platforms.

Chao Peng, Yangzi Dong, Lizhou Cao
Typing in Mid Air: Assessing One- and Two-Handed Text Input Methods of the Microsoft HoloLens 2

The Microsoft HoloLens 2 is a mixed reality (MR) headset that overlays virtual elements atop a user’s view of their physical environment. To input text, the device can track hands and fingers, allowing for direct interaction with a virtual keyboard. This is an improvement over the HoloLens 1 device, which required head tracking and single-finger air-tapping input. The present study evaluated the performance (speed and accuracy), perceived usability, mental workload, and physical exertion of one-handed and two-handed text entry. A sample of 21 participants (12 male, 9 female) aged 18–32 years typed standardized phrases presented in random order. Typing with two hands was faster and preferred over one-handed input; however, it was also less accurate. Exertion in some body parts was also higher in the two-handed condition. Findings suggest that while two-handed text input was better than one-handed input, there is room for improvement to approximate typing on a physical or mobile device keyboard.

Emily Rickel, Kelly Harris, Erika Mandile, Anthony Pagliari, Jessyca L. Derby, Barbara S. Chaparro
Learning Effect of Lay People in Gesture-Based Locomotion in Virtual Reality

Locomotion in virtual reality (VR) is an important part of VR applications. Many researchers have enriched the community with different techniques that enable locomotion in VR. Some of the most promising methods are gesture-based and do not require additional handheld hardware. Recent work has focused mostly on user preference and the performance of the different locomotion techniques, ignoring the learning effect users go through while exploring new methods. In this work, we investigate whether and how quickly users can adapt to a hand gesture-based locomotion system in VR. Four different locomotion techniques are implemented and tested by participants. The goal of this paper is twofold: first, to encourage researchers to consider the learning effect in their studies; second, to provide insight into the learning effect of users in gesture-based systems.

Alexander Schäfer, Gerd Reis, Didier Stricker
Flick Typing: A New VR Text Input System Based on Space Gestures

Text entry is a significant topic in human-computer interaction in virtual reality. Most common text entry methods require users to interact with a 2D QWERTY keyboard in 3D space using a ray emitted from their hands or controllers. This requires the user’s head and hands to be in a specific position and orientation for text entry. We propose a new text entry method, which we call Flick Typing, that is agnostic to user posture and keyboard position. Flick Typing utilizes the user’s knowledge of the QWERTY keyboard layout but does not explicitly visualize the keys. To type with Flick Typing, users move their controller to where they think the target QWERTY key is with respect to the controller’s starting position and orientation, often with a simple flick of the wrist. We provide a manually defined gesture-key mapping of the QWERTY keys in 3D space. Experiments containing both quantitative and qualitative measurements are conducted and discussed in this paper, showing the potential of our method.

Tian Yang, Powen Yao, Michael Zyda

Social, Emotional, Psychological and Persuasive Aspects in VAMR

Frontmatter
A Design Framework for Social Virtual Reality Experiences: Exploring Social and Cultural Dimensions for Meaningful and Impactful VR

Virtual reality has often been described as providing a means to “walk a mile in another’s shoes,” offering powerful interventions for experiential learning. Such experiences can, for example, provide a safe and controlled means of engaging with difficult, unsafe, or emotionally charged situations. Crucial to this experience is the sense of presence, informed by the place and plausibility of the simulation design. However, past studies often create such experiences through the lens of the developer, and they may therefore lack the authenticity and social nuance of the situations they attempt to model. Health care students, for example, will often face difficult conversations with seriously ill patients during their placement time while studying. Currently, there is an under-preparedness associated with placement shock, when students’ previous assumptions and the reality of patient care do not match. VR would seem well suited to preparing students for this reality, but only if the simulations capture its complexity and social nuance. Since there is currently little consideration of the social and cultural dimensions of developing social VR experiences, this paper proposes a framework for designing such socially oriented VR applications. We case-study the framework by designing a social VR application that prepares health care students for placement.

Vanessa Cui, Thomas Hughes-Roberts, Nick White
Designing Virtual Environments for Smoking Cessation: A Preliminary Investigation

The recent COVID-19 pandemic places smokers at a high risk of death as a result of the combination of smoking and COVID-19. This signals a need to address this problem among dual users (cigarette and vape users) and provide them with successful tools to quit tobacco. This pilot project aims to test a novel tool, a combined virtual reality and motivational interviewing approach, to assist dual users in quitting tobacco products. The investigators wanted to pilot test the equipment and scenarios for user-friendliness and interface quality. For the first phase of the pilot, we developed four virtual reality scenarios that contain different triggers for smoking, such as noise, stress, and cigarettes. We used the Oculus Quest 2 for the hardware because the equipment does not require towers or connections to computers, operates over Wi-Fi, and is mobile. To develop the software, we used the Unity3D game engine. A total of 21 participants tested the equipment and scenarios. The participants ranged in age from 18 to 71, with varying gaming and virtual reality experience. The majority of the participants felt immersed in the virtual reality environment. Some participants had challenges with the equipment and the software and provided valuable feedback to enhance the scenarios. The virtual reality environment promises to be a novel tool to assist tobacco users, mainly dual users, in quitting tobacco.

Elham Ebrahimi, Dana Hajj, Matthew Jarrett, Anastasiya Ferrell, Linda Haddad, Marc Chelala
Social-Emotional Competence for the Greater Good: Exploring the Use of Serious Game, Virtual Reality and Artificial Intelligence to Elicit Prosocial Behaviors and Strengthen Cognitive Abilities of Youth, Adolescents and Educators – A Systematic Review

This study sought to understand the learning benefits, impacts, and opportunities involved with the use of serious games (SG), extended reality (XR), Artificial Intelligence (AI) and other advanced technologies in the classroom and other educational settings. We conducted a systematic literature review focusing on the potential benefits of utilizing those technologies to build and strengthen prosocial behaviors and cognitive abilities of students and other learners. Results of the study reveal that those modern technologies can be used to improve students’ academic experiences and interactions with their peers while in school. In our rapidly changing global knowledge society, it is clear there is a need to develop and build the capacity of students to work effectively and cooperatively with all people including those from diverse socio-cultural and educational backgrounds. This paper highlights ways in which advanced technologies can support ongoing efforts to enhance students’ knowledge while building more inclusive and emotionally supportive learning environments in school settings.

Patrick Guilbaud, Carrie Sanders, Michael J. Hirsch, T. Christa Guilbaud
Body-Related Attentional Bias in Anorexia Nervosa and Body Dissatisfaction in Females: An Eye-Tracking and Virtual Reality New Paradigm

According to recent research, eating disorder (ED) patients tend to check unattractive body parts. However, few studies have examined this attentional bias (AB) phenomenon by combining virtual reality (VR) with eye tracking (ET). This study aims to examine whether anorexia nervosa (AN) patients have a longer fixation time and a greater number of fixations on weight-related body areas compared to healthy participants with high body dissatisfaction (HBD) and low body dissatisfaction (LBD). It also examines whether the HBD group has more fixations and spends more time looking at weight-related areas than those with LBD. Forty-three college women (18 with HBD and 25 with LBD) and 23 AN patients were immersed in a virtual environment and then embodied in a virtual avatar with their real body measurements and body mass index (BMI). Eye movement data were tracked using an ET device incorporated in the VR headset (FOVE). The number of fixations and the complete fixation time were registered for the weight-related areas of interest (W-AOIs) and non-weight-related areas of interest (NW-AOIs). The results showed that AN patients had a longer fixation time and a greater number of fixations on W-AOIs than both the HBD and LBD groups, which showed no statistical differences in visual selective attention to NW-AOIs and W-AOIs.

José Gutierrez-Maldonado, Mar Clua i Sánchez, Bruno Porras-Garcia, Marta Ferrer-Garcia, Eduardo Serrano, Marta Carulla, Franck Meschberger-Annweiler, Mariarca Ascione
Enhancing Emotional Experience by Building Emotional Virtual Characters in VR Exhibitions

Virtual digital exhibitions attract public attention because they can provide an immersive aesthetic experience augmented by virtual reality (VR) technologies. However, since each visitor is placed in an isolated virtual environment while using VR devices, exhibitions lose the immediate emotional perception between visitors. To enable visitors to perceive the communicative value of digital exhibitions, increasing the user’s feeling of immersion and engagement in the virtual environment is crucial. Existing experience designs for VR exhibitions put more effort into imitating the physical exhibition space than into displaying visitors’ instant emotional facial expressions. Thus, a supporting system design for enhancing visitors’ real-time emotional communication in the VR exhibition experience is needed. We propose an emotional color label for visitors in exhibitions and an emotion recognition and display model in the virtual environment to alleviate this issue. Our research has the potential to enhance the user’s emotional experience and engagement in VR exhibitions and other forms of virtual digital exhibition.

Yangjing Huang, Han Han
The Island of Play: Reflections on How to Design Multiuser VR to Promote Social Interaction

This article consists of reflections and considerations concerning a virtual reality design case: The Island of Play, a multiuser virtual reality prototype aimed at maintaining and encouraging social relationships between long-term hospitalized children and their friends. The motivation behind this design is the dire situation long-term hospitalized children often find themselves in. They experience isolation and marginalization due to the constraints of hospitalization. A consequence of this is limited access to social interaction as well as reduced opportunity to play with friends from home or school. The Island of Play was essentially designed to set up a virtual meeting place to stimulate socialization through play. This article sits at the intersection between game design theory and actual design impressions, with a particular focus on how real-world design interweaves with theoretical considerations. The argument that follows is structured over five sections: 1) First, we contemplate the design of the player’s character. 2) Second, we scrutinize the relationship between game objects and playful interactions. 3) Then we move on to consider the design of social experiences, 4) followed by the fourth section, where we inspect the value of the magic circle as a design metric. 5) Finally, in the fifth section, we reflect on the importance of weighing the player’s sensation of purpose and skill against interacting with the application. Overall, this design case pivots around the design issues and considerations involved in developing play and game scenarios in a multiuser VR application aimed at bolstering the social fabric between long-term hospitalized children and their friends.

Lasse Juel Larsen, Troels Deibjerg Kristensen, Bo Kampmann Walther, Gunver Majgaard
Development of an Invisible Human Experience System Using Diminished Reality

In this study, we have been developing a diminished reality system that allows users to experience becoming an invisible human in order to reduce self-awareness and improve their self-esteem. The system employs a camera to capture the user’s view and replaces his/her body images with background images in real time. The processed images are shown on a head-mounted display to realize the immersive experience of becoming invisible. The image inpainting is performed by a deep learning network. We also created training and validation datasets and compared three networks designed for image inpainting in this study. Moreover, we have made a hypothetical model of how psychological states and self-awareness change when experiencing the developed system. In future work, we plan to conduct an experiment to confirm whether use of the system improves self-esteem. We will also investigate the process of changing psychological states based on the hypothetical model through questionnaire surveys.

Maho Sasaki, Hirotake Ishii, Kimi Ueda, Hiroshi Shimoda
Towards a Social VR-based Exergame for Elderly Users: An Exploratory Study of Acceptance, Experiences and Design Principles

For many elderly individuals, the aging experience is associated with a lack of social interaction and physical exercise that may negatively affect their health. To address these issues, researchers have designed experiences based on immersive virtual reality (VR) and 2D screen-based exergames. However, very few have studied the use of social VR for the elderly, in which users interact remotely through avatars in a single, shared, immersive virtual environment using a head-mounted display. Additionally, there is limited research on the experience of the elderly in performing interactive activities, especially game-based activities, in social VR. We conducted an exploratory study with 10 elderly people who had never experienced VR before to evaluate an avatar-mediated, interaction-based social VR game prototype. Based on a mixed-methods approach, our study presents new insights into the usability, acceptance, and gameplay experience of the elderly in a social VR game. Moreover, our study reflects upon design principles that should be considered when developing social VR games for the elderly to ensure an engaging and safe user experience. Our results suggest that such games have potential among this user group. Direct hand manipulation, based on hand tracking for interaction with 3D objects, presented an engaging and intuitive interaction paradigm, and the social game activity in VR was found to be enjoyable.

Syed Hammad Hussain Shah, Ibrahim A. Hameed, Anniken Susanne T. Karlsen, Mads Solberg
Relative Research on Psychological Character and Plot Design Preference for Audiences of VR Movies

VR movies have been a new trend in recent years, and interactive VR films are growing popular among young audiences. The high-tech environment, the sense of immersion and participation, and the decisions made at fork points in the story are all so appealing that the audience feel they are part of the movie, and their decisions may change the flow and even the ending of the plot. Outstanding interactive VR movies can offer up to a hundred choices and more than 10 different endings, which means a heavy workload and a huge budget for the production team, including the screenwriters. This paper reviews the development of VR movies and relevant theories on personality testing and screenwriting to find the barriers that hinder VR movie script and plot development. The purpose is to investigate the audience’s intuitive feelings and expectations when watching VR movies, as well as their understanding and acceptance of the story, and to explore the relationship between personality and decision making at the turning points of each fork of the pitchfork bifurcation plot structure. The authors hope to find an efficient way to lead audiences to an ending that seems to be chosen by themselves rather than by the writers, even though it is within the designer’s expectations.

Lingxuan Zhang, Feng Liu
Backmatter
Metadata
Title
Virtual, Augmented and Mixed Reality: Design and Development
Edited by
Jessie Y. C. Chen
Gino Fragomeni
Copyright Year
2022
Electronic ISBN
978-3-031-05939-1
Print ISBN
978-3-031-05938-4
DOI
https://doi.org/10.1007/978-3-031-05939-1