2020 | Book

Virtual Reality and Augmented Reality

17th EuroVR International Conference, EuroVR 2020, Valencia, Spain, November 25–27, 2020, Proceedings

Editors: Dr. Patrick Bourdot, Dr. Victoria Interrante, Regis Kopper, Anne-Hélène Olivier, Prof. Hideo Saito, Gabriel Zachmann

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 17th International Conference on Virtual Reality and Augmented Reality, EuroVR 2020, held in Valencia, Spain, in November 2020.

The 12 full papers were carefully reviewed and selected from 35 submissions. The papers are organized in topical sections named: Perception, Cognition and Behaviour; Training, Teaching and Learning; Tracking and Rendering; and Scientific Posters.

Table of Contents

Frontmatter

Perception, Cognition and Behaviour

Frontmatter
Effect of Social Settings on Proxemics During Social Interactions in Real and Virtual Conditions
Abstract
Virtual Reality (VR) offers unlimited possibilities to create virtual populated environments in which a user can be immersed and experience social interactions with virtual humans. A better understanding of these interactions is required to improve their realism as well as the user’s experience. Using an approach based on Interactionist Sociology, we wondered whether the social setting within which individuals interact has an impact on proxemics norms in real conditions and whether these norms apply in VR. We conducted an experiment in real and virtual conditions where individuals experienced a transgression of proxemics norms at a train station and in a sports fan zone. Our results suggest that proxemics norms vary according to the subjective relationship of the individual to the social setting. This variation would translate directly into a modulation of bodily sensitivity to the proximity of the body of others. While we were able to show that social norms still exist in VR, our results did not show a main effect of the social setting on participants’ sensitivity to the transgression of proxemics norms. We discuss our results in the frame of the cross-fertilization between Sociology and VR.
Tristan Duverné, Théo Rougnant, François Le Yondre, Florian Berton, Julien Bruneau, Katja Zibrek, Julien Pettré, Ludovic Hoyet, Anne-Hélène Olivier
Influence of Dynamic Field of View Restrictions on Rotation Gain Perception in Virtual Environments
Abstract
The perception of rotation gain, defined as a modification of the virtual rotation with respect to the real rotation, has been widely studied to determine detection thresholds and widely applied to redirected navigation techniques. In contrast, Field of View (FoV) restrictions have been explored in virtual reality as a mitigation strategy for motion sickness, although they can alter the user’s perception and navigation performance in virtual environments. This paper explores whether dynamic FoV manipulations, also referred to as vignetting, could alter the perception of rotation gains during virtual rotations in virtual environments (VEs). We conducted a study to estimate and compare perceptual thresholds of rotation gains while varying the vignetting type (no vignetting, horizontal and global vignetting) and the vignetting effect (luminance or blur). Twenty-four participants performed 60° or 90° virtual rotations in a virtual forest, with different rotation gains applied. Participants had to judge whether the virtual rotation was greater than the physical one. Results showed that the point of subjective equality differed across the vignetting types, but not across the vignetting effects or the turn angles. Subjective questionnaires indicated that vignetting seems less comfortable than the baseline condition for performing the task. We discuss the applications of such results to improve the design of vignetting for redirection techniques.
Hugo Brument, Maud Marchal, Anne-Hélène Olivier, Ferran Argelaguet
User Experience in Collaborative Extended Reality: Overview Study
Abstract
Recent trends in Extended Reality technologies, including Virtual Reality and Mixed Reality, indicate that the future infrastructure will be distributed and collaborative, where end-users as well as experts meet, communicate, learn, interact with each other, and coordinate their activities using a globally shared network and mediated environments. The integration of new display devices has largely changed how users interact with the system and how those activities, in turn, change their perception and experience. Although a considerable amount of research has already been done in the fields of computer-supported collaborative work, human-computer interaction, extended reality, cognitive psychology, perception, and social sciences, there is still no in-depth review to determine the current state of research on multiple-user-experience-centred design at the intersection of these domains. This paper aims to present an overview of research work on coexperience and analyses important aspects of human factors to be considered to enhance collaboration and user interaction in collaborative extended reality platforms, including: (i) presence-related factors, (ii) group dynamics and collaboration patterns, (iii) avatars and embodied agents, (iv) nonverbal communication, (v) group size, and (vi) awareness of the physical and virtual world. Finally, this paper identifies research gaps and suggests key directions for future research considerations in this multidisciplinary research domain.
Huyen Nguyen, Tomasz Bednarz
Shopping with Virtual Hands
Abstract
Retailers can use virtual reality as a new touchpoint for their customers: within an existing channel or as a new sales channel. Thus, it is crucial to understand the differences and similarities between the physical and the virtual shopping environment. Shopping simulations make it possible to test, observe, and collect data in a controlled, low-cost, and fast way compared to field experiments. However, past studies might have provided biased results due to the characteristics of the samples used. This study analyzes how consumers behave in two virtual shopping tasks. The exploratory, experimental research uses an immersive VR shopping environment and a sample of participants balanced across demographic characteristics and previous experience with VR. Moreover, it uses both self-reported and implicit metrics gathered through an eye- and hand-tracking system. The findings demonstrate the value of having those two sources of metrics to better understand consumer shopping behavior in a virtual reality setting.
Aline Simonetti, Enrique Bigné, Shobhit Kakaria
Psychophysical Effects of Experiencing Burning Hands in Augmented Reality
Abstract
Can interactive Augmented Reality (AR) experiences induce involuntary sensations through additional modalities? In this paper we report on our AR experience that enables users to see and hear their own hands burning while looking through a Video See-Through Head-Mounted Display (VST-HMD). In an exploratory study (n = 12, within-subject design), we investigated whether this would lead to an involuntary heat sensation based on visual and auditory stimuli. A think-aloud protocol and an AR presence questionnaire indicated that six out of twelve participants experienced an involuntary heat sensation on their hands. Despite no significant change of perceived anxiety, we found a significant increase in skin conductance during the experiment for all participants; participants who reported an involuntary heat sensation had higher skin conductance responses than participants who did not. Our results support our initial hypothesis as we found evidence of cross-modal audiovisual-to-thermal transfers. This is an example of virtual synaesthesia, a sensation occurring when a single-modal (or multi-modal) stimulus sets off a simultaneous sensation in other senses—involuntarily and automatically. We believe that our results contribute to the scientific understanding of AR-induced synaesthesia as well as inform practical applications.
Daniel Eckhoff, Alvaro Cassinelli, Tuo Liu, Christian Sandor

Training, Teaching and Learning

Frontmatter
Integrating Virtual Reality in a Lab Based Learning Environment
Abstract
In engineering education, practical laboratory experience is essential, and universities typically own expensive laboratory facilities that are deeply embedded in their curricula. Based on a comprehensive requirements analysis in a design-based research approach, we have created a virtual clone of an existing RFID (radio-frequency identification) laboratory with the aim of integrating it into an existing teaching and learning scenario. The resulting application prepares students for real experiments by guiding them through the process assisted by an avatar. We have had our application tested in a qualitative evaluation by students as well as experts, and we assess which design decisions have a positive impact on the learning experience. Our results suggest that the appearance of the environment, the avatar, and the interactions of our virtual reality application have a strong motivational character, but a closer content-wise link between the virtual and real experiments is crucial for students to perceive the application as part of the learning environment.
Nils Höhner, Mark Oliver Mints, Julien Rodewald, Anke Pfeiffer, Kevin Kutzner, Martin Burghardt, David Schepkowski, Peter Ferdinand
A Virtual Reality Surgical Training System for Office Hysteroscopy with Haptic Feedback: A Feasibility Study
Abstract
Hysteroscopy is a widely used gynaecological procedure to evaluate and treat cervical and intra-uterine pathology. In the last few decades, technical refinements in optics technology and surgical accessories, together with the reduction of the outer diameter of the instrument, have made it possible to perform many hysteroscopic procedures, including some operative procedures, in the office setting and without any anesthesia. Mini-hysteroscopic procedures in the office setting are associated with less pain, a lower complication rate, and faster recovery compared to hysteroscopic procedures in day surgery under general anesthesia.
The main challenge for the clinician in performing office hysteroscopy is to pass the narrow cervical canal. Inaccurate motion or excessively applied force can lead to a cervical or uterine perforation. This study introduces a novel VR training platform for office hysteroscopy. The presented system was tested in the laboratory setting to prove the feasibility of using VR simulation for office hysteroscopy training. Conducted experiments demonstrated the potential of the system to transfer the essential skills and confirmed the set of proposed metrics for effective assessment.
Vladimir Poliakov, Kenan Niu, Bart Paul De Vree, Dzmitry Tsetserukou, Emmanuel Vander Poorten
Semantic Modeling of Virtual Reality Training Scenarios
Abstract
Virtual reality can be an effective tool for professional training, especially in the case of complex scenarios which, when performed in reality, may pose a high risk for the trainee. However, efficient use of VR in practical everyday training requires efficient and easy-to-use methods of designing complex interactive scenarios. In this paper, we propose a new method of creating virtual reality training scenarios with the use of knowledge representation enabled by semantic web technologies. We have verified the method by implementing and demonstrating an easy-to-use desktop application for designing VR scenarios by domain experts.
Krzysztof Walczak, Jakub Flotyński, Dominik Strugała, Sergiusz Strykowski, Paweł Sobociński, Adam Gałązkiewicz, Filip Górski, Paweł Buń, Przemysław Zawadzki, Maciej Wielgus, Rafał Wojciechowski
Exploiting Extended Reality Technologies for Educational Microscopy
Abstract
Exploiting extended reality technologies in laboratory training enhances both teaching and learning experiences. It complements the existing traditional learning/teaching methods related to science, technology, engineering, arts and mathematics. In this work, we use extended reality technologies to create an interactive learning environment with dynamic educational content. The proposed learning environment can be used by students at all levels of education to facilitate laboratory-based understanding of scientific concepts. We introduce a low-cost and user-friendly multi-platform system for mobile devices which, when coupled with edutainment dynamics, simulation, extended reality, and natural hand-movement sensing technologies such as hand gestures with virtual triggers, is expected to engage users and prepare them efficiently for the actual on-site laboratory experiments. The proposed system is evaluated by a group of experts and the results are analyzed in detail, indicating the positive attitude of the evaluators towards the adoption of the proposed system in laboratory educational procedures. We conclude the paper by highlighting the capabilities of extended reality and dynamic content management in educational microscopy procedures.
Helena G. Theodoropoulou, Chairi Kiourt, Aris S. Lalos, Anestis Koutsoudis, Evgenia Paxinou, Dimitris Kalles, George Pavlidis

Tracking and Rendering

Frontmatter
Improved CNN-Based Marker Labeling for Optical Hand Tracking
Abstract
Hand tracking is essential in many applications, ranging from the creation of CGI movies to medical applications and even real-time, natural, physically-based grasping in VR. Optical marker-based tracking is often the method of choice because of its high accuracy, support for large workspaces, and good performance, and because it requires no wiring of the user. However, the tracking algorithms may fail for hand poses in which some of the markers are occluded. These cases require a subsequent reassignment of labels to reappearing markers. Currently, convolutional neural networks (CNNs) show promising results for this re-labeling because they are relatively stable and real-time capable. In this paper, we present several methods to improve the accuracy of label predictions using CNNs. The main idea is to improve the input to the CNNs, which is derived from the output of the optical tracking system. To do so, we propose a method based on principal component analysis, a projection method perpendicular to the palm, and a multi-image approach. Our results show that our methods provide better label predictions than current state-of-the-art algorithms, and they can even be extended to other tracking applications.
Janis Rosskamp, Rene Weller, Thorsten Kluss, Jaime L. Maldonado C., Gabriel Zachmann
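The PCA-based input normalization named in the abstract can be illustrated with a minimal sketch (this is not the authors' code; the function name and sample marker positions are hypothetical): centering a captured marker cloud and rotating it into its principal axes removes global hand position and orientation, so reappearing markers present more consistently to the labeling network.

```python
# Illustrative sketch (hypothetical, not the paper's implementation):
# canonicalize a (N, 3) optical-marker cloud with PCA before CNN labeling.
import numpy as np

def canonicalize_markers(points: np.ndarray) -> np.ndarray:
    """Center the marker cloud and rotate it into its principal axes,
    making the network input invariant to global position/orientation."""
    centered = points - points.mean(axis=0)
    # Right singular vectors of the centered cloud are its principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T

# Hypothetical marker positions (meters) for four hand markers.
markers = np.array([[0.10, 0.20, 0.05],
                    [0.12, 0.25, 0.04],
                    [0.30, 0.10, 0.02],
                    [0.28, 0.15, 0.01]])
canonical = canonicalize_markers(markers)
# The result is zero-mean and axis-aligned with its principal directions.
```

The same canonical frame would be applied to every captured frame before it is rasterized or fed to the CNN, which is one plausible reading of "improving the input to the CNNs".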
Volumetric Medical Data Visualization for Collaborative VR Environments
Abstract
In clinical practice, medical imaging technologies like computed tomography have become an important and routinely used technique for diagnosis. Advanced 3D visualization techniques for this data, e.g. using volume rendering, provide doctors with a better spatial understanding when reviewing complex anatomy. Sophisticated programs for the visualization of medical imaging data already exist; however, they are usually limited to exactly this topic and can hardly be extended with new functionality. For instance, multi-user support, especially when considering immersive VR interfaces like tracked HMDs and natural user interfaces, can provide doctors easier, more immersive access to the information and support collaborative discussions with remote colleagues. We present an easy-to-use and expandable system for volumetric medical image visualization with support for multi-user VR interactions. The main idea is to combine a state-of-the-art open-source game engine, the Unreal Engine 4, with a new volume renderer. The underlying game engine basis guarantees extensibility and allows for easy adaptation of our system to new hardware and software developments. In our example application, remote users can meet in a shared virtual environment and view, manipulate, and discuss the volume-rendered data in real-time. Our new volume renderer for the Unreal Engine is capable of real-time performance as well as high-quality visualization.
Roland Fischer, Kai-Ching Chang, René Weller, Gabriel Zachmann
Viewing-Direction Dependent Appearance Manipulation Based on Light-Field Feedback
Abstract
We propose a novel light-field feedback system that achieves viewing-direction dependent appearance manipulation. Our method employs a reflection model using multiple projectors and cameras. It produces stable light-field feedback in this multiple-input and multiple-output system by decoupling it with a pseudo-inverse. Through experiments, we confirmed that our method successfully enabled viewing-direction dependent appearance manipulation on mirror-reflective and retro-reflective surfaces. Additionally, we verified that our method achieves appearance manipulation that is robust against disturbances such as ambient light changes.
Toshiyuki Amano, Hiroki Yoshioka
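The pseudo-inverse decoupling mentioned in the abstract can be sketched in a few lines (an illustrative toy model, not the authors' system; the mixing matrix, target values, and gain are invented for the example): if each camera observation is a linear mix of the projector outputs, multiplying the observation error by the pseudo-inverse of the mixing matrix turns the coupled MIMO loop into independent per-direction feedback loops.

```python
# Toy sketch of pseudo-inverse decoupling in a projector-camera feedback
# loop (hypothetical values; not the paper's implementation).
import numpy as np

# Assumed mixing matrix: each camera observation is a linear combination
# of the two projector outputs.
A = np.array([[0.8, 0.3],
              [0.2, 0.9]])
target = np.array([0.5, 0.7])   # desired appearance per viewing direction

A_pinv = np.linalg.pinv(A)
p = np.zeros(2)                  # projector outputs, start dark
for _ in range(50):
    observed = A @ p             # simulated camera measurement
    # Decoupled update: map the observation error back to per-projector
    # corrections; the gain < 1 keeps the feedback loop stable.
    p += 0.5 * A_pinv @ (target - observed)

# The loop converges so each viewing direction reaches its own target.
```

With the pseudo-inverse in the loop, the error in each viewing direction shrinks by the same factor per iteration regardless of the cross-coupling in A, which is the stability property the abstract alludes to.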

Scientific Posters

Frontmatter
Holistic Quality Assessment of Mediated Immersive Multisensory Social Communication
Abstract
Communication through modern immersive systems that afford the representation of a wide range of multisensory (visual, auditory, haptic, olfactory) social and ambient (environmental) affective cues can provide compelling experiences that approach face-to-face communication. The quality of a mediated social communication experience (QoE) can be defined as the degree to which it matches its real-life counterpart and is typically assessed through questionnaires. However, available questionnaires are typically extensive, targeted at specific systems, and do not address all relevant aspects of social presence. Here we propose a general holistic social presence QoE questionnaire (HSPQ) that uses a single item for each of the relevant processing levels in the human brain: sensory, emotional, cognitive, behavioral, and reasoning. The HSPQ measures social presence through the senses of spatial presence (= telepresence + agency) in the mediated environment and of social interaction (= interaction + engagement) with the other persons therein. Initial validation studies confirm the content and face validity of the HSPQ. In future studies we will test the stability, sensitivity, and convergent validity of the HSPQ.
Alexander Toet, Tina Mioch, Simon N. B. Gunkel, Camille Sallaberry, Jan B. F. van Erp, Omar Niamut
Conversation with Your Future Self About Nicotine Dependence
Abstract
Nicotine dependence continues to be one of the major contributors to the global disease burden, despite the wide variety of assessment and treatment techniques developed. Although VR has been used as an instrument to improve cue-exposure therapy techniques, the full extent of its power in the treatment of addictions has not been fully explored. In this paper, we utilize body-swapping, a VR specific paradigm, in order to facilitate a dialogue between the present self and the future self of the smoker about nicotine dependence. The experiment will compare the difference in Fagerström Test for Nicotine Dependence, Stages of Change, Future-Self Continuity Scale and Perceived Risks and Benefits Questionnaire scores before and after the dialogue between three groups, each named based on who the participant is talking with: Present Self, Future Self Smoking Cessation, and Future Self Still Smoking. We expect this new approach to lower nicotine dependence and lead to long-term healthy behaviour choices as well as pave the way for novel VR-treatment techniques.
Gizem Şenel, Mel Slater
VR as a Persuasive Technology “in the Wild”. The Effect of Immersive VR on Intent to Change Towards Water Conservation
Abstract
The combination of VR with the correct psychological mechanism could become a powerful persuasion system to stimulate intent to change towards important environmental issues such as water conservation. However, very limited research has been reported on VR usage in this area. Therefore, we conducted a between-groups study to investigate whether the level of presence felt in a VR environment together with a trigger mechanism such as guilt could spark intent to change towards water conservation. Participants were exposed to a persuasive message about water conservation in one of three conditions: audio only, simple VR and visually rich VR. Forty participants completed the study “in the wild”. The results showed that while intent to change increased in all three groups, both VR groups indicated lower levels of change than the audio only group. Additionally, a positive correlation, albeit small, was found between presence and cued recall along with presence and intent to change. These results furthermore showed that presence could play a role in behavior modification and intent to change.
Konstantinos Chionidis, Wendy Powell
Virtual Reality Experiential Training for Individuals with Autism: The Airport Scenario
Abstract
One of the common traits of individuals with Autism Spectrum Disorder (ASD) is the inclination to perceive unknown situations and environments as a source of stress and anxiety. It is common for them to avoid novel experiences, including traveling to new places, and therefore an environment like an airport can be overwhelming. Virtual Reality can be a functional tool to provide ASD users with a training system that allows them to experience the airport process, even several times, before facing the real-life experience. We hereby present the scenario in which our investigation takes place, the system we developed, and a draft of the evaluation of the training technique.
Agata Marta Soccini, Simone Antonio Giuseppe Cuccurullo, Federica Cena
A Machine Learning Tool to Match 2D Drawings and 3D Objects’ Category for Populating Mockups in VR
Abstract
Virtual Environments (VE) relying on Virtual Reality (VR) can facilitate co-design by enabling users to create 3D mockups directly in the VE. Databases of 3D objects are helpful for populating the mockup and require retrieval methods for the users. In the early stages of the design process, the mockups are made up of common objects rather than variations of objects. Retrieving a 3D object in a large database can be tedious, even more so in VR. Taking into account the need for natural user interaction and the need to populate the mockup with common 3D objects, we propose, in this paper, a retrieval method based on 2D sketching in VR and machine learning. Our system is able to recognize 90 categories of objects related to VR interior design with an accuracy of up to 86%. A preliminary study confirms the performance of the proposed solution.
Romain Terrier, Nicolas Martin
Performance Design Assistance by Projector-Camera Feedback Simulation
Abstract
In this paper, we propose a simulation method for a projector-camera system that aims to assist the design of installation art using a projector-camera feedback system. The simulator requires a precise estimation of the optical response between the camera and the projector via the projection target. To meet this requirement, we model the optical response with a non-linear projector response model and Light Transport matrices. Experimental results show that our proposed simulator successfully reproduced the behavior of a pixel-feedback animation, an illumination projection used under unstable conditions, in addition to static appearance manipulation.
Taichi Kagawa, Toshiyuki Amano
Backmatter
Metadata
Title
Virtual Reality and Augmented Reality
Editors
Dr. Patrick Bourdot
Dr. Victoria Interrante
Regis Kopper
Anne-Hélène Olivier
Prof. Hideo Saito
Gabriel Zachmann
Copyright Year
2020
Electronic ISBN
978-3-030-62655-6
Print ISBN
978-3-030-62654-9
DOI
https://doi.org/10.1007/978-3-030-62655-6