2025 | Book

Virtual Reality and Mixed Reality

21st EuroXR International Conference, EuroXR 2024, Athens, Greece, November 27–29, 2024, Proceedings

Edited by: Arcadio Reyes-Lecuona, Gabriel Zachmann, Monica Bordegoni, Jian Chen, Giannis Karaseitanidis, Alain Pagani, Patrick Bourdot

Publisher: Springer Nature Switzerland

Book series: Lecture Notes in Computer Science

About this book

This book constitutes the refereed proceedings of the 21st International Conference on Virtual Reality and Mixed Reality, EuroXR 2024, held in Athens, Greece, during November 27–29, 2024.

The 14 full papers and 1 short paper presented in this volume were carefully reviewed and selected from 47 submissions. The papers are grouped into the following topics: Designing Experiences, Human Factors, Rendering and Visualization, Interaction Techniques, and Education and Training. EuroXR aims to foster engagement between European industries, academia, and the public sector, and to promote the development and deployment of XR techniques in new and emerging fields as well as in existing ones.

Table of Contents

Frontmatter

Designing Experiences

Frontmatter
A Fully Immersive Dual-Task Using a Smartphone While Walking in a Virtual Reality Environment
Abstract
We developed a fully simulated dual-task paradigm for virtual environment studies involving both eye-head coordination and an increased workload. We used a physical smartphone to provide authentic touch and haptic feedback, and free walking on a treadmill, without any imposed control system, to closely replicate real-world postural behavior while facilitating natural walking. These simple and natural tasks may play a key role in future vision science studies.
Gildas Marin, Noélie Berjaud, Jeremy Julien, Marc Le Renard, Delphine Bernardin
User Experience Design and Evaluation of a Virtual Reality Museum Installation for Historic Sailing Ships
Abstract
Museums and exhibitions can benefit from immersive technologies by embodying visitors in rich interactive environments, where they can experience digitally reconstructed scenes and stories of the past. Nevertheless, the short-term Virtual Reality interaction offered in public spaces needs to be carefully designed in order to communicate the intended message and optimize the delivered experience, especially for first-time users. This paper contributes to the ongoing research on user experience in VR for cultural heritage by presenting the design and user evaluation of an installation that immerses users on board a historic sailing ship and has been part of a museum exhibition. We present the process of reconstructing the ship and developing the application, with emphasis on design choices about the user experience (scene presentation, content delivery, navigation and interaction modes, assistance, etc.). We performed a thorough user experience evaluation and present its results and our reflections on design issues regarding public VR installations for museums.
Spyros Vosinakis, George Anastassakis, Panayiotis Koutsabasis, Kostas Damianidis
Embodied Acts for Testing Mixed Reality Low-Fidelity Prototypes that Convey Intangible Cultural Heritage: Method and Case Study
Abstract
In the design of bodily engaging Mixed Reality (MR) installations for intangible cultural heritage (ICH) in museums, a gap remains regarding how to test low-fidelity prototypes to refine the detailed design and ensure successful final implementation. By integrating bodystorming early in the design thinking process, alongside lab-based experience prototyping, this study proposes a method to foster a deeper empathetic connection between cultural practitioners and the public, thereby enhancing visitor experiences. Focusing on the ICH of wooden shipbuilding, the process involves stakeholders and users from conceptualization to prototype testing, ensuring that final interactive installations are user-centered and culturally resonant. The findings demonstrate how this participatory design approach bridges digital technologies with traditional cultural practices, providing methodological insights for the transmission of ICH and visitor engagement that other designers can adopt.
Vasiliki Nikolakopoulou, Spyros Vosinakis, Modestos Stavrakis, Panayiotis Koutsabasis

Human Factors

Frontmatter
Impact of Acceleration and Angle in the Real Environment on Cybersickness
Abstract
The inconsistency between visually perceived movement and real movement is an important cause of cybersickness. Previous research on cybersickness mostly focused on building motion simulation platforms and rarely conducted experiments in real environments. This study explored the impact on cybersickness of different acceleration states of high-speed trains (acceleration > 0; acceleration = 0; acceleration < 0) and of the angle between the direction users faced and the direction the train ran (0°; 90°; 180°). We had participants use VR devices under nine different motion states and measured their motion sickness levels. The results showed that cybersickness during acceleration and deceleration was significantly higher than during uniform speed (p < 0.001). Cybersickness was also higher when the direction users faced was inconsistent with the direction the train ran than when the two directions were consistent.
Zhang Yanxiang, Wang Yuxiao
Study on the Influence of the Total Front HMI Size in Intelligent Cabins on the Drivers’ Eye Movement Behavior
Abstract
The large-screen and multi-screen design of the HMI (human-machine interface) is one of the essential manifestations of today’s intelligent cabins. However, this trend poses a risk of distracting drivers and affecting safe driving. We used VR (Virtual Reality) technology to provide participants with a simulated driving environment and used eye movement analysis to explore the impact of the total size of the front HMI in intelligent cabins on the drivers’ eye movement behavior. At the same time, we investigated the effects of driving age, driving time, and whether the cabin is equipped with a HUD (Head-up Display) on the drivers’ eye movement behavior. The results indicated that the total size of the HMI had a significant impact on the drivers’ eye movement behavior: the larger the total size, the more the drivers’ attention was distracted. The three variables of driving age, driving time, and whether a HUD is used had no significant impact on the drivers’ eye movement behavior.
Wang Yuxiao, Zhang Yanxiang
Contrast and Hue in Depth Perception for Virtual Reality: An Experimental Study
Abstract
Depth perception, essential for effective interactions within 3D spaces, encounters significant challenges in virtual reality (VR) environments due to the lack of comprehensive real-world depth cues. This study explores the roles of contrast and color, with a particular focus on Hue, as mechanisms for conveying depth in VR. We designed a comparative experiment that leverages the CIELAB color space to isolate the impact of contrast when exploring the influence of Hue on depth perception. By meticulously controlling contrast levels and concentrating on variations in Hue, our findings reveal that an excessive variety of Hues can confuse users’ depth perception. Furthermore, the use of contrasting Hues enhances participants’ ability to discern distances between objects.
Sun Yusi, Leith K. Y. Chan, Yong Hong Kuo
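The study above hinges on controlling contrast while varying hue in the CIELAB colour space. As a rough illustration of that kind of stimulus construction (not the authors’ actual procedure or code), the Python sketch below generates isoluminant colours by holding lightness L* and chroma fixed in LCh and varying only the hue angle, then converting to sRGB under a D65 white point; the parameter values and the conversion pipeline are assumptions chosen for illustration.

```python
import math

def lab_to_srgb(L, a, b):
    # CIELAB -> XYZ (D65 white point)
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0

    def f_inv(t):
        return t ** 3 if t ** 3 > 0.008856 else (t - 16.0 / 116.0) / 7.787

    X = 0.95047 * f_inv(fx)
    Y = 1.00000 * f_inv(fy)
    Z = 1.08883 * f_inv(fz)

    # XYZ -> linear sRGB
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b_ = 0.0557 * X - 0.2040 * Y + 1.0570 * Z

    # gamma-encode and clamp to 8-bit
    def encode(c):
        c = max(0.0, min(1.0, c))
        c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055
        return round(255 * max(0.0, min(1.0, c)))

    return encode(r), encode(g), encode(b_)

def isoluminant_hues(L=60.0, chroma=40.0, n=6):
    # Fixed L* and chroma; only the LCh hue angle changes between stimuli.
    colours = []
    for i in range(n):
        h = 2.0 * math.pi * i / n
        colours.append(lab_to_srgb(L, chroma * math.cos(h), chroma * math.sin(h)))
    return colours

if __name__ == "__main__":
    print(isoluminant_hues())
```

Each returned triple is an sRGB colour; because L* and chroma are shared, lightness-based contrast between the stimuli is held roughly constant while only hue differs.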

Rendering and Visualization

Frontmatter
Context-Based Annotation Visualisation in Virtual Reality: A Use Case in Archaeological Data Exploration
Abstract
In this paper, we present a Virtual Reality (VR) tool that facilitates the visualisation and exploration of context-based, multi-level annotations in archaeology. Thanks to photogrammetry and laser scanning techniques, archaeologists can capture and reconstruct ruins and excavations in 3D almost faithfully. They can thus maximise their work performance in the field within their limited excavation time and leave deep, but often slow, analysis tasks to later stages offsite. Using VR, we can enrich 3D captured and reconstructed environments by adding extra information for ex situ analysis of archaeological data. This approach can also help archaeologists speed up their collaboration with other experts or easily share information with the public. We aim to assist the exploration of the numerous annotations and the information at their disposal, which can be densely clustered in a 3D reconstructed world. We collaborated closely with archaeologists at AOROC in designing annotations that convey information with increasing Levels of Detail (LoDs), reflecting the complexity of the contained data. We then developed different interaction techniques that allow users to explore the annotations by switching between LoDs, including active selection by clicking, a proximity-based approach, and the combination of both. Additionally, we proposed an annotation grouping mechanism that enhances visual clarity by aggregating annotations in close spatial proximity, and evaluated its effectiveness. The results of our user study with non-experts, combined with a qualitative study with the archaeologists, showed user preference for the combination of the interaction techniques, but evidenced some limitations in the grouping of annotations.
Michele De Bonis, Huyen Nguyen, Patrick Bourdot
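The annotation grouping mechanism mentioned in the abstract above aggregates annotations that lie close together in the reconstructed 3D scene. As a minimal sketch of how such proximity grouping could work in principle (not the authors’ implementation), the Python snippet below merges annotations whose anchor points fall within a distance threshold; the class name, the threshold, and the sample data are hypothetical.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Annotation:
    label: str
    position: Tuple[float, float, float]  # anchor point in the reconstructed scene

def group_by_proximity(annotations: List[Annotation], radius: float) -> List[List[Annotation]]:
    """Greedy single-linkage grouping: an annotation joins the first group that
    already contains a member within `radius` of its anchor point."""
    groups: List[List[Annotation]] = []
    for ann in annotations:
        for group in groups:
            if any(math.dist(ann.position, other.position) <= radius for other in group):
                group.append(ann)
                break
        else:
            groups.append([ann])
    return groups

if __name__ == "__main__":
    anns = [
        Annotation("wall inscription", (0.00, 1.20, 0.00)),
        Annotation("pottery shard",    (0.10, 1.10, 0.05)),
        Annotation("column base",      (4.00, 0.00, 2.00)),
    ]
    for group in group_by_proximity(anns, radius=0.5):
        print([a.label for a in group])
```

Running it prints two groups: the two nearby annotations are merged into one aggregate, while the distant one remains on its own.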

Open Access

3-2-3 Multi-AI Segmentation Framework: LoD-Based, Incremental Segmentation of 3D Scan Data Using Any 2D AI
Abstract
In the age of spatial computing, computer vision is central, and efficient segmentation of 3D scan data becomes a fundamental task. Existing segmentation methods are often locked to specific AI models, lack level-of-detail (LoD) capabilities, and do not support efficient incremental segmentation. These limitations hinder their application to XR systems that integrate architectural and urban scales, which demand both at-scale and detailed, up-to-date segmentation information while leveraging limited local hardware in distributed computing environments.
In this work, we present a novel framework that integrates multiple 2D AIs through AI-agnostic 3D geometry feature fusion, ensuring spatial consistency while taking advantage of the rapid advancements in 2D AI models. Our framework performs LoD segmentation, enabling swift segmentation of downsampled geometry and full detail on needed segments. Additionally, it progressively builds a segmentation database, processing only newly added data and thereby avoiding point cloud reprocessing, a common limitation of previous methods.
In our use case, our framework analyzed a public building based on three scans: a drone LiDAR capture of the exterior, a static LiDAR capture of a room, and a user-held RGB-D camera capture of a section of the room. Our approach provided a fast understanding of building volumes, room elements, and a fully detailed geometry of a requested object, a “panel with good lighting and a view to a nearby building”, to implement an XR activity.
Our preliminary results are promising for applications in other urban and architectural contexts and point to further developments in our Geometric Data Inference AI as a cornerstone for deeper, more accurate Multi-AI integration.
Hermenegildo Solheiro, Lee Kent, Keisuke Toyoda
Enhancing Materiality in Adaptive BRDF Display with Light Ray Diffusion
Abstract
The adaptive BRDF display method utilizing multiple light ray projections (ABDM-MLRP) offers promising capabilities for replicating real-world materiality. This adaptive approach allows BRDF display on arbitrarily shaped surfaces without the need for geometrical calibration. While initially proposed for materiality reproduction of BRDFs, its applications extend to scientific simulations, product design, and the fusion of virtual and real-world environments in the emerging metaverse. However, ABDM-MLRP has difficulty accurately representing matte materiality due to the limited number of cast light rays. To address this, we propose employing light ray diffusion. This paper presents a precise model of light ray diffusion, demonstrating how the diffuser expands the numerical aperture of incident light rays to address glossiness issues in matte BRDF representation. However, straightforwardly introducing a diffuser plate blurs the displayed results and washes out the color gradation of structural color and specular reflections. To mitigate this, we propose introducing pre-compensation of the BRDF data. Our results show that the proposed method effectively resolves these issues, enhancing materiality representation, including structural color and glossiness, across various shapes and transparent objects.
Toshiyuki Amano, Sho Nishida

Interaction Techniques

Frontmatter
A Tangible Interface for Creating Virtual Cutaways in Mixed Reality
Abstract
In this paper, we discuss work-in-progress research on tangible interfaces for interactive cutaway visualizations in Mixed Reality (MR). We present an approach that allows users to flexibly and intuitively define virtual cutaway geometry by directly interacting with real-world objects. Using hand movements, the user physically traces the shape of the cutaway on an object, without the need for specialized input devices. Tangible interaction enables more accurate input and makes it easier for users to plan the placement and shape of cutaways that fit the object. We developed a prototype demonstrator on the Microsoft HoloLens 2, present the design and implementation details of such a system, and discuss insights gained from preliminary testing that motivate compelling directions for future work.
Xuyu Li, Priyansh Jalan, John Dingliana
MarkAR: Exploring the Benefits of Combining Microgestures and Mid-Air Marks to Trigger Commands in Augmented Reality
Abstract
Immersive technologies like Augmented Reality (AR) have promising potential for group activities. Collaborative AR systems allow co-located and/or distant users to jointly create, visualize, and manipulate a large variety of virtual content in order to take actions and make decisions. However, contrary to traditional 2D interfaces, there is currently no standard interface or generalized interaction technique for AR. This paper explores an approach for triggering commands linked to AR content. We propose MarkAR, a system combining virtual thumbnail representations of AR content (called vignettes) with a microgesture and 3D mid-air mark system inspired by marking menus. Experimental results from a qualitative study highlight that MarkAR offers good usability, subtle interactions, and an easy way to trigger both discrete and continuous commands in AR.
Charles Bailly, Lucas Pometti, Julien Castet
Visual Search in People with Macular Degeneration: A Virtual Reality Eye-Tracking Study
Abstract
This study is the first to explore the usability of a commercial off-the-shelf (COTS) VR headset for people with macular degeneration (MD) in the context of visual search. Fourteen participants were recruited: 9 fully sighted and 5 with sight loss due to MD. First, a visual grid search task was presented in which participants were asked to identify and discriminate virtual objects and shapes. Second, affective audio-visual videos were presented in VR to assess participants’ processing of affective information. The experimental procedure involved both a physical visual acuity Snellen test and a VR Snellen test conducted within a custom virtual environment. Most participants with MD reported increased visibility in VR. They were able to discriminate positive affective content and detect objects and shapes appearing at various locations across their entire field of view. Overall performance was linked to the level of visual impairment and whether it affected one or both eyes. Importantly, all participants successfully used the off-the-shelf VR headset. These findings provide preliminary insights into the usability of VR technologies for users with MD.
Theofilos Kempapidis, Ifigeneia Mavridou, Ellen Seiss, Claire L. Castle, Daisy Bradwell, Filip Panchevski, Sophia Cox, Renata S. M. Gomes

Education and Training

Frontmatter
Engagement and Attention in XR for Learning: Literature Review
Abstract
Engagement and attention are two crucial factors in determining the effectiveness of a learning experience, and both concepts have been studied extensively in the educational sciences. However, these two aspects have not been studied sufficiently in eXtended Reality (XR) settings. Recognizing the growing role of XR in education, this review aims to study and understand this gap through a systematic literature review following the PRISMA methodology, renowned for its thoroughness and reliability. Our main objectives were to identify different methods to measure students’ attention and engagement and to discuss strategies to enhance them. After a comprehensive analysis, we identified the most effective methods for measuring students’ engagement and attention in XR environments. Our study also provides evidence of the interest in an integrated evaluation approach that combines optimal tracking of attention and engagement during learning experiences. Finally, we discuss various approaches to enhance users’ engagement and attention, highlighting other relevant elements in designing and developing learning applications, such as the user-centred design of dedicated immersive paradigms and pedagogically grounded learning scenarios.
Carlos Lièvano Taborda, Huyen Nguyen, Patrick Bourdot
Exploring Students’ Acceptance of Augmented Reality Technologies in Education: An Extended Technology Acceptance Model Approach
Abstract
This study examines the potential of Augmented Reality (AR) to enhance and complement educational experiences within an extended theoretical framework of the Technology Acceptance Model (TAM). Given AR’s capacity for providing immersive, interactive learning beyond traditional classroom settings, we aim to understand the cognitive and emotional factors influencing students’ attitudes towards integrating these tools into their learning environments. Our investigation analyzes student responses to AR technologies, focusing on their perceptions of usefulness, ease of use, enjoyment, and anxiety. Employing a quantitative methodology, we gathered diverse student perspectives on AR to gain deeper insights into its educational significance and its impact on student engagement and learning outcomes. We found that students value AR’s ability to enhance learning experiences when implemented appropriately. High levels of perceived usefulness, ease of use, and enjoyment, coupled with low anxiety, indicate that students are more inclined to adopt AR technologies that are beneficial, user-friendly, engaging, and minimally stressful. Fostering wider acceptance and effective integration of AR into educational frameworks is critical to building on this positive reception. AR’s effectiveness in education depends on both functional and emotional aspects. By ensuring AR applications are purposefully designed to be useful, accessible, and enjoyable, educators can provide a more welcoming environment for the adoption of these innovative tools as complementary aids to traditional pedagogical practices. Furthermore, educators should strive to create learning experiences that foster creativity, collaboration, and critical thinking, either by integrating AR into existing lesson plans or by creating entirely customized activities. The study emphasizes AR’s potential benefits in improving educational outcomes when integrated holistically. To ensure equitable and engaging AR experiences for all students, further research is needed.
Farzin Matin, Eleni Mangina
A Hybrid Collaboration Design for a Large Scale Virtual Reality Training Environment to Fulfil the Belongingness Needs of Maslow’s Theory
Abstract
This work is part of a line of research that systematically investigates how virtual reality (VR) training can be designed to be pragmatically effective (e.g. scalable) while satisfying human needs. To this end, the design is guided inter alia by human motivational theory, in particular Maslow’s hierarchy of needs (MHN). The study at hand focused on the third level of MHN, covering the need for belongingness. Considering a classroom-sized VR setup, it appears obvious that multi-user implementations may be effective in creating a sense of belongingness. However, increasing sizes of training areas, a sense of competition, distractions, and mutual influence impose challenges that need to be overcome. The study evaluates a new concept for a diminished multi-user approach, in which only selected elements that support belongingness are synchronized, while disturbing elements are filtered out. Results show that the design, which is applicable to large environments, indeed increased communication, collaboration, and awareness, without affecting comfort or distraction compared to single-user simulations.
Yusra Tehreem, Thies Pfeiffer, Sven Wachsmuth
Backmatter
Metadata
Title
Virtual Reality and Mixed Reality
Edited by
Arcadio Reyes-Lecuona
Gabriel Zachmann
Monica Bordegoni
Jian Chen
Giannis Karaseitanidis
Alain Pagani
Patrick Bourdot
Copyright year
2025
Electronic ISBN
978-3-031-78593-1
Print ISBN
978-3-031-78592-4
DOI
https://doi.org/10.1007/978-3-031-78593-1