
2022 | Book

Extended Reality

First International Conference, XR Salento 2022, Lecce, Italy, July 6–8, 2022, Proceedings, Part II


About this book

This two-volume set, LNCS 13445 and 13446, constitutes the refereed proceedings of the First International Conference on eXtended Reality, XR Salento 2022, held in Lecce, Italy, July 6–8, 2022. Due to the COVID-19 pandemic, the conference was held in hybrid mode.

The 42 full and 16 short papers were carefully reviewed and selected from 84 submissions. The papers discuss key issues, approaches, ideas, open problems, innovative applications, and trends in virtual reality, augmented reality, and mixed reality, with applications in cultural heritage, medicine, education, and industry.

Table of Contents

Frontmatter

eXtended Reality for Learning and Training

Frontmatter
Mixed Reality Agents for Automated Mentoring Processes
Abstract
Mentoring processes can enhance education by providing personalized advice and feedback to students. A challenge of mentoring is that, with a rising number of students, more mentors are required. As it is often infeasible to employ such a high number of mentors, automated tools can support the activities of mentors by, e.g., answering common questions. However, such tools can hurt students’ engagement as they can feel impersonal. Therefore, we developed mixed reality mentoring agents. They personify these automated tools, can interact directly with the students, and demonstrate practical tasks to them as a guide. On the technical level, this is realized by a behavior tree structure with blackboards that simulate the agent’s memory. With such a visual representation of the behavior, developers, teachers, and mentors alike can edit and define the mentoring capabilities of the agent. The implementation is open-source, and we added it to our Virtual Agents Framework, which allows developers to quickly add agents to cross-platform mixed reality applications. Moreover, we conducted a user study with the mentoring prototype. The results are promising: students perceived the mixed reality agents positively, with high usability, and as helpful advisors. Therefore, mixed reality mentoring agents have the potential to become widespread companions for students during their studies.
Benedikt Hensen, Danylo Bekhter, Dascha Blehm, Sebastian Meinberger, Ralf Klamma
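The behavior-tree-with-blackboard mechanism described in the abstract can be sketched in a few lines. This is an illustrative toy (the node types, status strings, and blackboard keys are invented here), not the API of the Virtual Agents Framework:

```python
class Blackboard(dict):
    """Shared key-value memory that behavior-tree nodes read and write."""

class Node:
    def tick(self, bb):
        raise NotImplementedError

class Condition(Node):
    """Succeeds iff a predicate over the blackboard holds."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, bb):
        return "SUCCESS" if self.predicate(bb) else "FAILURE"

class Action(Node):
    """Runs a side effect on the blackboard, then succeeds."""
    def __init__(self, effect):
        self.effect = effect
    def tick(self, bb):
        self.effect(bb)
        return "SUCCESS"

class Sequence(Node):
    """Ticks children in order; stops at the first non-success."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status != "SUCCESS":
                return status
        return "SUCCESS"

# A toy mentoring behavior: greet once, then answer a pending question.
bb = Blackboard(question="How do I submit?", greeted=False)
tree = Sequence(
    Action(lambda b: b.update(greeted=True)),
    Condition(lambda b: b.get("question") is not None),
    Action(lambda b: b.update(answer=f"Answering: {b['question']}")),
)
status = tree.tick(bb)
```

Because the tree is plain data, an editor UI can let non-programmers rearrange nodes, which is the editing capability the abstract describes.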
Asynchronous Manual Work in Mixed Reality Remote Collaboration
Abstract
Research in Collaborative Virtual Environments (CVEs) is becoming more and more significant with the increasing accessibility of Virtual Reality (VR) and Augmented Reality (AR) technology, additionally reinforced by the growing demand for remote collaboration groupware. While research focuses on methods for synchronous remote collaboration, asynchronous remote collaboration remains a niche. Nevertheless, future CVEs should support both paradigms of collaborative work, since asynchronous collaboration also has its benefits, for instance more flexible time coordination. In this paper we present a concept for recording and later playing back highly interactive collaborative tasks in Mixed Reality (MR). Furthermore, we apply the concept to an assembly training scenario from the manufacturing industry and test it in pilot user experiments. The pilot study compared two modalities, the first using a manufacturing manual and the other using our concept and featuring a ghost avatar. First results revealed no significant differences between the two modalities in terms of completion time, hand movements, cognitive workload, and usability. Some differences were unexpected; however, these results and the feedback from the participants provide insights to further develop our concept.
Anjela Mayer, Théo Combe, Jean-Rémy Chardonnet, Jivka Ovtcharova
A Virtual Reality Serious Game for Children with Dyslexia: DixGame
Abstract
Children with reading and writing difficulties, such as dyslexia, have been directly affected by the COVID-19 situation because they could not have the teacher’s face-to-face support. Consequently, new devices and technological applications are being used in educational contexts to raise interest in learning. This paper presents the design of a Virtual Reality serious game called DixGame. This game is a pedagogical tool specifically oriented to children between 8 and 12 years old with dyslexia. Two immersive mini-games are included: a Whack-a-mole and a Memory, which aim to improve different skills while keeping the children focused on tasks. Whack-a-mole works on attention and on visual and reading agility through the recognition of correct letters and words. Memory improves memory and attention ability by pairing letter cards. The mini-game structure makes it possible to incorporate new levels or games, and the progressive increase in difficulty allows for autonomous treatment.
Henar Guillen-Sanz, Bruno Rodríguez-Garcia, Kim Martinez, María Consuelo Saiz Manzanares
Processing Physiological Sensor Data in Near Real-Time as Social Signals for Their Use on Social Virtual Reality Platforms
Abstract
Social interactions increasingly shift to computer-mediated communication channels. Compared to face-to-face communication, their use suffers from a loss or distortion in the transmission of social signals, which are prerequisites of social interactions. Social virtual reality platforms offer users a variety of possibilities to express themselves verbally as well as non-verbally. Although these platforms take steps towards compensating for the addressed communication gap, there is still high demand to ensure and further improve the correct transmission of social signals. To address this issue, we investigate the processing of physiological sensor data as social signals. This paper provides two major contributions. First, we present a concept for processing physiological sensor data in near real-time as social signals. The concept enables the processing of physiological sensor data on an individual level as well as across all users. For both the individual user and the collective, single sensors or the data from the whole sensor cluster can be analysed, resulting in four ways of analysis. Second, we provide concrete suggestions for a software setup, based on an extensive analysis of available open-source software, to support a potential future implementation of the proposed concept. The results of this work are highly relevant for social virtual reality platforms, especially since modern head-mounted displays are often already equipped with appropriate measurement sensors. Moreover, the results can also be transferred to numerous other media, applications, and research fields concerned with processing physiological sensor data, which reinforces the added value provided.
Fabio Genz, Clemens Hufeld, Dieter Kranzlmüller
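The four ways of analysis mentioned above (individual vs. collective users, single sensor vs. whole sensor cluster) can be illustrated with a toy aggregation. Sensor names and readings are invented for illustration; the paper’s actual software setup is not reproduced here:

```python
from statistics import mean

# Hypothetical per-user sensor streams (heart rate in bpm,
# electrodermal activity in microsiemens).
readings = {
    "alice": {"heart_rate": [72, 75, 71], "eda": [0.31, 0.35, 0.33]},
    "bob":   {"heart_rate": [88, 90, 86], "eda": [0.40, 0.38, 0.41]},
}

def individual_single(user, sensor):
    """Mode 1: one user, one sensor."""
    return mean(readings[user][sensor])

def individual_cluster(user):
    """Mode 2: one user, the whole sensor cluster."""
    return {s: mean(v) for s, v in readings[user].items()}

def collective_single(sensor):
    """Mode 3: all users, one sensor."""
    return mean(x for u in readings.values() for x in u[sensor])

def collective_cluster():
    """Mode 4: all users, the whole sensor cluster."""
    sensors = {s for u in readings.values() for s in u}
    return {s: collective_single(s) for s in sensors}
```

A near-real-time system would run these aggregations over a sliding window of recent samples rather than over the whole stream.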
Developing a Tutorial for Improving Usability and User Skills in an Immersive Virtual Reality Experience
Abstract
The fast development and progressive price reduction of Virtual Reality (VR) devices open up a broad range of VR applications. Especially interesting are applications focused on educational objectives. However, before these VR applications can be widely deployed in the educational system, some main issues must be solved to optimize their efficiency in the student’s autonomous learning process. While in non-VR games designers have consistently developed introductory tutorials to prepare new players for the game’s mechanics, in the case of VR the design of these tutorials is still an open issue. This research presents a tutorial for VR educational applications to help users become familiar with the virtual environment and to learn the use of the interaction devices and the different mechanics within the experiences. In addition, the usability of this tutorial was tested with final users to ensure its effectiveness.
Ines Miguel-Alonso, Bruno Rodriguez-Garcia, David Checa, Lucio Tommaso De Paolis
Challenges in Virtual Reality Training for CBRN Events
Abstract
The re-emergence of chemical, biological, radiological, and nuclear (CBRN) threats as a key area of focus for military (as well as civilian) actors, paired with the early stage of CBRN VR training, creates a strong opportunity for future research. Improvements in game-engine technology and in virtual reality hardware and software can improve CBRN training and simulation for military and civilian responders to CBRN events. Therefore, in this work we discuss the challenges of developing a European virtual-reality-based CBRN training. By standardizing CBRN training at a European level, interoperability between different actors (military and civilian) and European nationalities shall be increased. We present the main cornerstones of a VR CBRN training that shall be tackled in the VERTIgO project: (1) the Exercise Simulation Platform, (2) the Scenario Creator, and (3) a CBRN VR mask.
Georg Regal, Helmut Schrom-Feiertag, Massimo Migliorini, Massimiliano Guarneri, Daniele Di Giovanni, Andrea D’Angelo, Markus Murtinger
A Preliminary Study on the Teaching Mode of Interactive VR Painting Ability Cultivation
Abstract
This paper introduces the advantages and characteristics of VR painting compared with traditional painting and analyzes its application prospects. Based on our experience in interactive VR painting teaching, we make an initial exploration of a mode for cultivating VR painting talent and summarize the challenges VR painting currently faces.
YanXiang Zhang, Yang Chen

eXtended Reality in Education

Frontmatter
Factors in the Cognitive-Emotional Impact of Educational Environmental Narrative Videogames
Abstract
This contribution describes an experiment carried out in 2020 with the goal of exploring factors affecting the cognitive-emotional impact of immersive VR Serious Games, and specifically of Educational Environmental Narrative Games. The experimental evaluation was aimed at better understanding three research questions: whether passive or active interaction is preferable for users’ factual and spatial knowledge acquisition; whether meaningfulness can be considered a relevant experience in a serious game (SG) context; and whether distraction has an impact on knowledge acquisition and engagement in immersive VR educational games. Although the experiment involved only a limited number of participants, our results led to the identification of some relevant tendencies and factors which ought to be considered in the development of future SGs, and which reveal the need for further studies in HCI and game design.
Sofia Pescarin, Delfina S. M. Pandiani
Instinct-Based Decision-Making in Interactive Narratives
Abstract
This paper examines the expressive potential of instinct-based decision-making as a method to enhance narrative immersion in interactive storytelling. One of the key challenges in proposing lean-back interactive narratives lies in the methods through which users and the system exchange inputs and outputs. While explicit interfaces tend to disrupt lean-back participation (demanding a lean-forward type of agency) and, thus, immersion in the narrative environment, this model proposes interactions based on diegetic stimuli that avoid interfaces and encourage instinctive and immediate reactions from the user in order to navigate an immersive narrative environment. The notion of instinct-based decision-making has been examined in three stages: (1) conceptualization through previous practices and a literature review, (2) design and production of a Cinematic Virtual Reality (CVR) interactive prototype, and (3) system testing of the model’s key functional aspects.
Tobías Palma Stade
The Application of Immersive Virtual Reality for Children’s Road Education: Validation of a Pedestrian Crossing Scenario
Abstract
Human beings face the matter of transportation and mobility from early childhood, as vulnerable non-motorized users; moreover, road injuries represent one of the leading causes of death for children. This work investigated the possibility of using virtual reality (VR) for road safety education. Specifically, through the observation of children’s behavior, a preliminary validation of an immersive virtual reality environment for a pedestrian crossing scenario (signal- and non-signal-controlled), a critical component of road safety, was performed. An experiment was carried out involving 46 middle school students aged between 11 and 13 years. Participants, wearing the headset, crossed the road in a virtual environment designed and implemented with Unity® software. The scenario consisted of training and trial sessions, both one-way and two-way, with and without a traffic signal. The goal of this preliminary work was to validate the pedestrian crossing scenario in order to use VR as a tool for road education. The results of this first analysis are promising: users’ behavior in the experiment was rather consistent with that in the real world. 31 participants waited for the green light to cross, and 11 crossed on red, matching what the participants had declared in a previous survey. Moreover, the analysis indicated that the average crossing speed recorded during the experiment was consistent with that reported in the literature.
Giulia De Cet, Andrea Baldassa, Mariaelena Tagliabue, Riccardo Rossi, Chiara Vianello, Massimiliano Gastaldi
Collaborative VR Scene Broadcasting for Geometry Education
Abstract
Virtual reality (VR) is promising for future education, and teachers need student management to improve class effectiveness. Drawing on the broadcasting advantages of video teaching, the authors put forward VR scene broadcasting (VRSB) for geometry education. VRSB comprises a database (DB), an active server page (ASP), and a distributed database (DDB), which enables teachers and students to hold different permissions and statuses, satisfying teachers’ teaching management needs and stimulating students’ creativity through interactive collaboration and independent exploration.
YanXiang Zhang, JiaYu Wang
Collaborative Mixed Reality Annotations System for Science and History Education Based on UWB Positioning and Low-Cost AR Glasses
Abstract
In this research, the authors designed a low-cost mixed reality (MR) collaborative annotations system for science and history education based on ultra-wideband (UWB) communication technology. The position of the user is provided by the gyroscope in the AR glasses and by a UWB antenna tag carried by the user. The system is suitable for science education in developing countries that lack quality science teaching resources. It can provide comparably sized things or scenes that may be difficult to see clearly, or that do not exist in daily life, for users to observe and experience. While using the system, users can interact with each other. The system has some unique advantages, such as low cost, accurate positioning in areas where the GPS signal is weak, and the combination of virtuality and reality in large indoor spaces.
YanXiang Zhang, LiTing Tang
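For readers unfamiliar with UWB positioning, the core geometric step is trilateration from anchor distances. The following is a generic textbook sketch under idealized (noise-free, 2D) assumptions, not the paper’s actual positioning pipeline:

```python
import math

def trilaterate(anchors, dists):
    """Solve a 2D position from three anchors and measured distances.

    Subtracting the first circle equation from the other two turns the
    quadratic system into a linear 2x2 system A [x, y]^T = b.
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Three hypothetical UWB anchors in a 10 m x 10 m room; the tag's true
# position is recovered exactly because the distances are noise-free.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
tag = (3.0, 4.0)
dists = [math.dist(tag, a) for a in anchors]
pos = trilaterate(anchors, dists)
```

Real UWB ranging is noisy, so deployed systems typically use more than three anchors and a least-squares or Kalman-filter estimate instead of this exact solve.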

Artificial Intelligence and Machine Learning for eXtended Reality

Frontmatter
Can AI Replace Conventional Markerless Tracking? A Comparative Performance Study for Mobile Augmented Reality Based on Artificial Intelligence
Abstract
AR is struggling to achieve maturity for the mass market. Indeed, there are still many challenging issues waiting to be discovered and improved in AR-related fields. Artificial Intelligence seems the most promising solution to overcome these limitations; indeed, the two can be combined to obtain unique and immersive experiences. Thus, in this work, we focus on integrating DL models into the pipeline of AR development. This paper describes an experiment performed as a comparative study to evaluate whether classification and/or object detection can be used as an alternative way to track objects in AR. In other words, we implemented a mobile application that is capable of exploiting AI-based models for classification and object detection and, at the same time, projecting the results into the AR environment. Several off-the-shelf devices have been used, in order to make the comparison consistent and to provide the community with useful insights into the opportunity to integrate AI models into AR environments and to what extent this can be convenient or not. Performance tests have been made in terms of both memory consumption and processing time, for both Android- and iOS-based applications.
Roberto Pierdicca, Flavio Tonetto, Marco Mameli, Riccardo Rosati, Primo Zingaretti
Find, Fuse, Fight: Genetic Algorithms to Provide Engaging Content for Multiplayer Augmented Reality Games
Abstract
In Augmented Reality (AR) mobile games, several technical aspects are still partially under-explored, thus limiting the creativity of game designers and the spectrum of possible uses of AR. As a result, too often AR is used only to superimpose predefined digital content onto real scenarios in a static way. In the present work, we have started to tackle this issue by designing a game to overcome the limited interactivity among players and the somewhat static use of AR on resource-limited devices (i.e., cell phones). In particular, we have designed and prototyped FFF: Find, Fuse, Fight, a game that supports multiplayer mode, offers a more creative use of AR, and demonstrates that Procedural Content Generation (PCG) techniques can be effectively exploited to introduce a higher degree of variability both in the content and in the gameplay, even on devices far less powerful than a standard PC. In particular, we developed a prototype that exploits Genetic Algorithms (GAs) to create new content and apply mesh deformations to 3D models in real time. We have used such content to prototype a mobile game that features AR battles among creatures in an online multiplayer environment. The prototypes have undergone a performance test to evaluate the feasibility of AR multiplayer games with generated content, collecting encouraging preliminary outcomes.
Federico Aliprandi, Renato Avellar Nobre, Laura Anna Ripamonti, Davide Gadia, Dario Maggiorini
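The select/crossover/mutate loop of a genetic algorithm that evolves deformation offsets can be sketched on a toy one-dimensional “mesh” profile. This is an illustrative reduction (the paper deforms 3D meshes in real time); the fitness function, operators, and parameters are all invented:

```python
import random

random.seed(42)  # deterministic for reproducibility

BASE = [0.0] * 8                                   # flat base profile
TARGET = [0.0, 0.5, 1.0, 1.5, 1.5, 1.0, 0.5, 0.0]  # desired silhouette

def fitness(genome):
    # Negative squared error between deformed profile and target;
    # higher is better, 0.0 is a perfect match.
    return -sum((b + g - t) ** 2 for b, g, t in zip(BASE, genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.3, scale=0.2):
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

# Evolve a population of per-vertex offset vectors with elitism.
pop = [[random.uniform(-1, 1) for _ in BASE] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                             # keep the elite
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
best = max(pop, key=fitness)
```

In a game, `best` would parameterize a mesh-deformation shader or vertex modifier, so each evolved genome yields a visually distinct creature.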
Synthetic Data Generation for Surface Defect Detection
Abstract
Ensuring continued quality is challenging, especially when customer satisfaction is the service provided. It seems to become easier with new technologies like Artificial Intelligence. However, field data are necessary to design an intelligent assistant but are not always available. Synthetic data are mainly used to replace real data. Made with a Generative Adversarial Network or a rendering engine, they aim to be as efficient as real data in training a neural network. While synthetic data generation has met the challenge of object detection, its capacity to deal with defect detection is unknown. Here we demonstrate how to generate such synthetic data to detect defects. Through iterations, we apply different methods from the literature to generate synthetic data for object detection, from how to extract a defect from the few data we have to how to organize the scene before data synthesis. Our study suggests that defect detection may be performed by training an object detector neural network with synthetic data, and it gives a protocol to do so, even if at this point no field experiments have been conducted to verify our detector’s performance under real conditions. This experiment is the starting point for developing a mobile and automatic defect detector that might be adapted to ensure new product quality.
Déborah Lebert, Jérémy Plouzeau, Jean-Philippe Farrugia, Florence Danglade, Frédéric Merienne

eXtended Reality in Geo-information Sciences

Frontmatter
ARtefact: A Conceptual Framework for the Integrated Information Management of Archaeological Excavations
Abstract
The information management of archaeological excavations (and the follow-up conservation and restoration of excavated objects) is complex, time consuming, and laborious in practice. Currently available technological aids in the broader area of digital archaeology are limited in scope, while most are not interoperable. Herein, we propose ARtefact, a conceptual framework which encompasses an integrated technological toolset supporting the digital documentation, knowledge management, and interactive presentation of digital resources produced throughout the archaeological excavation and the study/conservation of artefacts. ARtefact accounts for several end products: a mobile digital documentation application (executed on mobile devices with built-in depth sensors) which addresses the needs of all stakeholders involved in the documentation of the excavation process and its findings (mainly field archaeologists and conservators); a web-based knowledge management tool which enables archaeologists to specify semantic relationships between digital resources; and authoring tools used by curators and archaeologists without any technical expertise to create custom AR/VR applications which allow users to retrieve and interact with the ARtefact digital resources, thus enhancing the experience of physical and virtual visits to archaeological sites and museum exhibitions. A prototype implementation of ARtefact will be validated in pilot studies conducted at an active archaeological excavation site and an archaeological museum in Greece.
Damianos Gavalas, Vlasios Kasapakis, Evangelia Kavakli, Panayiotis Koutsabasis, Despina Catapoti, Spyros Vosinakis
Geomatics Meets XR: A Brief Overview of the Synergy Between Geospatial Data and Augmented Visualization
Abstract
Extended Reality (XR) is an extension of the real world obtained through innovative features that allow users to perceive the surrounding reality in a different and enhanced way, combining it with virtual elements. XR spans three different technologies that change the perception of reality: Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). The latter combines VR and AR to create a more complex experience: a new reality resulting from the union of real and virtual that allows the user to interact simultaneously with the real world and the virtual environment. In recent years, XR technology has been employed in several fields of application; geomatics, however, remains the least explored. Given the extreme complexity of heterogeneous geomatics data, visualization is difficult, and there is a need to better understand the potential of MR for both users and experts. Considering the potential of this technology in geomatics, in this paper we present GEOLENS, an MR application in an eXtended environment. With GEOLENS it is possible to visualize digitally produced objects and information in the field and superimpose them on reality. The advantages of this solution in the fields of surveying, architectural design, and Geographical Information Systems (GIS) are the interaction of digital design with reality for greater decision control, time saving, and office-field sharing. This paper reports on the development of a cloud-based platform which acts as a repository of geomatics data, ready to use in the real environment. The features described prove the suitability of the solution for multiple purposes.
Roberto Pierdicca, Maurizio Mulliri, Matteo Lucesoli, Fabio Piccinini, Eva Savina Malinverni
Utilization of Geographic Data for the Creation of Occlusion Models in the Context of Mixed Reality Applications
Abstract
Emergency responder training can benefit from outdoor use of Mixed Reality (MR) devices to make training more realistic and to allow simulations that would otherwise not be possible due to safety risks or cost-effectiveness. But outdoor use of MR requires knowledge of the topography and objects in the area to enable accurate interaction between the real world that trainees experience and the virtual elements placed in it. We show an approach utilizing elevation data and geographic information systems to create effective occlusion models that can be used in such outdoor training simulations. Initial results show that this approach enables accurate occlusion and placement of virtual objects within an urban environment, improving immersion and spatial perception for trainees. In the future, we plan to improve the approach with on-the-fly updates to outdated information in the occlusion models.
Christoph Praschl, Erik Thiele, Oliver Krauss
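One core query such an occlusion model must answer is line-of-sight over terrain. The following is a hedged sketch, assuming a simple raster elevation grid and nearest-neighbour sampling; the paper’s actual GIS-based pipeline is not reproduced here:

```python
def visible(heightmap, a, b, eye_height=1.7, samples=100):
    """Return True if point b is visible from point a over the terrain.

    heightmap: dict (x, y) -> ground elevation on an integer grid.
    a, b: (x, y) grid coordinates of observer and target.
    """
    def ground(x, y):
        # Nearest-neighbour lookup; a real system would interpolate.
        return heightmap[(round(x), round(y))]

    ax, ay = a
    bx, by = b
    z0 = ground(ax, ay) + eye_height   # observer eye height
    z1 = ground(bx, by) + eye_height   # target height
    for i in range(1, samples):
        t = i / samples
        x, y = ax + t * (bx - ax), ay + t * (by - ay)
        ray_z = z0 + t * (z1 - z0)     # height of the sight line at t
        if ground(x, y) > ray_z:       # terrain blocks the ray
            return False
    return True

# Flat 11x11 terrain, and a copy with a 10 m ridge between the points.
flat = {(x, y): 0.0 for x in range(11) for y in range(11)}
ridge = dict(flat)
for y in range(11):
    ridge[(5, y)] = 10.0
```

An MR occlusion model would render such blocked geometry into the depth buffer only, so virtual objects behind real terrain or buildings are correctly hidden.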
Development of an Open-Source 3D WebGIS Framework to Promote Cultural Heritage Dissemination
Abstract
The Italian territory is characterized by a conspicuous number of cultural heritage sites to be promoted and preserved. Therefore, regional and local authorities need to identify an economical and efficient solution to monitor their status and to encourage knowledge of them among heritage and environmental agencies and the business community. Geographical Information Systems have long been used to store and manage data concerning cultural heritage sites, although only in the last few years has their role become more and more important thanks to the development of web applications. These support cultural heritage dissemination as well as providing a relevant tool for data treatment. Therefore, in this study, an interactive WebGIS platform aimed at supporting cultural heritage management and enhancement has been developed. In accordance with the standards proposed by the Open Geospatial Consortium and the EU INSPIRE directive, Free and Open-Source Software for Geographic information systems was applied to develop code implementing the whole three-tier configuration. Moreover, a user-friendly interactive interface was also programmed to help IT and non-IT users manage the stored data. Although the proposed WebGIS appears to be the optimal tool to meet the research purposes, further improvements are still needed to handle multiple connections simultaneously and to increase the real-time processing options.
Alessandra Capolupo, Cristina Monterisi, Eufemia Tarantino

Industrial eXtended Reality

Frontmatter
A Framework for Developing XR Applications Including Multiple Sensorial Media
Abstract
eXtended Reality applications include multiple sensorial media to increase the quality of the User Experience. In addition to traditional media, video, and sound, two other senses are typically integrated: touch and smell. The development of applications that integrate multiple sensorial media requires a framework for properly managing their activation and synchronization. The paper describes a framework for the development of eXtended Reality mulsemedia applications and presents some applications based on the integration of smells developed using the framework.
M. Bordegoni, M. Carulli, E. Spadoni
Augmented Reality Remote Maintenance in Industry: A Systematic Literature Review
Abstract
Augmented reality (AR) is a promising technology for supporting industrial maintenance applications. Two major types of AR technology are used for maintenance applications. One of those is AR remote maintenance, a technology that connects remote experts to on-site technicians to work collaboratively on industrial maintenance applications. This seems especially valuable for nonstandardized tasks. Although several recent systematic literature reviews (SLRs) on AR for maintenance applications have been published, the growing body of literature calls for an ever more differentiated view of the knowledge base of AR remote maintenance. Therefore, this paper aims to map the AR remote maintenance literature by conducting an SLR, characterizing the literature, describing applications in industry, and making suggestions for further research. Based on the analysis of 89 articles from the last two decades, this paper contributes the following findings: (1) the research field has a strong engineering focus on system development; (2) scholars share a common understanding of AR remote maintenance, despite using heterogeneous terminology; (3) the prevailing study design only allows for limited comparison of prototypes and applications; (4) transferability to industrial maintenance professionals is limited due to the study design; (5) AR remote maintenance appears to raise business model opportunities for product-service systems; and (6) the diversity of AR remote maintenance applications indicates the technology’s industrial versatility. Overall, the maturity of the research field is increasing; however, it is still at an early stage. Based on these findings, we make two proposals for advancing the AR remote maintenance research field.
David Breitkreuz, Maike Müller, Dirk Stegelmeyer, Rakesh Mishra
Virtual Teleoperation Setup for a Bimanual Bartending Robot
Abstract
This paper presents the preliminary design of a teleoperation system for a bimanual bartending robot, with reference to the BRILLO (Bartending Robot for Interactive Long-Lasting Operations) project. The aim is to simulate the remote control of the robotic bartender by a human operator in an intuitive manner, using Virtual Reality technologies. The proposed Virtual Reality architecture is based on a commercial head-mounted display with a pair of hand controllers and on the virtual simulation of the robot’s remote environment with the robotic simulator CoppeliaSim. First, virtual simulations of the robot environment allowed us to identify the possible scenarios and interactions between the customers and the different robotic systems inside the automated bar: the totem for selecting and paying for the order, the robotic bartender preparing the cocktail, and the mobile robot serving the cocktail at the table. Second, focusing on a sequence of main tasks that the robotic bartender must perform for cocktail preparation, the operator’s control of the simulated robotic system was reproduced. In fact, the aim of this first experimental phase is to test the interaction between the human operator and the simulated immersive environment for the remote control of the robotic system. Two use cases were reproduced: the first relates to recovery from a failure situation, such as the fall of a glass, while the second refers to trajectory training to perform some repeated actions. Six operators (three males and three females) who already knew the tasks, aged between 25 and 40 years and with at least minimal experience with VR technology for personal entertainment, were involved in the test phase. The paper finally discusses the perception of the involved operators regarding the use of the proposed VR architecture in terms of usability and mental workload.
Sara Buonocore, Stanislao Grazioso, Giuseppe Di Gironimo

eXtended Reality in the Digital Transformation of Museums

Frontmatter
Virtualization and Vice Versa: A New Procedural Model of the Reverse Virtualization for the User Behavior Tracking in the Virtual Museums
Abstract
In this paper we present a method for user behavior (UB) tracking by capturing and measuring user activities through a defined procedural model of the reverse virtualization process, implementing a proof of concept in a real case scenario: the Civic Gallery of Ascoli. In order to define a universal model of such a “vice versa” virtual reality (VR) experience, we assigned particular descriptive functions (descriptors) to each interactive feature of the virtual user space. In this virtualization phase we store user interaction information locally using the WebSocket streams protocol, ensuring complete control and manipulation of the monitored functions. Our algorithm first collects the user interaction data and extracts the descriptors’ arguments into an indexed vector of corresponding variables. The next step determines the UB pattern by solving the inverse descriptive functions in combination with an appropriate statistical analysis of the gathered data. The final result of the proposed method is a repository of salient data that is used for further user experience improvement, as well as to enable museums to identify the most important points of visitor interest in virtual web tours. Our approach also offers the potential benefit of using the obtained results in the automatic calculation and prediction of UB patterns using artificial intelligence (AI).
Iva Vasic, Aleksandra Pauls, Adriano Mancini, Ramona Quattrini, Roberto Pierdicca, Renato Angeloni, Eva S. Malinverni, Emanuele Frontoni, Paolo Clini, Bata Vasic
“You Can Tell a Man by the Emotion He Feels”: How Emotions Influence Visual Inspection of Abstract Art in Immersive Virtual Reality
Abstract
Art is a complex subject of analysis. Nonetheless, Empirical Aesthetics has shown that the interaction between bottom-up and top-down mechanisms shapes individuals’ perception of a work of art. Recently, the Vienna Integrated Model for Art Perception [1] added that, during the observation of an artwork, the emotional state of the observer, as a top-down component, can influence the visual perception of the artwork, as a bottom-up one. Positive emotions can influence visual exploration by broadening the attention focus during the observation of a normal stimulus [2]. However, whether this mechanism also applies to peculiar objects, i.e., abstract paintings, is still unexplored. In this study, we investigated how the emotional state of subjects influenced their subsequent visual exploration of abstract works of art presented in an immersive format. Thirty participants (20 males, 10 females) were emotionally primed with either a positive (condition 1), negative (condition 2) or neutral affect (condition 3, the control condition) before observing 11 abstract paintings displayed in an immersive format. Participants’ eye gaze was measured while they viewed the artistic stimuli, using an eye-tracking device integrated in a virtual reality headset (HTC VIVE Pro Eye). Analyses of participants’ eye-tracking metrics (fixations and saccades) showed that individuals experiencing an induced positive mood broadened their visual attention, thus exploring each painting more, while participants in a negative mood generally explored each work of art less. This study confirmed that positive emotions led subjects to visually explore the paintings presented in a virtual environment more, and confirmed the influence of the emotional state of the observer, as a top-down component, also in the visual exploration of an artwork.
Marta Pizzolante, Alice Chirico
Augmented Reality and 3D Printing for Archaeological Heritage: Evaluation of Visitor Experience
Abstract
Augmented Reality (AR) and 3D printing have increasingly been used in archaeological and cultural heritage to make artifacts and environments accessible to the general public. This paper presents the case study of the Ljungaviken dog, an archaeological find of dog skeleton remains dated to around 8000 years ago. The dog remains were digitized using 3D scanning and displayed in an AR application; a physical replica was also created with 3D printing. Both the AR application and the 3D printed copy were shown in a temporary museum exhibition. In this paper, we present an evaluation of the visitor experience based on a study with 42 participants. The aspects evaluated relate to the realism, enjoyment, and ease of use of the AR application. Moreover, the two media are compared in terms of understanding, visual quality, and experience satisfaction. The results show an overall positive experience for both display solutions, with slightly higher scores for the AR application in the comparison. When asked about overall preference, participants reported similar results for both media. Given the issues of displaying fragile objects in a museum setting, as well as recent pandemic-related closures and restricted availability, the results presented in this paper point to digital artifacts as a positive alternative for showcasing our cultural heritage.
Valeria Garro, Veronica Sundstedt
Building Blocks for Multi-dimensional WebXR Inspection Tools Targeting Cultural Heritage
Abstract
Data exploration and inspection within semantically enriched multi-dimensional contexts may benefit from immersive VR presentation when proper 3D user interfaces are adopted. WebXR represents a great opportunity to investigate, experiment with, develop and assess advanced multi-dimensional interactive tools for Cultural Heritage, making them accessible through a common web browser. We present and describe the potential of WebXR and a set of building blocks for crafting such immersive data inspection tools, exploiting recent web standards and spatial user interfaces. We describe the current state of the EMviq tool, developed within the SSHOC European project, and how it takes advantage of these components for online immersive sessions. EMviq makes it possible to visually inspect, query and explore an Extended Matrix dataset and all the information within the knowledge graph relating to the interpretative datasets; in this paper it is applied to the case studies of the Roman theatre of Catania and the Montebelluna smithy. The main functionalities discussed are spatio-temporal exploration, search and selection of stratigraphic units, and the presentation of metadata and paradata related to data provenance (both objective and interpretative).
Bruno Fanini, Emanuel Demetrescu, Alberto Bucciero, Alessandra Chirivi, Francesco Giuri, Ivan Ferrari, Nicola Delbarba
Comparing the Impact of Low-Cost 360° Cultural Heritage Videos Displayed in 2D Screens Versus Virtual Reality Headsets
Abstract
The continuous price reduction of head-mounted displays (HMDs) raises the following question in the field of cultural heritage: is it possible to adapt desktop passive virtual reality (VR) 360° experiences for HMDs? This work presents a comparison of low-cost 360° videos of cultural heritage displayed on two devices: a desktop display and an HMD. The case study is the virtual reconstruction of Burgos (Spain) in 1921. The key features of these videos are their short duration, a virtual reconstruction based on 3D modelling and photo editing, and the inclusion of real actors acting out looping micro-stories. The comparison of the two display devices was carried out by a group of 32 students from the University of Burgos. The validation covers user satisfaction, knowledge acquisition and visual identification. The results are the following: 1) better knowledge acquisition and immersion for the HMD group, 2) better user satisfaction for the desktop group and 3) more faults identified in relation to the characters by the HMD group. In light of these results, the most important elements to improve are the integration of the characters and the length of the videos.
Bruno Rodriguez-Garcia, Mario Alaguero, Henar Guillen-Sanz, Ines Miguel-Alonso

eXtended Reality Beyond the Five Senses

Frontmatter
Non-immersive Versus Immersive Extended Reality for Motor Imagery Neurofeedback Within a Brain-Computer Interface
Abstract
Sensory feedback was employed in the present work to remap brain signals into sensory information. In particular, sensorimotor rhythms associated with motor imagery were measured as a means to interact with an extended reality (XR) environment. The aim of such neurofeedback was to let users become aware of their ability to imagine a movement. A brain-computer interface based on motor imagery was thus implemented by using a consumer-grade electroencephalograph and by relying on wearable and portable feedback actuators. Visual and vibrotactile sensory feedback modalities were used simultaneously to provide an engaging multimodal feedback in XR. Both a non-immersive and an immersive version of the system were considered and compared. Preliminary validation was carried out with four healthy subjects participating in a total of four sessions on different days. Experiments were conducted according to a widespread synchronous paradigm in which an application provides the timing for the motor imagery tasks. Performance was compared in terms of classification accuracy. Overall, subjects preferred the immersive neurofeedback because it allowed higher concentration during the experiments, but there was not enough evidence to prove its actual effectiveness: its mean classification accuracy was about 65%, whereas the non-immersive neurofeedback reached about 75%. Future experiments could extend this comparison to more subjects and more sessions, given the relevance of possible applications in rehabilitation. Moreover, the immersive XR implementation could be improved to provide a greater sense of embodiment.
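The performance metric used in the abstract, classification accuracy over motor imagery trials, can be sketched as follows. The trial labels below are made up for illustration and are not the study’s data; they are only chosen so that the two conditions land near the reported 65% and 75% figures.

```python
def accuracy(true_labels, predicted_labels):
    """Fraction of trials the classifier labeled correctly."""
    assert len(true_labels) == len(predicted_labels)
    hits = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return hits / len(true_labels)

# Illustrative labels for one synchronous session per condition:
# "MI" = motor imagery trial, "rest" = rest trial.
true_trials        = ["MI", "rest", "MI", "MI", "rest", "rest", "MI", "rest"]
immersive_pred     = ["MI", "rest", "rest", "MI", "MI", "rest", "rest", "rest"]  # 5/8 correct
non_immersive_pred = ["MI", "rest", "MI", "MI", "MI", "rest", "MI", "rest"]      # 7/8 correct

print(accuracy(true_trials, immersive_pred))      # 0.625
print(accuracy(true_trials, non_immersive_pred))  # 0.875
```

In the synchronous paradigm, each trial’s true label is known from the cue timing provided by the application, so accuracy can be computed per session and averaged per condition.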
Pasquale Arpaia, Damien Coyle, Francesco Donnarumma, Antonio Esposito, Angela Natalizio, Marco Parvis
Virtual Reality Enhances EEG-Based Neurofeedback for Emotional Self-regulation
Abstract
A pilot study investigating possible differences between virtual reality-based neurofeedback and traditional neurofeedback is presented. The neurofeedback training aimed to strengthen emotional regulation capacity. The neurofeedback task was to down-regulate negative emotions by decreasing the beta band power measured over the midline areas of the scalp (i.e., FCz-CPz). Negative International Affective Picture System images were chosen as eliciting stimuli. Three healthy subjects participated in the experimental activities. Each of them underwent three VR-based neurofeedback sessions and three neurofeedback sessions delivered on a traditional 2D screen. Each neurofeedback training session was preceded by a calibration phase to record the rest and baseline values and adapt the neurofeedback system to the user. For the majority of sessions, the average high beta band power during the neurofeedback training remained below the baseline, as expected. In line with previous studies, future work should investigate the efficacy of virtual reality-based neurofeedback on physiological responses and behavioral performance.
Pasquale Arpaia, Damien Coyle, Giovanni D’Errico, Egidio De Benedetto, Lucio Tommaso De Paolis, Naomi du Bois, Sabrina Grassini, Giovanna Mastrati, Nicola Moccaldi, Ersilia Vallefuoco
Psychological and Educational Interventions Among Cancer Patients: A Systematic Review to Analyze the Role of Immersive Virtual Reality for Improving Patients’ Well-Being
Abstract
Previous studies show that a lack of information about cancer-related topics (e.g., diagnosis, treatments) and the impact of treatment toxicity on patients’ lives may undermine cancer patients’ psychological well-being. Psycho-educational interventions are therefore implemented to support the oncological population. This systematic review aims to explore the state of the art and the effectiveness of psychological and educational interventions implemented using Virtual Reality and designed for pediatric and adult cancer patients. The review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, and it was registered with the PROSPERO international prospective register of systematic reviews (registration number CRD42022308402). Twenty studies were included in the review. Our findings show that psychological interventions predominantly use emotion-focused strategies (i.e., distraction) to reduce patients’ emotional distress; educational studies, on the contrary, prefer cognitive-behavioral strategies (i.e., exposure) to restructure patients’ beliefs, increase their understanding of the procedure, and reduce situational anxiety. VR could be a promising and effective tool for supporting cancer patients’ needs. However, since most of these VR interventions assign the patient a passive role in coping with their diagnosis, future research should develop psychological and educational VR interventions whose primary goal is to make people with a cancer diagnosis active agents in their own psychological well-being, thereby supporting patients’ empowerment.
Maria Sansoni, Clelia Malighetti, Giuseppe Riva
Backmatter
Metadata
Title
Extended Reality
Editors
Lucio Tommaso De Paolis
Pasquale Arpaia
Marco Sacco
Copyright Year
2022
Electronic ISBN
978-3-031-15553-6
Print ISBN
978-3-031-15552-9
DOI
https://doi.org/10.1007/978-3-031-15553-6