
Extended Reality

International Conference, XR Salento 2024, Lecce, Italy, September 4–7, 2024, Proceedings, Part I

  • 2024
  • Book

About this book

The four-volume proceedings set LNCS 15027, 15028, 15029 and 15030 constitutes the refereed proceedings of the International Conference on Extended Reality, XR Salento 2024, held in Lecce, Italy, during September 4–7, 2024.

The 63 full papers and 50 short papers included in these proceedings were carefully reviewed and selected from 147 submissions. They were organized in the following topical sections: Extended Reality; Artificial Intelligence & Extended Reality; Extended Reality and Serious Games in Medicine; Extended Reality in Medicine and Rehabilitation; Extended Reality in Industry; Extended Reality in Cultural Heritage; Extended Reality Tools for Virtual Restauration; Extended Reality and Artificial Intelligence in Digital Humanities; Extended Reality in Learning; and Extended Reality, Sense of Presence and Education of Behaviour.

Table of Contents

  1. Artificial Intelligence and Extended Reality

    1. Frontmatter

    2. X-NR: Towards An Extended Reality-Driven Human Evaluation Framework for Neural-Rendering

      Lorenzo Stacchio, Emanuele Balloni, Lucrezia Gorgoglione, Marina Paolanti, Emanuele Frontoni, Roberto Pierdicca
      Abstract
The joint usage of Extended Reality (XR) and Artificial Intelligence (AI) has enabled different Metaverse-related use cases. Such paradigms were recently adopted for immersive content creation, particularly considering Neural Rendering (NR) techniques to project scenes from the real world into the 3D realm. These methods are particularly beneficial in the field of Cultural Heritage (CH), where digitizing and visualizing cultural assets in 3D is crucial. However, current evaluation protocols lack a robust Human-In-The-Loop (HITL) integration of human judgments for assessing the quality of the generated 3D models, which could also support model optimization. To bridge this gap, we here introduce X-NR, a novel XR framework designed to evaluate and compare 3D reconstruction methodologies, including NR, in the context of CH. We contextualize and validate such a framework through case studies on cultural heritage sites in the Marche region (Italy), employing various data-capturing and 3D reconstruction methodologies. The study concludes with a validation of the framework by CH domain experts, underscoring its potential advantages over traditional 3D editing software.
    3. A Task-Interaction Framework to Monitor Mobile Learning Activities Based on Artificial Intelligence and Augmented Reality

      Marco Arrigo, Mariella Farella, Giovanni Fulantelli, Daniele Schicchi, Davide Taibi
      Abstract
The complexity behind the analysis of mobile learning activities has required the development of specifically designed frameworks. When students are involved in mobile learning experiences, they interact with the context in which the activities occur, with the content they have access to, and with peers and teachers. The wider adoption of generative artificial intelligence introduces new interactions that researchers have to look at when learning analytics techniques are applied to monitor learning patterns. The task-interaction framework proposed in this paper explores how AI-based tools affect student-content and student-context interactions during mobile learning activities, thus focusing on the interplay of Learning Analytics and Artificial Intelligence advances in the educational domain. A use case scenario that explores the framework’s application in a real educational context is also presented. Finally, we describe the architectural design of an environment that leverages the task-interaction framework to analyze enhanced mobile learning experiences in which structured content extracted from a Knowledge Graph is elaborated by a large language model to provide students with personalized content.
    4. Collaborative Intelligence and Hyperscanning: Exploring AI Application to Human-Robot Collaboration Through a Neuroscientific Approach

      Flavia Ciminaghi, Laura Angioletti, Katia Rovelli, Michela Balconi
      Abstract
      Cobots are robots designed to work with human operators in a shared workspace and on a shared task. Combining robots and human skills is one of the main advantages of human-robot collaboration (HRC) in industrial production. With the goal of moving toward an authentic collaboration, rather than a simple coexistence, cobots should be able to adapt to the physical and mental needs of the operator in a more natural and personalized way. Using neuroscientific measurements of human responses to HRC combined with artificial intelligence (AI) algorithms, cobots could be implemented with the ability to process and respond in real time to the psychophysiological state of the operator. Moreover, real-world scenarios must consider the presence of complex and multiple social interactions. In line with this perspective, the neuroscientific “hyperscanning” paradigm is particularly suited for the study of complex and naturalistic interactive dynamics and can be used to assess the neurophysiological activity of two or more agents interacting with each other, when a non-human agent, such as a cobot or an AI system, is introduced in the collaboration. This contribution describes a research project in early stages of development which aims to assess the effects of HRC and, more generally, of human factors, on operators’ mental and emotional state and to develop models of real-time adaptation of the cobot to the psychophysiological state of the operator.
    5. Integrating Virtual Reality and Artificial Intelligence in Agricultural Planning: Insights from the V.A.I.F.A.R.M. Application

      Iacopo Bernetti, Tommaso Borghini, Irene Capecchi
      Abstract
      The V.A.I.F.A.R.M. (Virtual and Artificial Intelligence for Farming and Agricultural Resource Management) app explores the integration of collaborative virtual reality (VR) with generative artificial intelligence (AI), specifically utilizing ChatGPT, to enhance educational approaches within agricultural management and planning. This study aims to investigate the educational outcomes associated with the combined use of VR and AI technologies, with a particular focus on their impact on critical thinking, problem-solving abilities, and collaborative learning among university students engaged in agricultural studies.
      By employing VR, the project creates a simulated agricultural environment where students are tasked with various management and planning activities, offering a practical application of theoretical knowledge. The addition of ChatGPT facilitates interactive, AI-mediated dialogues, challenging students to tackle complex agricultural problems through informed decision-making processes.
      The research anticipates findings that suggest an improvement in student engagement and a better grasp of complicated agricultural concepts, attributed to the immersive and interactive nature of the learning experience. Furthermore, it examines the role of VR and AI in cultivating essential soft skills critical for the agricultural sector. The study contributes to the understanding of how collaborative VR and generative AI can be effectively combined to advance educational practices in agriculture, aiming for a balanced evaluation of their potential benefits without overstating the outcomes.
    6. ARFood: Pioneering Nutrition Education for Generation Alpha Through Augmented Reality and AI-Driven Serious Gaming

      Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
      Abstract
      This paper introduces “ARFood,” a groundbreaking augmented reality (AR) and artificial intelligence (AI) application tailored to revolutionize nutrition education for Generation Alpha. ARFood immerses users in a virtual supermarket shopping experience, guided by two AI characters: NutriBot, a hip-hop nutritionist robot, and CyberFlora, a new-age sustainability expert robot. These characters offer feedback on food choices based on nutritional value and environmental sustainability, promoting healthy and eco-conscious eating habits among middle school students. The application leverages the engaging potential of AR and the personalized interaction capabilities of AI to deliver scientifically accurate, age-appropriate educational content. This study examines the development, implementation, and preliminary impact of ARFood, highlighting its effectiveness in enhancing nutrition education and its potential as a model for future educational technologies. By bridging the gap between traditional education methods and the digital nativity of Generation Alpha, ARFood represents a significant step forward in adapting educational content to the preferences and needs of today’s youth.
    7. Enhancing Presentation Skills: A Virtual Reality-Based Simulator with Integrated Generative AI for Dynamic Pitch Presentations and Interviews

      Meisam Taheri, Kevin Tan
      Abstract
Presenting before an audience presents challenges throughout preparation and delivery, necessitating tools to refine skills securely. Interviews mirror presentations, showcasing oneself to convey qualifications. Virtual environments offer safe spaces for trial and error, crucial for iterative practice without emotional distress. This research proposes a Virtual Reality-Based Dynamic Pitch Simulation with Integrated Generative AI to effectively enhance presentation skills. The simulation converts spoken words to text, then uses AI to generate relevant questions for practice. Benefits include realistic feedback and adaptability to user proficiency. Open-source language models evaluate content, coherence, and delivery, offering personalized challenges. This approach supplements learning, enhancing presentation skills effectively. Voice-to-text conversion and AI feedback create a potent pedagogical tool, fostering a prompt feedback loop vital for learning effectiveness. Challenges in simulation design must be addressed for robustness and efficacy. The study validates these concepts by proposing a real-time 3D dialogue simulator, emphasizing the importance of continual improvement in presentation skill development.
    8. Genetic Algorithm and VR for Assessing the Level of Expertise of Maintenance Operator

      Axel Foltyn, Christophe Guillet, Florence Danglade, Frédéric Merienne
      Abstract
The study aims to identify features for assessing the expertise level of a maintenance operator. A genetic algorithm is used to select the most relevant features and reduce their number. Starting from 30 candidate features, we demonstrate that only three features suffice to provide a good classification of operator level. Virtual reality was used to simulate maintenance operations, collect data, and validate our method for identifying the most relevant features.
    9. Personalising the Training Process with Adaptive Virtual Reality: A Proposed Framework, Challenges, and Opportunities

      Gadea Lucas-Pérez, José Miguel Ramírez-Sanz, Ana Serrano-Mamolar, Álvar Arnaiz-González, Andrés Bustillo
      Abstract
      This work presents a conceptual framework that integrates Artificial Intelligence (AI) into immersive Virtual Reality (iVR) training systems, aiming to enhance adaptive learning environments that dynamically respond to individual users’ physiological states. The framework uses real-time data acquisition from multiple sources, including physiological sensors, eye-tracking and user interactions, processed through AI algorithms to personalise the training experience. By adjusting the complexity and nature of training tasks in real time, the framework seeks to maintain an optimal balance between challenge and skill, fostering an immersive learning environment. This work details some methodologies for data acquisition, the preprocessing required to synchronise and standardise diverse data streams, and the AI training techniques essential for effective real-time adaptation. It also discusses logistical considerations of computational load management in adaptive systems. Future work could explore the scalability of these systems and their potential for self-adaptation, where models are continuously refined and updated in real-time based on incoming data during user interactions.
  2. Digital Twin

    1. Frontmatter

    2. XR-Based Digital Twin for Industry 5.0: A Usability and User Experience Evaluation

      Giovanni Grego, Federica Nenna, Luciano Gamberini
      Abstract
The advent of Industry 5.0, as described by the European Commission, heralds a paradigm shift towards human-centric values, sustainability, and resilience in manufacturing, superseding the preceding Industry 4.0. In this evolving landscape, the integration of eXtended Reality (XR) technologies presents a promising avenue for enhancing human-machine interaction through industrial Digital Twins (DTs). This study introduces and validates an innovative XR-based interface for programming industrial cobots, comparing its performance to a conventional teach pendant control system. Leveraging a cohort of Human-Computer Interaction and User Experience experts, the evaluation demonstrates significant advantages of the XR-based interface in terms of usability, acceptance, and user experience. Participants were more inclined to adopt the XR-based interface, perceiving improvements in task performance, particularly in speed, while maintaining comparable accuracy to the teach pendant. Qualitative feedback highlights the simplicity, fluency, efficiency, and ergonomic interaction design of the XR interface. These results inform potential enhancements to further optimize the usability and effectiveness of XR-based DT systems in industrial settings, reaffirming the pivotal role of human-centric approaches in shaping the future of manufacturing.
    3. Digital Twins: Innovation in Automated Systems Control Education

      Jessica S. Ortiz, Michael X. Armendáriz, Fanny P. Toalombo, Víctor H. Andaluz
      Abstract
This paper presents the development of a learning tool based on Digital Twins, designed to generate a virtual environment controlled by a PLC S7-1200. This tool allows engineering students to interact in an active and practical way with simulated industrial processes. The real-time communication between the controller and the work environment contributes significantly to decision making, allowing students to design, control and manipulate industrial processes with precision, as well as to respond to possible eventualities during the execution of such processes. To evaluate the usability of the tool, the System Usability Scale (SUS) was applied to a homogeneous group of 20 students, obtaining an average score of 86.5, which classifies the tool in the “good” range. This result suggests that the tool is perceived as interactive and immersive, creating an active and user-friendly work environment.
    4. Towards Concepts for Digital Twins in Higher Education

      Yevgeniya Daineko, Aigerim Seitnur, Dana Tsoy, Madina Ipalakova, Akkyz Mustafina, Miras Uali
      Abstract
      The implementation of digital twins in higher education has enormous potential and is becoming increasingly relevant. Specifically, digital twins represent a powerful tool that promotes the transformation of the educational process and unveils new opportunities for students and teachers. Enhancing educational accessibility, realistic simulations, resource savings, personalized learning, and research opportunities are just a few of the prospects for the application of digital twins in education.
      This article analyzes the challenges and prospects associated with the deployment of digital twins in the educational sphere. It explores technological innovations that enable the creation of accurate copies of real objects and processes in a virtual environment. The benefits of this approach are discussed, including enhancement of educational accessibility, improved practical experience, and enhanced student motivation. Examples of successful digital twin applications in various educational contexts are considered, as well as challenges and potential solutions to overcome them. A digital twin of the International Information Technology University (Almaty, Kazakhstan) has been developed, allowing for education in a virtual space. The article concludes by summarizing the findings and draws conclusions about the potential of digital twins to transform education.
    5. An Evaluation Method for Digital Twin Development Platforms

      José Monteiro, João Barata
      Abstract
The Digital Twin (DT) offers an integrated solution for replicating physical (human and non-human) systems with monitoring capabilities and intelligent support for decision-making. The popularity of DTs in academia and industry is growing, and different commercial and open-source development platforms are now available. However, there is a lack of detailed platform benchmarking studies and selection guidelines. This paper (1) identifies a portfolio of DT development platforms (DTDP) and (2) suggests a systematic method to evaluate them. Preliminary results of the method adoption are presented for a use case of a dry port DT deployment. This research will assist companies with their DTDP investments, presenting an assessment example for more complex DT deployment settings.
  3. Backmatter

Title
Extended Reality
Editors
Lucio Tommaso De Paolis
Pasquale Arpaia
Marco Sacco
Copyright Year
2024
Electronic ISBN
978-3-031-71707-9
Print ISBN
978-3-031-71706-2
DOI
https://doi.org/10.1007/978-3-031-71707-9

