
2020 | Book

Augmented Reality, Virtual Reality, and Computer Graphics

7th International Conference, AVR 2020, Lecce, Italy, September 7–10, 2020, Proceedings, Part I


About this book

The 2-volume set LNCS 12242 and 12243 constitutes the refereed proceedings of the 7th International Conference on Augmented Reality, Virtual Reality, and Computer Graphics, AVR 2020, held in Lecce, Italy, in September 2020.*

The 45 full papers and 14 short papers presented were carefully reviewed and selected from 99 submissions. The papers discuss key issues, approaches, ideas, open problems, innovative applications and trends in virtual reality, augmented reality, mixed reality, 3D reconstruction visualization, and applications in the areas of cultural heritage, medicine, education, and industry.

* The conference was held virtually due to the COVID-19 pandemic.

Table of Contents

Frontmatter

Virtual Reality

Frontmatter
How to Reduce the Effort: Comfortable Watching Techniques for Cinematic Virtual Reality

When watching omnidirectional movies with head-mounted displays, viewers can freely choose the viewing direction, and thus the visible section of the movie. However, looking around all the time can be exhausting, and having content spread over the full 360° area can cause the fear of missing something. To make watching more comfortable, we implemented new methods and conducted three experiments: (1) exploring methods to inspect the full omnidirectional area by moving the head, but not the whole body; (2) comparing head, body, and movie rotation; and (3) studying how reducing the 360° area influences the viewing experience. For (3), we compared user behavior when watching a full 360°, a 225°, and a 180° movie via HMD. The investigated techniques for inspecting the full 360° area in a fixed sitting position (experiments 1 and 2) perform well and could replace the often-used swivel chair. When reducing the 360° area (experiment 3), 225° movies scored better than 180° movies.

Sylvia Rothe, Lang Zhao, Arne Fahrenwalde, Heinrich Hußmann
Asymmetrical Multiplayer Versus Single Player: Effects on Game Experience in a Virtual Reality Edutainment Game

Gamification of learning material is becoming popular within the education field, and the possibilities of designing edutainment games are being explored. This project compares a single-player and a two-player game experience in a collaborative Virtual Reality (VR) edutainment game. The two versions of the game contained exactly the same information; in the collaborative version the information was divided between the two players in an asymmetrical format, with one player outside of VR. The evaluation of the two versions compared only the experience of the participants in VR, using an independent-measures design. The results showed that the two-player version scored significantly higher than the single-player version on questions related to positive game experience. Furthermore, participants using the two-player version rated significantly lower on questions related to annoyance. In the setting of an edutainment game, the results suggest that incorporating a collaborative aspect through asymmetrical gameplay in VR increases enjoyment of the experience.

Anders Hansen, Kirstine Bundgaard Larsen, Helene Høgh Nielsen, Miroslav Kalinov Sokolov, Martin Kraus
Procedural Content Generation via Machine Learning in 2D Indoor Scene

The article proposes a method of combining multiple deep feedforward neural networks to generate a distribution of objects in a 2D scene. The main concepts of machine learning, neural networks, and procedural content generation relevant to this goal are presented here. Additionally, these concepts are put into the context of computer graphics and used in a practical example of generating an indoor 2D scene. A method of vectorizing input datasets for training feedforward neural networks is proposed. Scene generation is based on the sequential placement of objects of different classes into the free space defining a room of a certain shape. Several evaluation methods are proposed for testing the correctness of the generation.
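As an aside, the pipeline sketched in the abstract (vectorize candidate positions, score them with a feedforward network, place objects sequentially into free space) can be illustrated in a few lines. This is our reconstruction, not the authors' code: the feature set, network size, and random (untrained) weights are all illustrative assumptions.

```python
import math, random

random.seed(1)

# Toy 2D room grid: 0 = free cell, 1 = occupied.
W, H = 6, 4
room = [[0] * W for _ in range(H)]

def features(x, y):
    """Vectorised input for a candidate position: normalised
    coordinates and distance to the room centre (illustrative)."""
    cx, cy = (W - 1) / 2, (H - 1) / 2
    return [x / W, y / H, math.hypot(x - cx, y - cy)]

def score(f, w1, w2):
    """One hidden layer with tanh activation, linear output."""
    hidden = [math.tanh(sum(wi * fi for wi, fi in zip(row, f))) for row in w1]
    return sum(wi * hi for wi, hi in zip(w2, hidden))

# Untrained random weights; in the paper one network per object
# class would be trained on example layouts.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w2 = [random.uniform(-1, 1) for _ in range(4)]

def place_object():
    """Sequential placement: score every free cell, occupy the best."""
    free = [(x, y) for y in range(H) for x in range(W) if room[y][x] == 0]
    x, y = max(free, key=lambda p: score(features(*p), w1, w2))
    room[y][x] = 1
    return x, y

placed = [place_object() for _ in range(3)]  # place three objects
assert len(set(placed)) == 3                 # all land in distinct free cells
```

Because each placement marks its cell occupied before the next object is scored, the generated objects can never overlap, which is the property the sequential scheme buys over one-shot generation.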

Bruno Ježek, Adam Ouhrabka, Antonin Slabý
Keeping It Real!
Investigating Presence in Asymmetric Virtual Reality

In this paper we discuss the necessity to preserve the sense of presence in virtual reality (VR). A high sense of presence has proven advantages but is also very fragile to interruptions. We outline scenarios where interaction and communication between persons inside and outside virtual environments are necessary and assess challenges for maintaining the immersed user’s sense of presence in such cases. We also use existing literature to outline an experiment that allows us to try out different methods of collaboration between immersed users and external facilitators in order to discern their effect on presence.

Mika P. Nieminen, Markus Kirjonen
GazeRoomLock: Using Gaze and Head-Pose to Improve the Usability and Observation Resistance of 3D Passwords in Virtual Reality

Authentication has become an important component of Immersive Virtual Reality (IVR) applications, such as virtual shopping stores, social networks, and games. Recent work showed that, compared to traditional graphical and alphanumeric passwords, a more promising form of password for IVR is the 3D password. This work evaluates four multimodal techniques for entering 3D passwords in IVR that consist of multiple virtual objects selected in succession. Namely, we compare eye gaze and head pose for pointing, and dwell time and tactile input for selection. A comparison of (a) usability in terms of entry time, error rate, and memorability, and (b) resistance to real-world and offline observations reveals that multimodal authentication in IVR by pointing at targets using gaze and selecting them using a handheld controller significantly improves usability and security compared to the other methods and to prior work. We discuss how the choice of pointing and selection methods impacts the usability and security of 3D passwords in IVR.

Ceenu George, Daniel Buschek, Andrea Ngao, Mohamed Khamis
Alert Characterization by Non-expert Users in a Cybersecurity Virtual Environment: A Usability Study

Although cybersecurity is a domain where data analysis and training are considered of the highest importance, few virtual environments for cybersecurity have been specifically developed, even though such environments are used efficiently in other domains to tackle these issues. Taking into account cyber analysts' practices and tasks, we have proposed the 3D Cyber Common Operational Picture model (3D CyberCOP), which aims at mediating analysts' activities in a Collaborative Virtual Environment (CVE) where users can perform alert analysis scenarios. In this article, we present a usability study performed with non-expert users. We proposed three virtual environments (a graph-based one, an office-based one, and a coupling of the two) in which users had to perform a simplified alert analysis scenario based on the WannaCry ransomware. In these environments, users must switch between three views (alert, cyber, and physical), which all contain different kinds of data sources. These data have to be used to perform the investigations and to determine whether alerts are due to malicious activities or caused by false positives. We recruited 30 users with no prior knowledge of cybersecurity. They performed very well at the cybersecurity task and managed to interact and navigate easily. SUS usability scores were above 70 for all three environments, and users showed a preference for the coupled environment, which was considered more practical and useful.
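The SUS scores reported in this abstract follow the standard System Usability Scale scoring procedure (Brooke, 1996); as a side note, that computation is simple enough to sketch directly (this code is not from the paper):

```python
# Standard SUS scoring: 10 items rated 1-5. Odd-numbered items
# contribute (rating - 1), even-numbered items contribute (5 - rating);
# the sum is multiplied by 2.5 to yield a 0-100 score.
def sus_score(ratings):
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS needs ten ratings in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(ratings))  # 0-based i: even i = odd item
    return total * 2.5

# Example: a fairly positive response pattern
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # → 80.0
```

A score above 70, as measured for all three CyberCOP environments, is conventionally read as "acceptable" usability.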

Alexandre Kabil, Thierry Duval, Nora Cuppens
A Driving Simulator Study Exploring the Effect of Different Mental Models on ADAS System Effectiveness

This work investigated the effect of mental models on the effectiveness of an advanced driver assistance system (ADAS). The system tested was a lateral control ADAS, which informed drivers whether the vehicle was correctly positioned inside the lane, using two visual stimuli and one auditory stimulus. Three driving simulator experiments were performed, involving three separate groups of subjects who received different initial exposures to the technology. In Experiment 0, subjects were not exposed to the ADAS, in order to verify that learning effects alone could not explain the results. In Experiment A, subjects were not instructed on the ADAS functionalities and had to learn on their own; in Experiment B, they were directly instructed on the functionalities by reading an information booklet. In all experiments, drivers performed multiple driving sessions. The mean absolute lateral position (LP) and the standard deviation of lateral position (SDLP) of each driver were considered the main dependent variables for measuring the effectiveness of the ADAS. Findings from this work showed that the initial mental model had an impact on ADAS effectiveness, producing significantly different results, with those who read the information booklet improving their lateral control more, and faster.
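The two dependent variables named here, LP and SDLP, are standard driving-research measures computed from the vehicle's per-sample lateral offset; a minimal stdlib sketch with a hypothetical trace (not the study's data):

```python
import statistics

def lateral_metrics(lateral_positions):
    """Mean absolute lateral position (LP) and standard deviation of
    lateral position (SDLP), from per-sample offsets (metres) of the
    vehicle centre relative to the lane centre."""
    lp = statistics.fmean(abs(x) for x in lateral_positions)
    sdlp = statistics.stdev(lateral_positions)  # sample standard deviation
    return lp, sdlp

# Hypothetical trace: small oscillation around the lane centre
lp, sdlp = lateral_metrics([0.10, -0.05, 0.20, -0.15, 0.05, 0.00])
```

Lower SDLP indicates steadier lane keeping, which is why the booklet group's faster improvement shows up directly in this metric.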

Riccardo Rossi, Massimiliano Gastaldi, Francesco Biondi, Federico Orsini, Giulia De Cet, Claudio Mulatti
Virtual Reality vs Pancake Environments: A Comparison of Interaction on Immersive and Traditional Screens

Virtual reality environments provide an immersive experience for the user. Since humans see the real world in 3D, being placed in a virtual environment allows the brain to perceive the virtual world as a real environment. This paper examines the contrasts between two different user interfaces by presenting test subjects with the same 3D environment through a traditional flat screen ("pancake") and an immersive virtual reality (VR) system. The participants (n = 31) were computer literate and familiar with computer-generated virtual worlds. We recorded each user's interactions while they undertook a short supervised play session with both hardware options to gather objective data, and used a questionnaire to collect subjective data, in order to gain an understanding of their interaction in a virtual world. The information provided an opportunity to understand how we can influence future interface implementations used in the technology. Analysis of the data found that people are open to using VR to explore a virtual space with some unconventional interaction abilities, such as using the whole body to interact. Because modern VR is a young platform, very few best-practice conventions are known in this space compared to the more established flat-screen equivalent.

Raymond Holder, Mark Carey, Paul Keir
VR Interface for Designing Multi-view-Camera Layout in a Large-Scale Space

Sports broadcasting has drawn attention to free-viewpoint video, which integrates multi-viewpoint images inside a computer and reproduces the appearance observed from an arbitrary viewpoint. In multi-view video shooting, it is necessary to arrange multiple cameras to surround the target space. In a large-scale space such as a soccer stadium, it is necessary to determine where the cameras can be installed and to understand what kind of multi-view video can be shot. However, it is difficult to obtain such information in advance, so "location hunting" is usually needed. This paper presents a VR interface for supporting the preliminary planning of a multi-view camera arrangement in a large-scale space. The interface generates the multi-view camera layout on a 3D model from the shooting requirements for multi-view capture and the viewing requirements for observing the generated video. By using our interface, the labor and time required to determine the layout of multi-view cameras can be expected to drop drastically.

Naoto Matsubara, Hidehiko Shishido, Itaru Kitahara
Collaboration in Virtual and Augmented Reality: A Systematic Overview

This paper offers a systematic overview of collaboration in virtual and augmented reality, including an assessment of advantages and challenges unique to collaborating in these mediums. In an attempt to highlight the current landscape of augmented and virtual reality collaboration (AR and VR, respectively), our selected research is biased towards more recent papers (within the last 5 years), but older work has also been included when particularly relevant. Our findings identify a number of potentially under-explored collaboration types, such as asynchronous collaboration and collaboration that combines AR and VR. We finally provide our key takeaways, including overall trends and opportunities for further research.

Catlin Pidel, Philipp Ackermann
A User Experience Questionnaire for VR Locomotion: Formulation and Preliminary Evaluation

When evaluating virtual reality (VR) locomotion techniques, the user experience metrics that are used are usually either focused on specific experiential dimensions or based on non-standardised, subjective reporting. The field would benefit from a standard questionnaire for evaluating the general user experience of VR locomotion techniques. This paper presents a synthesised user experience questionnaire for VR locomotion, called the VR Locomotion Experience Questionnaire (VRLEQ). It comprises the Game Experience Questionnaire (GEQ) and the System Usability Scale (SUS) survey. The results of the VRLEQ's application in a comparative, empirical study (n = 26) of three prevalent VR locomotion techniques are described. The questionnaire's content validity is assessed at a preliminary level based on the correspondence between the questionnaire items and the qualitative results from the study's semi-structured interviews. The scores of the VRLEQ's experiential dimensions corresponded well with the semi-structured interview remarks and effectively captured the experiential qualities of each VR locomotion technique. The VRLEQ results facilitated and quantified comparisons between the techniques and enabled an understanding of how the techniques performed in relation to each other.

Costas Boletsis
Virtual Fitness Trail: A Complete Program for Elderlies to Perform Physical Activity at Home

This paper presents a Virtual Reality (VR) exergame, Virtual Fitness Trail (VFT), designed to encourage the elderly to exercise regularly in an engaging and safe way while staying at home. VFT provides a highly immersive physical activity experience in first-person perspective and has been developed to run on the Oculus Quest head-mounted display. The exergame proposes four activities involving the main muscles of the legs and arms, as well as training balance and reflexes: monkey bars, side raises, basketball shots, and slalom between beams. Each activity has four difficulty levels and has been designed to minimize the perception of self-motion so as to reduce the onset of cybersickness. The user's performance is saved in a .json file that can be shared with the caregiver via email at the end of the exergame session. The application is a prototype and needs to be tested and validated before being proposed for autonomous physical activity at home.

Marta Mondellini, Marco Sacco, Luca Greci
Exploring Players’ Curiosity-Driven Behaviour in Unknown Videogame Environments

Curiosity is a fundamental trait of human nature, and as such it has been studied and exploited in many aspects of game design. However, curiosity is not a static trigger that can simply be activated, and game design needs to be carefully paired with the current state of the game flow to produce significant reactions. In this paper we present the preliminary results of an experiment aimed at understanding how different factors, such as perceived narrative, unknown game mechanics, and non-standard controller mapping, can influence the evolution of players' behaviour throughout a game session. Data was gathered remotely through a puzzle game we developed and released for free on the internet, with no description of a potential narrative provided before gameplay. Players who downloaded the game did so of their own free will and played the same way they would with any other game. Results show that initial curiosity towards both a static and a dynamic environment is slowly overcome by the sense of challenge, and that interactions initially performed with focus lose accuracy as players' attention shifts towards the core game mechanics.

Riccardo Galdieri, Mata Haggis-Burridge, Thomas Buijtenweg, Marcello Carrozzino
Considering User Experience Parameters in the Evaluation of VR Serious Games

Serious Games for Virtual Reality (SG-VR) are still a new subject that needs to be explored. Achieving optimal fun and learning results depends on applying the most suitable metrics. Virtual reality environments offer great capabilities but at the same time make it difficult to record the User Experience (UX) in order to improve it. Moreover, the continuous evolution of virtual reality technologies and video game industry trends constantly changes these metrics. This paper studies the Mechanics, Dynamics and Aesthetics (MDA) framework and User Experience metrics in order to develop new ones for SG-VR. These new parameters focus on the intrinsic motivations players need in order to engage with the game. However, the development team's budget must be taken into account, since it limits items and interactions while the learning goals still have to be met. The new VR metrics cover (1) UX features: the chosen VR headsets, training tutorials to learn the controls, and interactive adaptations to avoid VR inconveniences; and (2) MDA features: exclusive VR aesthetic elements and their interactions.

Kim Martinez, M. Isabel Menéndez-Menéndez, Andres Bustillo
Virtual Reality Technologies as a Tool for Development of Physics Learning Educational Complex

The paper describes a project on physics learning. It was implemented using the Unity game engine, the Leap Motion package to provide control within the laboratory works, and the C# programming language to define the logic between objects of the app. A survey on the efficiency of similar projects and applications is also presented; it was conducted among high school students. Observations on the use of virtual reality technology are also given. The main purpose of the article is to demonstrate and evaluate the use of the application in subject learning and its efficiency. The paper also raises questions about the relevance and modernity of educational tools.

Yevgeniya Daineko, Madina Ipalakova, Dana Tsoy, Aigerim Seitnur, Daulet Zhenisov, Zhiger Bolatov
Evaluating the Effect of Reinforcement Haptics on Motor Learning and Cognitive Workload in Driver Training

Haptic technologies have the capacity to enhance motor learning, potentially improving the safety and quality of operating performance in a variety of applications, yet there is limited research evaluating the implementation of these devices in driver training environments. A driving simulator and training scenario were developed to assess the quality of motor learning produced with wrist-attached vibrotactile haptic motors providing additional reinforcement feedback. User studies were conducted with 36 participants split into two groups based on feedback modality. Throughout the simulation, vehicle interactions with the course were recorded, enabling comparisons of pre- and post-training performance between the groups to evaluate short-term retention of the steering motor skill. Statistically significant differences were found between the two groups for vehicle position safety violations (U = 78.50, P = 0.008), where the visual-haptic group improved significantly more than the visual group. The Raw NASA-TLX (RTLX) was completed by participants to examine the cognitive effect of the additional modality; the visual-haptic group reported greater levels of workload (U = 90.50, P = 0.039). In conclusion, reinforcement vibrotactile haptics can enhance short-term retention of motor learning with a positive effect on the safety and quality of post-training behaviour, likely as a result of increased demand and stimulation encouraging the adaptation of sensorimotor transformations.
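The U statistics quoted above come from the Mann-Whitney U test, the standard nonparametric comparison of two independent groups. As an aside, the statistic itself is easy to compute by hand; the scores below are hypothetical, not the study's data (for real analyses, a library routine such as scipy.stats.mannwhitneyu also supplies the p-value):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x: each pair with
    x_i > y_j counts 1, each tie counts 0.5."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical improvement scores for the two feedback groups
visual        = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
visual_haptic = [5, 4, 6, 5, 7, 4, 6, 5, 4, 6]

u1 = mann_whitney_u(visual, visual_haptic)
u2 = mann_whitney_u(visual_haptic, visual)
assert u1 + u2 == len(visual) * len(visual_haptic)  # identity: U1 + U2 = n1*n2
print(u1, u2)  # the smaller U is compared against critical values
```

A very small U for one group, as in this toy example, indicates that its scores almost never exceed the other group's, mirroring the significant group difference the study reports.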

Thomas G. Simpson, Karen Rafferty

Augmented Reality

Frontmatter
Using Augmented Reality to Train Flow Patterns for Pilot Students - An Explorative Study

Today, just as in the early days of flying, much emphasis is put on the pilot student’s flight training before flying a real commercial aircraft. In the early stages of a pilot student’s education, they must, for example, learn different operating procedures known as flow patterns using very basic tools, such as exhaustive manuals and a so-called paper tiger. In this paper, we present a first design of a virtual and interactive paper tiger using augmented reality (AR), and perform an evaluation of the developed prototype. We evaluated the prototype on twenty-seven pilot students at the Lund University School of Aviation (LUSA), to explore the possibilities and technical advantages that AR can offer, in particular the procedure that is performed before takeoff. The prototype got positive results on perceived workload, and in remembering the flow pattern. The main contribution of this paper is to elucidate knowledge about the value of using AR for training pilot students.

Günter Alce, Karl-Johan Klang, Daniel Andersson, Stefan Nyström, Mattias Wallergård, Diederick C. Niehorster
Scalable Integration of Image and Face Based Augmented Reality

In this paper we present a scalable architecture that integrates image-based augmented reality (AR) with face recognition and augmentation over a single camera video stream. To achieve the required real-time performance and ensure a proper level of scalability, the proposed solution makes use of two different approaches. First, we identify that the main bottleneck of the integrated process is the feature descriptor matching step. Taking into account the particularities of this task in the context of AR, we compare several well-known Approximate Nearest Neighbour search algorithms. After an empirical evaluation of several performance metrics, we conclude that HNSW is the best candidate. The second approach consists of delegating other demanding tasks, such as face descriptor computation, to asynchronous processes, taking advantage of multi-core processors.
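The bottleneck identified here, descriptor matching, is in the naïve case a linear scan over the whole descriptor database per query; approximate structures such as HNSW replace that scan with a layered graph traversal. For contrast, a stdlib-only brute-force baseline over hypothetical descriptors (not the paper's pipeline):

```python
import math, random

def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def brute_force_nn(query, database):
    """Exact nearest neighbour by linear scan: O(N * d) per query.
    This per-frame cost is what motivates ANN indices like HNSW,
    which trade exactness for sublinear search."""
    return min(range(len(database)), key=lambda i: l2(query, database[i]))

random.seed(0)
# 500 random 32-dimensional "descriptors" standing in for real features
db = [[random.random() for _ in range(32)] for _ in range(500)]
q = db[123][:]                       # query identical to a stored descriptor
assert brute_force_nn(q, db) == 123  # the exact scan must recover it
```

With real AR workloads (thousands of descriptors matched per frame), this linear scan dominates frame time, which is why the authors' choice of ANN index is the decisive scalability factor.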

Nahuel A. Mangiarua, Jorge S. Ierache, María J. Abásolo
Usability of an Input Modality for AR

The ability to overlay useful information into the physical world has made augmented reality (AR) a popular area of research within industry for maintenance and assembly tasks. However, the current input modalities for state-of-the-art technologies such as the Microsoft HoloLens have been found to be inadequate within the environment. As part of AugmenTech, an AR guidance system for maintenance, we have developed a tactile input module (TIM) with a focus on ease of use in the field of operation. The TIM is formed by 3D printed parts, off-the-shelf electronic components and an ESP-8266 microcontroller. A within-subjects controlled experiment was conducted to evaluate the usability and performance of the module against existing HoloLens input modalities and the Avatar VR glove from NeuroDigital. A System Usability Scale (SUS) score of 81.75 and low error count demonstrated the TIM’s suitability for the factory environment.

Daniel Brice, Karen Rafferty, Seán McLoone
State of the Art of Non-vision-Based Localization Technologies for AR in Facility Management

Augmented reality (AR) applications for indoor use mostly rely on vision-based localization systems. However, even with AI-based algorithms, the achievable accuracies are quite low. In the field of facility management, an important requirement is high localization accuracy, so that information, warnings, or instructions can be displayed at the correct position when necessary. Simple vision-based solutions, like QR codes, are widely used; however, they require considerable installation effort, and the advantages of using AR are limited. Thus, a state-of-the-art review of non-vision-based indoor localization technologies was carried out, together with an evaluation of their usability for augmented reality applications. A scenario for the application of AR in the facility management environment is described based on the review results. For use cases requiring high accuracy, tracking systems such as infrared-based camera systems are a preferable solution. Ultrasonic systems could also be a cheap option for medium-accuracy tracking. For simple room-level localization, Bluetooth beacons and other hybrid indoor positioning technologies are preferred.

Dietmar Siegele, Umberto Di Staso, Marco Piovano, Carmen Marcher, Dominik T. Matt
AI4AR: An AI-Based Mobile Application for the Automatic Generation of AR Contents

Augmented reality (AR) is the process of using technology to superimpose images, text, or sounds on top of what a person can already see. Art galleries and museums have started to develop AR applications to increase engagement and provide an entirely new kind of exploration experience. However, the creation of content is a very time-consuming process, requiring ad-hoc development for each painting to be augmented. In fact, creating an AR experience for any painting requires choosing the points of interest, creating digital content, and then developing the application. While this is affordable for the great masterpieces of an art gallery, it would be impracticable for an entire collection. In this context, the idea of this paper is to develop AR applications based on Artificial Intelligence. In particular, automatic captioning techniques are the core of the implementation of an AR application that improves the user experience in front of a painting or an artwork in general. The study has demonstrated feasibility through a proof-of-concept application implemented for hand-held devices, and it adds to the body of knowledge on mobile AR applications, as this approach has not been applied in this field before.

Roberto Pierdicca, Marina Paolanti, Emanuele Frontoni, Lorenzo Baraldi
AR-Based Visual Aids for sUAS Operations in Security Sensitive Areas

The use of UAVs has recently been proposed for services in civil airports and other sensitive areas. Besides specific regulations from authorities, the situation awareness of the UAV operator is a key aspect for a safe coexistence between manned and unmanned traffic. The operator must be provided in real-time with contextually relevant information, in order to take the proper actions promptly based on the notified contingency. Augmented reality can be adopted to superimpose such additional information on the real-world scene. After the definition of an architecture for the integration of drone operations in the airspace, in this paper the interface and the layout of an augmented reality application for the situation awareness of the pilot are designed and discussed.

Giulio Avanzini, Valerio De Luca, Claudio Pascarelli
Global-Map-Registered Local Visual Odometry Using On-the-Fly Pose Graph Updates

Real-time camera pose estimation is one of the indispensable technologies for Augmented Reality (AR). While a large body of work in Visual Odometry (VO) has been proposed for AR, practical challenges such as scale ambiguities and accumulative errors still remain especially when we apply VO to large-scale scenes due to limited hardware and resources. We propose a camera pose registration method, where a local VO is consecutively optimized with respect to a large-scale scene map on the fly. This framework enables the scale estimation between a VO map and a scene map and reduces accumulative errors by finding corresponding locations in the map to the current frame and by on-the-fly pose graph optimization. The results using public datasets demonstrated that our approach reduces the accumulative errors of naïve VO.
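One way to resolve the scale ambiguity mentioned in this abstract is to compare corresponding points between the local VO map and the large-scale scene map: after centring both point sets, the ratio of their spreads gives the relative scale. This is the scale term of the classic Umeyama similarity alignment, sketched below with hypothetical correspondences (not the authors' implementation):

```python
import math

def relative_scale(vo_pts, map_pts):
    """Scale factor s such that s * (VO map) matches the scene map,
    estimated from corresponding 3D points as the ratio of RMS
    spreads about the respective centroids (the scale term of
    Umeyama's similarity alignment; rotation/translation omitted)."""
    def spread(pts):
        c = [sum(p[i] for p in pts) / len(pts) for i in range(3)]
        return math.sqrt(sum(sum((p[i] - c[i]) ** 2 for i in range(3))
                             for p in pts) / len(pts))
    return spread(map_pts) / spread(vo_pts)

# Hypothetical correspondences: the scene map is the VO map scaled by 2
# and translated; spread about the centroid is invariant to translation
# and rotation, so only the scale difference remains.
vo  = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
scn = [(5, 5, 5), (7, 5, 5), (5, 7, 5), (5, 5, 7)]
print(relative_scale(vo, scn))  # → 2.0
```

In a full pipeline this scale estimate would seed the pose graph optimization the paper describes, which then refines scale and pose jointly as new frame-to-map correspondences arrive.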

Masahiro Yamaguchi, Shohei Mori, Hideo Saito, Shoji Yachida, Takashi Shibata

Mixed Reality

Frontmatter
Combining HoloLens and Leap-Motion for Free Hand-Based 3D Interaction in MR Environments

The ability to interact with virtual objects using gestures would allow users to improve their experience in Mixed Reality (MR) environments, especially when they use AR headsets. Today, MR head-mounted displays like the HoloLens integrate hand gesture based interaction allowing users to take actions in MR environments. However, the proposed interactions remain limited. In this paper, we propose to combine a Leap Motion Controller (LMC) with a HoloLens in order to improve gesture interaction with virtual objects. Two main issues are presented: an interactive calibration procedure for the coupled HoloLens-LMC device and an intuitive hand-based interaction approach using LMC data in the HoloLens environment. A set of first experiments was carried out to evaluate the accuracy and the usability of the proposed approach.

Fakhreddine Ababsa, Junhui He, Jean-Remy Chardonnet
Mixed Reality Annotations System for Museum Space Based on the UWB Positioning and Mobile Device

In this research, the authors designed a mixed reality annotation system based on UWB positioning and mobile devices, a low-cost innovative solution especially suited to large indoor environments. The design targets the problems of low museum funding in many parts of the developing world and of large visitor flows during holidays. The position of the visitor is obtained through a UWB antenna tag attached to the smartphone. Gyroscope data and the focal length are also used to keep the virtual and real cameras consistent and to calibrate the virtual space. The system ensures that when there is a large flow of people, visitors can watch multimedia annotations of exhibits on their phones while queuing far away from the exhibits. The annotation types are mainly video, 3D models, and audio. In China, many museums have a science education function, and a rich form of annotation can enhance it. Finally, we compare and analyze the localization advantage of this system (solving the problems of congestion and shortage of funds) and recruited 10 volunteers to try the system. We find that the system achieves exact matching when visitors are 0.75-1 m away from the exhibits, while when visitors are more than 3 m away it offers advantages that other systems cannot, such as playing and watching videos when visitors cannot get close to the exhibits due to crowding. This system provides a new solution for the application of MR in large indoor areas and renews the way museums exhibit.

YanXiang Zhang, Yutong Zi
A Glasses-Based Holographic Tabletop for Collaborative Monitoring of Aerial Missions

This paper describes the development of a HoloLens application for experimenting with the ability to collaboratively monitor flight tests via a shared holographic-like tabletop. The situational awareness arising from the high sense of presence afforded by the glasses-based holographic representation of the flying scenario leads to effective decision making. Moreover, the optical see-through MR approach to on-site collaboration makes inter-person communication as easy as in reality. The shared holographic representation, virtually recreated in the space between participants, promises a visually coherent basis for a "look here" collaboration style. A flexible architecture is proposed for this application, separating the core app from the data feeds for a slimmer development and deployment process, and the technologies and data sources to be used for the realization are reviewed. Finally, the paper reports the results of experimental use of the system collected in a couple of flight tests held in our Center, and allows readers to draw a perspective pathway to future MR developments.

Bogdan Sikorski, Paolo Leoncini, Carlo Luongo

3D Reconstruction and Visualization

Frontmatter
A Comparative Study of the Influence of the Screen Size of Mobile Devices on the Experience Effect of 3D Content in the Form of AR/VR Technology

Differences in the screen size of mobile devices can affect users’ experience of products, but very little research has confirmed that such differences affect users’ experience of VR/AR content, or even the choice of VR/AR content form by 3D content creators. The authors used the AR and VR forms of the same 3D content to evaluate participants’ convenience, intuitive feedback, and comfort during VR/AR interactive experiences on mobile devices with different screen sizes, in order to explore whether screen size affects the experience of 3D content in AR/VR form. The research shows that participants tend to use larger-screen mobile devices for interactive experiences, but their willingness does not simply grow with screen size: we found that participants prefer mobile devices with a moderate screen size for VR/AR interaction.

YanXiang Zhang, Zhenxing Zhang
Photogrammetric 3D Reconstruction of Small Objects for a Real-Time Fruition

Among the techniques for digitalization and 3D modeling of real objects, photogrammetry is assuming an increasing importance due to easy procedures and low costs of hardware and software equipment. Thanks to the advances of the last years in computer vision, photogrammetry software can reconstruct the geometric 3D shape of an object from a series of pictures taken from different viewpoints. In particular, close-range photogrammetry for the reconstruction of small objects allows performing image acquisition around the target object almost automatically. In this paper we present a brief survey of the hardware setup, algorithms and software tools for photogrammetric acquisition and reconstruction applied to small objects, aimed at achieving a good photorealism level without an excessive computational load.

Lucio Tommaso De Paolis, Valerio De Luca, Carola Gatto, Giovanni D’Errico, Giovanna Ilenia Paladini
Adaptive Detection of Single-Color Marker with WebGL

This paper presents a method for real-time single-color marker detection. The algorithm is based on our previous work, and the goal of this paper is to investigate and test possible enhancements, namely color weighting, hue calculation, and modified versions of the dilation and erosion operations. The paper also explains a solution for dynamic color selection, which makes the system more robust to varying lighting conditions by detecting the color hue, saturation, and value under the current conditions. The designed methods are implemented in WebGL, which allows running the developed application on any platform with any operating system, provided WebGL is supported by the web browser. Testing of all described and implemented improvements was conducted, and it revealed that hue weighting has a good effect on the resulting detection. On the other hand, dynamic thresholding of the saturation and value components of the HSV color model does not give good results; therefore, testing had to be done to find the right thresholds for these components. The erosion operation improves the detection significantly, while dilation has almost no impact on the result.

Milan Košťák, Bruno Ježek, Antonín Slabý
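The core of such single-color detection can be illustrated with a small sketch. The following Python code is a simplified CPU stand-in for the paper’s WebGL shaders: the function names `hsv_mask` and `erode`, the tolerance values, and the pure-NumPy morphology are our own illustrative assumptions, not the authors’ implementation. It thresholds pixels by circular hue distance with fixed saturation/value cut-offs, then erodes the binary mask:

```python
import numpy as np

def hsv_mask(rgb, target_hue, hue_tol=0.05, s_min=0.3, v_min=0.2):
    """Detect pixels whose hue lies within hue_tol of target_hue.

    rgb: float array in [0, 1] of shape (H, W, 3); hues are in [0, 1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)
    c = v - rgb.min(axis=-1)
    s = np.where(v > 0, c / np.maximum(v, 1e-9), 0.0)
    cc = np.where(c > 1e-9, c, 1.0)   # guard against division by zero
    hue = np.zeros_like(v)
    nz = c > 1e-9
    idx = nz & (r == v)
    hue[idx] = (((g - b) / cc) % 6)[idx]
    idx = nz & (g == v) & (r != v)
    hue[idx] = ((b - r) / cc + 2)[idx]
    idx = nz & (b == v) & (r != v) & (g != v)
    hue[idx] = ((r - g) / cc + 4)[idx]
    hue = hue / 6.0
    d = np.abs(hue - target_hue)
    d = np.minimum(d, 1.0 - d)        # hue distance is circular
    return (d < hue_tol) & (s >= s_min) & (v >= v_min)

def erode(mask, iterations=1):
    """Binary erosion with a 3x3 structuring element: a pixel survives
    only if all nine pixels in its neighborhood are set."""
    m = mask.astype(bool)
    h, w = m.shape
    for _ in range(iterations):
        p = np.pad(m, 1, mode="constant", constant_values=False)
        out = np.ones_like(m)
        for dy in range(3):
            for dx in range(3):
                out &= p[dy:dy + h, dx:dx + w]
        m = out
    return m
```

In the paper this logic runs per-fragment on the GPU; a NumPy version is enough to show why erosion removes isolated false positives while dilation would mainly re-grow the already-detected blob.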
Users’ Adaptation to Non-standard Controller Schemes in 3D Gaming Experiences

With hundreds of new games being released every week, designers rely on existing knowledge to design control schemes for their products. However, in the case of games with new game mechanics, designers struggle to implement new button schemes due to the lack of research on players’ adaptation to new and non-standard controls. In this study we investigated PC players habits when playing a game they have no knowledge of, and how they adapt to its non-standard control scheme. Data was collected by using a specifically designed game instead of relying on pre-existing ones, allowing us to design specific game mechanics to exploit users’ habits and monitor players’ behaviour in their home environments. Preliminary results seem to indicate that PC players do pay attention to control schemes and are able to quickly learn new ones, but they also prefer to make mistakes in favour of execution speed.

Riccardo Galdieri, Mata Haggis-Burridge, Thomas Buijtenweg, Marcello Carrozzino
3D Dynamic Hand Gestures Recognition Using the Leap Motion Sensor and Convolutional Neural Networks

Defining methods for the automatic understanding of gestures is of paramount importance in many application contexts and in Virtual Reality applications for creating more natural and easy-to-use human-computer interaction methods. In this paper, we present a method for the recognition of a set of non-static gestures acquired through the Leap Motion sensor. The acquired gesture information is converted into color images, where the variation of hand joint positions during the gesture is projected onto a plane and temporal information is represented by the color intensity of the projected points. The classification of the gestures is performed using a deep Convolutional Neural Network (CNN). A modified version of the popular ResNet-50 architecture is adopted, obtained by removing the last fully connected layer and adding a new layer with as many neurons as the considered gesture classes. The method has been successfully applied to an existing reference dataset, and preliminary tests have already been performed for the real-time recognition of dynamic gestures performed by users.

Katia Lupinetti, Andrea Ranieri, Franca Giannini, Marina Monti
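The gesture-to-image encoding described above can be sketched in a few lines. In this minimal Python example, the function name `gesture_to_image`, the image size, and the linear time-to-intensity mapping are illustrative assumptions; the paper’s exact projection and color scheme are not reproduced here. Joint trajectories are projected onto the XY plane and frame time is encoded as pixel intensity:

```python
import numpy as np

def gesture_to_image(joints, size=64):
    """Encode a dynamic gesture as a grayscale image: joint positions are
    projected onto the XY plane and frame time is mapped to intensity,
    so later samples of the trajectory appear brighter.

    joints: array of shape (T, J, 3) with T frames and J hand joints."""
    joints = np.asarray(joints, dtype=float)
    T = joints.shape[0]
    xy = joints[..., :2]
    lo = xy.reshape(-1, 2).min(axis=0)
    hi = xy.reshape(-1, 2).max(axis=0)
    span = np.maximum(hi - lo, 1e-9)   # avoid division by zero
    img = np.zeros((size, size), dtype=np.float32)
    for t in range(T):
        intensity = (t + 1) / T        # linear time-to-intensity mapping
        uv = (xy[t] - lo) / span * (size - 1)
        for u, v in uv.astype(int):
            img[v, u] = max(img[v, u], intensity)
    return img
```

Images produced this way can then be fed to a CNN classifier; the paper uses a ResNet-50 whose final fully connected layer is replaced by one sized to the number of gesture classes.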
RGB-D Image Inpainting Using Generative Adversarial Network with a Late Fusion Approach

Diminished reality is a technology that aims to remove objects from video images and fill in the missing regions with plausible pixels. Most conventional methods utilize multiple cameras that capture the same scene from different viewpoints to allow regions to be removed and restored. In this paper, we propose an RGB-D image inpainting method using a generative adversarial network, which does not require multiple cameras. Recently, RGB image inpainting methods have achieved outstanding results by employing generative adversarial networks. However, RGB inpainting methods aim to restore only the texture of the missing region and therefore do not recover geometric information (i.e., the 3D structure of the scene). We extend conventional image inpainting to RGB-D image inpainting, jointly restoring the texture and geometry of missing regions from a pair of RGB and depth images. Inspired by other tasks that use RGB and depth images (e.g., semantic segmentation and object detection), we propose a late fusion approach that exploits the complementary advantages of RGB and depth information. The experimental results verify the effectiveness of our proposed method.

Ryo Fujii, Ryo Hachiuma, Hideo Saito
3D Radial Layout for Centrality Visualization in Graphs

This paper presents new methods for the 3D visualization of graphs that highlight the structural centrality of nodes. These methods consist of projecting 2D graph representations, along the vertical axis, onto three 3D surfaces: 1) a half-sphere; 2) a cone; and 3) a torus portion. The transition to 3D makes it possible to better handle the visualization of complex and large data that 2D techniques are generally unable to provide. The 3D radial layout techniques reduce node and edge overlap and improve, in some cases, the perception of node connectivity by exploiting the display space differently or more fully.

Piriziwè Kobina, Thierry Duval, Laurent Brisson
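The projection idea can be illustrated with a short sketch. In the Python code below, the function name, the unit-disk normalization, and in particular the torus-portion height profile are our own assumptions, not the authors’ exact parameterizations; it simply lifts a 2D radial layout along the vertical axis onto one of three surfaces:

```python
import numpy as np

def lift_to_surface(pos2d, surface="hemisphere"):
    """Lift 2D radial layout positions (within the unit disk) onto a 3D
    surface by assigning each node a height z along the vertical axis."""
    pos2d = np.asarray(pos2d, dtype=float)
    r = np.linalg.norm(pos2d, axis=1)
    r = np.minimum(r, 1.0)              # clamp points to the unit disk
    if surface == "hemisphere":
        z = np.sqrt(1.0 - r ** 2)       # unit half-sphere
    elif surface == "cone":
        z = 1.0 - r                     # cone with apex above the center
    elif surface == "torus":
        # Hypothetical torus-portion profile: a circular arc of tube
        # radius 0.5 over the radial coordinate (assumed, not from paper).
        z = 0.5 * np.sin(np.pi * r)
    else:
        raise ValueError(surface)
    return np.column_stack([pos2d, z])
```

Central nodes end up highest on the hemisphere and cone, which is what makes structural centrality visually salient, while the peripheral nodes spread over the sloping part of the surface, reducing edge overlap near the center.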
A Preliminary Study on Full-Body Haptic Stimulation on Modulating Self-motion Perception in Virtual Reality

We introduce a novel experimental system to explore the role of vibrotactile haptic feedback in Virtual Reality (VR) in inducing the self-motion illusion. Self-motion (also called vection) has mostly been studied through visual and auditory stimuli, and little is known about how the illusion can be modulated by the addition of vibrotactile feedback. Our study focuses on whole-body haptic feedback in which the vibration is dynamically generated from the sound signal of the Virtual Environment (VE). We performed a preliminary study and found that the audio and haptic modalities generally increase the intensity of vection over a visual-only stimulus, with higher ratings of self-motion intensity when the vibrotactile stimulus is added to the virtual scene. We also analyzed data obtained with the igroup presence questionnaire (IPQ), which shows that haptic feedback has a generally positive effect on presence in the virtual environment, and conducted a qualitative survey that revealed interesting and often overlooked aspects, such as the implications of using a joystick to collect data in perception studies, and the concept of vection in relation to people’s experience and cognitive interpretation of self-motion.

Francesco Soave, Nick Bryan-Kinns, Ildar Farkhatdinov
Backmatter
Metadata
Title
Augmented Reality, Virtual Reality, and Computer Graphics
Edited by
Prof. Lucio Tommaso De Paolis
Patrick Bourdot
Copyright Year
2020
Electronic ISBN
978-3-030-58465-8
Print ISBN
978-3-030-58464-1
DOI
https://doi.org/10.1007/978-3-030-58465-8