2016 | Book

Augmented Reality, Virtual Reality, and Computer Graphics

Third International Conference, AVR 2016, Lecce, Italy, June 15-18, 2016. Proceedings, Part I

About this book

The 2-volume set LNCS 9768 and 9769 constitutes the refereed proceedings of the Third International Conference on Augmented Reality, Virtual Reality and Computer Graphics, AVR 2016, held in Lecce, Italy, in June 2016.
The 40 full papers and 29 short papers presented were carefully reviewed and selected from 131 submissions. The SALENTO AVR 2016 conference was intended to bring together researchers, scientists, and practitioners to discuss key issues, approaches, ideas, open problems, innovative applications and trends in virtual and augmented reality, 3D visualization and computer graphics in the areas of medicine, cultural heritage, arts, education, entertainment, and the industrial and military sectors.

Table of Contents

Frontmatter

Virtual Reality

Frontmatter
Simulation of Tsunami Impact upon Coastline

This paper presents a simulation of a tsunami impact upon an urban coastline. Emphasis was given to the conservation of momentum, as its distribution in space and time is the main factor of the wave’s effects on the coastline. For this reason, a hybrid simulation method was adopted, based on the Smoothed Particle Hydrodynamics (SPH) method, enriched with geometric constraints and rigid body interactions. The implementation is the result of cooperation between the Bullet physics engine and our custom SPH engine, which successively process the dynamic state of the fluid at every timestep. Furthermore, in order to achieve better performance, a custom data structure (LP grid) was developed to optimize locality in data storage and minimize access time. Simulation data are exported to VTK files, allowing interactive processing and visualization. Experimental results demonstrate the benefits of impulse recording in potential hazard estimation and the evaluation of defense strategies.
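
As background for the hybrid approach described above, the following minimal sketch (not the authors' code) shows the core SPH density summation using the standard poly6 kernel; the particle count, mass, smoothing length and the brute-force neighbour loop are illustrative assumptions, where the paper's LP grid would instead provide fast neighbour access.

```python
import numpy as np

def poly6(r2, h):
    """Standard poly6 SPH smoothing kernel, evaluated on squared distances."""
    coeff = 315.0 / (64.0 * np.pi * h**9)
    return np.where(r2 < h * h, coeff * (h * h - r2) ** 3, 0.0)

def densities(positions, mass, h):
    """Brute-force SPH density summation: rho_i = sum_j m * W(|x_i - x_j|, h).
    A spatial grid (such as the paper's LP grid) would replace the O(N^2) loop."""
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = np.einsum('ijk,ijk->ij', diff, diff)
    return mass * poly6(r2, h).sum(axis=1)

# Hypothetical usage: 100 particles in a 1 m cube, 0.1 m smoothing length
pos = np.random.rand(100, 3)
rho = densities(pos, mass=0.02, h=0.1)
```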

Aristotelis Spathis-Papadiotis, Konstantinos Moustakas
Design and Implementation of a Low Cost Virtual Rugby Decision Making Interactive

The paper describes the design and implementation of a novel low-cost virtual rugby decision-making interactive for use in a visitor centre. Original laboratory-based experimental work on decision making in rugby, using a virtual reality headset [1], is adapted for use in a public visitor centre, with consideration given to usability, cost, practicality, and health and safety. Movement of professional rugby players was captured and animated within a virtually recreated stadium. Users then interact with these virtual representations via a low-cost sensor (Microsoft Kinect) in an attempt to block them. Retaining the principles of perception and action, egocentric viewpoint, immersion, sense of presence, representative design and game design, the system delivers an engaging and effective interactive that illustrates the underlying scientific principles of deceptive movement. User testing highlighted the need for usability, system robustness, fair and accurate scoring, an appropriate level of difficulty, and enjoyment.

Alan Cummins, Cathy Craig
Immersive Virtual Reality-Based Simulation to Support the Design of Natural Human-Robot Interfaces for Service Robotic Applications

The increasing popularity of robotics and related applications in modern society makes interacting and communicating with robots of crucial importance. In service robotics, where robots operate to assist human beings in their daily life, natural interaction paradigms capable of fostering an ever more intuitive and effective collaboration between the involved actors are needed. The aim of this paper is to discuss the activities carried out to create a 3D immersive simulation environment able to ease the design and evaluation of natural human-robot interfaces in generic usage contexts. The proposed framework has been exploited to tackle a specific use case represented by a robotics-enabled office scenario and to develop two user interfaces based on augmented reality, speech recognition, and gaze and body tracking technologies. A user study was then performed to investigate user experience in the execution of semi-autonomous tasks in the considered scenario through both objective and subjective observations. Besides confirming the validity of the devised approach, the study provided valuable indications regarding possible evolutions of both the simulation environment and the service robotic scenario considered.

Federica Bazzano, Federico Gentilini, Fabrizio Lamberti, Andrea Sanna, Gianluca Paravati, Valentina Gatteschi, Marco Gaspardone
Multi-Resolution Visualisation of Geographic Network Traffic

Flow visualization techniques are widely used to visualize scientific data in many fields, including meteorology, computational fluid dynamics, medical visualization and aerodynamics. In this paper, we employ flow visualization techniques in conjunction with conventional network visualization methods to represent geographic network traffic data. The proposed visualization system integrates two techniques: flow visualization and the node-link diagram. While flow visualization emphasizes general trends, the node-link diagram concentrates on the detailed analysis of the data. A usability study with multiple experiments was performed to evaluate the success of our approach.

Berkay Kaya, Selim Balcisoy
Methodology for Efficiency Analysis of VR Environments for Industrial Applications

Companies are keen on using novel technologies like Virtual Reality (VR) in order to achieve competitive advantages. However, the economic impact of integrating such technologies in a company is difficult to quantify. Small and medium enterprises in particular encounter difficulties when trying to quantify the benefits of instruments like VR, and during the decision process they need extensive support and expensive consulting. In this paper, a methodology for an efficiency analysis of industrial VR integration is presented. It includes both cost- and utility-based considerations. The user-friendly analysis gives the decision-maker a deeper understanding of VR and results in a customised VR solution. The proposed economic assessment methodology has been validated by two companies in the mechanical engineering sector and proved to be a very useful tool for supporting the decision on VR integration.

Jana Dücker, Polina Häfner, Jivka Ovtcharova
Unity3D Virtual Animation of Robots with Coupled and Uncoupled Mechanism

This paper presents the development of robot animation in virtual reality environments for robots whose mechanisms are either coupled, i.e., the movement relies on mechanical principles, or uncoupled, i.e., the degrees of freedom are controlled independently via a control unit. Additionally, the phases required to transfer the design of a robot developed in a CAD tool to a virtual simulation environment without losing the physical characteristics of the original design are shown, taking into account the various types of motion that the robot can perform depending on its design. Finally, the paper shows the results obtained from the motion simulation of an 18-DOF hexapod robot and a Theo Jansen mechanism.

Víctor Hugo Andaluz, Jorge S. Sánchez, Jonnathan I. Chamba, Paúl P. Romero, Fernando A. Chicaiza, Jose Varela, Washington X. Quevedo, Cristian Gallardo, Luis F. Cepeda
A Scalable Cluster-Rendering Architecture for Immersive Virtual Environments

Complex virtual environments often require computational resources exceeding the capabilities of a single machine. Furthermore, immersive visualization can exploit multiple displays, increasing the need for computational power. We hereby present a system, called XVR Network Renderer, that allows the rendering load to be distributed across a cluster of workstations operating concurrently. The proposed solution consists of a set of software modules structured as a single-master, multiple-slave architecture. The master software intercepts all graphical commands performed by an OpenGL application, without any modification of the source code. The commands are then streamed and executed individually by each slave client. The Network Renderer can be seen as a virtual OpenGL context with high capabilities. The system can be configured to work with a wide range of complex visualization setups, such as CAVEs, automatically handling stereoscopy, performing perspective corrections and managing common projection-related problems. Any number of displays can be simultaneously managed by the cluster.
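
To make the single-master, multiple-slave idea concrete, here is a minimal conceptual sketch in Python (not the XVR implementation, which streams intercepted OpenGL calls to real graphics contexts): the master serializes commands in call order and each slave replays them against its own local handlers. The command names and the JSON-per-line framing are illustrative assumptions.

```python
import json

# Master side: intercepted graphics commands are serialized in call order.
def encode_command(name, *args):
    """Encode one intercepted command (e.g. a GL call) as a JSON line."""
    return json.dumps({"cmd": name, "args": args}) + "\n"

# Slave side: each client replays the stream against its own local context.
def replay(stream, handlers):
    for line in stream.splitlines():
        msg = json.loads(line)
        handlers[msg["cmd"]](*msg["args"])

# Hypothetical usage with stub handlers standing in for real OpenGL calls.
log = []
handlers = {"glClearColor": lambda *a: log.append(("clear", a)),
            "glViewport": lambda *a: log.append(("viewport", a))}
stream = (encode_command("glClearColor", 0, 0, 0, 1)
          + encode_command("glViewport", 0, 0, 1920, 1080))
replay(stream, handlers)
```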

Giovanni Avveduto, Franco Tecchia, Marcello Carrozzino, Massimo Bergamasco
The Effect of Emotional Narrative Virtual Environments on User Experience

The surrounding world has a strong impact on the way we feel and perceive the events that happen in daily life. The power of environments to elicit emotions in humans has been widely studied in experimental psychology using exposure to photographs or real situations. Such studies do not reproduce the vividness of events in ordinary life and do not permit control of the situations that happen within them. By reproducing a realistic scenario similar to daily life and by controlling the social narratives happening within it, Virtual Reality (VR) is a powerful tool for investigating the effect of environments on human feelings and emotions. In this study we animated the emotional content of a realistic virtual scenario with a dynamic scene in order to introduce a novel approach to investigating the effect of environments on human feeling, based on the Emotional Narrative Virtual Environment (ENVE) paradigm. A sample of 36 subjects experienced three ENVEs with fear, disgust and happiness emotional content, each brought to life with social narratives, in an immersive VR setup. Results showed the ability of ENVEs to elicit specific emotional states in participants and corroborate the idea that the ENVE approach can be used in environmental psychology or to treat persons with mental disorders.

Claudia Faita, Camilla Tanca, Andrea Piarulli, Marcello Carrozzino, Franco Tecchia, Massimo Bergamasco
User Based Intelligent Adaptation of Five in a Row Game for Android Based on the Data from the Front Camera

Playing games on mobile phones is very popular nowadays. Many people prefer logic games such as chess, five in a row, checkers, etc. This work aims to design such a game in which the user does not have to set the opponent’s difficulty: the application optimizes itself automatically. To do so, it uses a shot acquired by the front camera together with suitable computer vision algorithms. On smartphone front camera shots, these algorithms are able not only to recognize a human face, but also to estimate attributes of the particular person (for example age, sex, mood). This work presents the concept and an implementation of the game five in a row for the Android mobile platform. The paper proposes an applicable algorithm based on the Minimax method with its own evaluation function. This function is designed using genetic algorithms, specifically a tournament selection method. The result of this work is therefore a concrete algorithm for the opponent in the game five in a row, implemented in an Android application, which adapts itself to the user according to the data from the smartphone front camera.
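
For illustration, a plain depth-limited minimax is sketched below (a simplified stand-in, not the authors' implementation); the evaluation function is passed in as a parameter, mirroring the paper's idea of tuning its weights with a genetic algorithm using tournament selection. The board representation and move generation are left abstract.

```python
import math

def minimax(state, depth, maximizing, evaluate, legal_moves, play):
    """Plain depth-limited minimax. 'evaluate' is the heuristic whose weights
    the paper tunes with a genetic algorithm; 'play' returns a new state."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_move = None
    best_val = -math.inf if maximizing else math.inf
    for m in moves:
        val, _ = minimax(play(state, m), depth - 1, not maximizing,
                         evaluate, legal_moves, play)
        if (maximizing and val > best_val) or (not maximizing and val < best_val):
            best_val, best_move = val, m
    return best_val, best_move
```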

Jan Novotny, Jan Dvorak, Ondrej Krejcar
Modeling of Complex Taxonomy: A Framework for Schema-Driven Exploratory Portal

This paper discusses an evolution of the exploratory portal towards an advanced and easy construction process, achieved by simplifying data loading through the modeling of complex taxonomies. The main requirement for achieving this goal has been to make the portal schema-driven by modeling the taxonomy, the data and the portal layout in Excel. A framework is proposed in which we implement an application that can build the exploratory portal from this Excel model. We have validated the portal population process, first “in vitro”, then “in vivo”.

Luca Mainetti, Roberto Paiano, Stefania Pasanisi, Roberto Vergallo
Audio-Visual Perception - The Perception of Object Material in a Virtual Environment

This paper presents a digital approach to the audio-visual perception of object material and mass under unimodal and multimodal conditions. Similar research has been conducted on the perception of physical object material; however, little has been done on the perception of virtual object material. This study evaluates the effects of manipulating specific stimuli when inducing cross-modal augmentation, intersensory biases and cross-modal transfers. Three test conditions were established in order to determine perceptual accuracy and mass-type dominance: auditory-only stimuli, visual-only stimuli and audio-visual stimuli. The results indicated that multimodal perception was more accurate when perceiving object material and that vision was most dominant within unisensory conditions. No dominance was found within a multimodal environment when there was object incongruency; however, when the visual stimulus was obscure, the auditory modality confirmed the final perception.

Ryan Anderson, Joosep Arro, Christian Schütt Hansen, Stefania Serafin
Facial Landmarks for Forensic Skull-Based 3D Face Reconstruction: A Literature Review

Recent Face Analysis advances have focused attention on studying and formalizing 3D facial shape. Landmarks, i.e. typical points of the face, are perfectly suited to this purpose, as their position on the face allows a map of each human being’s appearance to be built. This turns out to be extremely useful for a large variety of fields and related applications. In particular, the forensic context is taken into consideration in this study. This work is intended as a survey of current research advances in forensic science involving 3D facial landmarks. In particular, by selecting recent scientific contributions in this field, a literature review is proposed for an in-depth analysis of which landmarks are adopted, and how, in this discipline. The main outcome is the identification of a leading research branch, namely landmark-based facial reconstruction from the skull. The choice of selecting 3D contributions is driven by the idea that the most innovative Face Analysis research trends work on three-dimensional data, such as depth maps and meshes, with three-dimensional software and tools. The third dimension improves accuracy and is robust to colour and lighting variations.

Enrico Vezzetti, Federica Marcolin, Stefano Tornincasa, Sandro Moos, Maria Grazia Violante, Nicole Dagnes, Giuseppe Monno, Antonio Emmanuele Uva, Michele Fiorentino
Virtual Reality Applications with Oculus Rift and 3D Sensors

In this paper we describe our experiences with skeletal tracking using Unreal Engine 4 with Oculus Rift and Xbox 360 Kinect while building a tool for rehabilitation of patients with impaired motor skills. We give an overview of the implemented solution, describe the problems encountered and how they were solved.

Edi Ćiković, Kathrin Mäusl, Kristijan Lenac
The Virtual Experiences Portals — A Reconfigurable Platform for Immersive Visualization

Virtual Experience Portals are mobile stereoscopic ultra high definition LCD displays with human interface sensors, which can be combined into a reconfigurable development platform for shared immersive virtual and augmented reality experiences. We are targeting applications in, for example, industrial automation, serious games, scientific visualization and building architecture. The aim is to provide a framework for natural and effortless interfaces for shared small group experiences of interactive 3D content, combining selected existing elements of computer aided virtual environments and virtual reality. In this short paper we report on efforts to date in developing the platform, integration with an existing visualization framework, SAGE2, some short application case studies, one in an industry-sponsored research context in industrial automation, and some ideas for future work.

Ian D. Peake, Jan Olaf Blech, Edward Watkins, Stefan Greuter, Heinz W. Schmidt
Virtual Reality for Product Development in Manufacturing Industries

Currently, Virtual Reality (VR) systems give industries in different domains the possibility to interact with and work within a simulated environment in order to improve their processes, efficiency and effectiveness, rapidly introducing new products to the market in a cost-effective way. The fundamental idea is to identify the main applications of Virtual Reality in the manufacturing domain and provide valuable insights into future research and trends concerning the application of this technology along the whole product development process. This paper aims to propose a set of new emerging scenarios, composed of Virtual Reality technologies, tools and systems used in manufacturing industries, with a focus on the aerospace sector. The proposed scenarios are based on projects and initiatives carried out to apply VR to industries in order to optimize internal processes and the overall supply chain.

Laura Schina, Mariangela Lazoi, Roberto Lombardo, Angelo Corallo
Virtual Reality Pave the Way for Better Understand Untouchable Research Results

Virtual reality (VR) is the “last medium”: once VR achieves presence and becomes “real”, we do not need any other communication medium, since we can communicate anything within VR using just code. VR can also pave the way to a better understanding of the micro world, the cosmos, the underground world and many other environments that a pupil cannot visit in the real world: by zooming we can see how cells function, fly by virtual spaceship between the stars or towards the Sun, and visit life underground. The best way to present untouchable or abstract research results is to put them into virtual reality form. The research at our institute is oriented towards high performance computing, such as GRID, cloud and cluster computing; we usually compute with big data and the results are large. This paper describes a tool for converting final output data to VR and presents this VR tool for untouchable research results in the fields of astrophysics research and underground water management. The tool is composed of several modules, each with its own role, and the paper also describes the functionality of the individual modules.

Eva Pajorova, Ladislav Hluchy
Visualization of the Renewable Energy Resources

Methods for visualizing renewable energy resources are analysed in this work; examples of existing systems and a possible architecture for renewable energy monitoring systems of the Republic of Kazakhstan are considered. Successful practices are analysed, the leading scientific organizations in the field of green energy are reviewed, and a comparative analysis of geographic information systems and data sources in the field of green energy is performed. A possible software architecture for the system, based on the 3M paradigm of geographic information systems (multilayer views, multilayer architecture and multi-agent interaction), is considered.

Ravil Muhamedyev, Sophia Kiseleva, Viktors I. Gopejenko, Yedilkhan Amirgaliyev, Elena Muhamedyeva, Aleksejs V. Gopejenko, Farida Abdoldina
Transparency of a Bilateral Tele-Operation Scheme of a Mobile Manipulator Robot

This work presents the design of a bilateral tele-operation system for a mobile manipulator robot, allowing a human operator to perform complex tasks in remote environments. In the tele-operation system it is proposed that the human operator be immersed in an augmented reality environment to obtain greater transparency of the remote site. The transparency of a tele-operation system indicates a measure of how well the human feels the remote system. At the local site, an augmented reality environment developed in Unity3D is implemented, which, through input devices, recreates the sensations the human would feel if he were at the remote site, considering the senses of sight, touch and hearing. These senses help the human operator to “transmit” their skill and experience to the robot to perform a task. Finally, experimental results are reported to verify the performance of the proposed system.

Víctor Hugo Andaluz, Washington X. Quevedo, Fernando A. Chicaiza, José Varela, Cristian Gallardo, Jorge S. Sánchez, Oscar Arteaga
Unity3D-MatLab Simulator in Real Time for Robotics Applications

This paper presents the implementation of a new 3D simulator applied to the area of robotics. The simulator allows the analysis of different autonomous and/or tele-operated control schemes in structured, partially structured and unstructured environments. Robot-environment interaction is handled by the virtual reality software Unity3D, which exchanges information with MATLAB through shared memory in order to execute the different proposed control algorithms. The real-time exchange of information between the two programs is essential because advanced control algorithms require feedback from the robot-environment interaction to close the control loop, while the simulated robot updates its kinematic and dynamic parameters depending on the controllability variables calculated by MATLAB. Finally, the 3D simulator is evaluated by implementing an autonomous control scheme to solve the path-following problem for a 6-DOF robot arm; the results obtained by implementing the tele-operation scheme for the same robot are also presented.
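
The Unity3D-MATLAB link is implemented through shared memory; the Python sketch below only illustrates the shared-memory handshake pattern in a single language (the actual system bridges C#/Unity3D and MATLAB). The block name, the 6-DOF layout and the toy proportional law are assumptions for illustration.

```python
import numpy as np
from multiprocessing import shared_memory

# One side (standing in for Unity3D) publishes the robot state,
# the other (standing in for MATLAB) reads it and computes control outputs.
state = shared_memory.SharedMemory(create=True, size=6 * 8, name="robot_state")
q = np.ndarray((6,), dtype=np.float64, buffer=state.buf)   # 6-DOF joint angles
q[:] = [0.0, 0.3, -0.5, 0.1, 0.0, 0.2]                     # simulator update

# "Controller" side: attach to the same block by name and read it.
view = shared_memory.SharedMemory(name="robot_state")
q_read = np.ndarray((6,), dtype=np.float64, buffer=view.buf)
u = -0.5 * q_read                                          # toy proportional law
print(u)

view.close()
state.close()
state.unlink()
```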

Víctor Hugo Andaluz, Fernando A. Chicaiza, Cristian Gallardo, Washington X. Quevedo, José Varela, Jorge S. Sánchez, Oscar Arteaga

Augmented and Mixed Reality

Frontmatter
Mobile Augmented Reality Based Annotation System: A Cyber-Physical Human System

One goal of the Industry 4.0 initiative is to improve knowledge sharing among and within production sites. A fast and easy knowledge exchange can help to reduce costly down-times in factory environments. In the domain of automotive manufacturing, production line down-times cost on average about $1.3 million per hour. Saving seconds or minutes has a real business impact, and the reduction of such down-time costs is of major interest. In this paper we describe MARBAS, a Mobile Augmented Reality Based Annotation System, which supports production line experts during their maintenance tasks. We developed MARBAS as a Cyber-Physical Human System that enables experts to annotate a virtual representation of a real world scene. MARBAS uses a mobile depth sensor that can be attached to smartphones or tablets, in combination with Instant Tracking, and experts can share information using the proposed system. We believe that such an annotation system can improve current maintenance processes by accelerating them. To identify applicable mesh registration algorithms we conducted a practical simulation, using a 6-axis joint-arm robot to evaluate 7 different ICP algorithms with respect to time and accuracy. Our results show that PCL non-linear ICP offers the best performance for our scenario. Additionally, we developed a vertical prototype using a mobile depth sensor in combination with a tablet, and we could show the feasibility of our approach by augmenting real world scenes with virtual information.
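
The registration study compares seven ICP variants from PCL; as a point of reference, the following is a generic point-to-point ICP iteration (nearest-neighbour matching followed by a Kabsch/SVD rigid fit), not the PCL non-linear ICP that performed best in the paper. The synthetic point clouds are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One point-to-point ICP iteration: nearest neighbours in dst for each
    src point, then the rigid transform (Kabsch/SVD) best aligning src."""
    matches = dst[cKDTree(dst).query(src)[1]]
    mu_s, mu_d = src.mean(axis=0), matches.mean(axis=0)
    H = (src - mu_s).T @ (matches - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Hypothetical usage: align a jittered copy of a random cloud
dst = np.random.rand(200, 3)
src = dst + 0.01 * np.random.randn(200, 3)
R, t = icp_step(src, dst)
src_aligned = src @ R.T + t
```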

Constantin Scheuermann, Felix Meissgeier, Bernd Bruegge, Stephan Verclas
A Framework for Outdoor Mobile Augmented Reality and Its Application to Mountain Peak Detection

Outdoor augmented reality applications project information of interest onto views of the world in real time. Their core challenge is recognizing the meaningful objects present in the current view and retrieving and overlaying pertinent information onto such objects. In this paper we report on the development of a framework for mobile outdoor augmented reality applications, applied to the overlay of peak information onto views of mountain landscapes. The resulting app operates by estimating the virtual panorama visible from the viewpoint of the user, using an online Digital Elevation Model (DEM), and by matching such a panorama to the actual image framed by the camera. When a good match is found, metadata from the DEM (e.g., peak name, altitude, distance) are projected in real time onto the view. The application, besides providing a pleasant experience to the user, can be employed to crowdsource the collection of annotated mountain images for environmental applications.
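
As a hint of the geometry involved, the sketch below computes the azimuth and elevation at which a peak from the elevation model would appear from the user's viewpoint, using a flat-earth approximation valid for nearby peaks; the coordinates and the omission of Earth curvature and refraction are simplifying assumptions, not the paper's actual panorama-generation code.

```python
import math

def peak_direction(obs_lat, obs_lon, obs_alt, pk_lat, pk_lon, pk_alt):
    """Approximate azimuth/elevation of a peak as seen by the observer,
    using a local flat-earth approximation (adequate for nearby peaks)."""
    R = 6371000.0                               # mean Earth radius [m]
    dn = math.radians(pk_lat - obs_lat) * R     # metres to the north
    de = math.radians(pk_lon - obs_lon) * R * math.cos(math.radians(obs_lat))
    dist = math.hypot(dn, de)
    azimuth = math.degrees(math.atan2(de, dn)) % 360.0
    elevation = math.degrees(math.atan2(pk_alt - obs_alt, dist))
    return azimuth, elevation, dist

# Hypothetical example: observer at 250 m looking at a 2400 m peak nearby
print(peak_direction(45.85, 9.40, 250.0, 45.90, 9.45, 2400.0))
```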

Roman Fedorov, Darian Frajberg, Piero Fraternali
Augmented Industrial Maintenance (AIM): A Case Study for Evaluating and Comparing with Paper and Video Media Supports

Maintenance is a crucial point for improving productivity in industry, whereas the systems to be maintained have increasing complexity. Augmented Reality (AR) can reduce maintenance process time and improve quality by giving virtual information and assistance to the operator during the procedure. In this paper, a workflow is first presented that allows a maintenance expert to author augmented reality maintenance procedures without computer skills. The AR maintenance application developed is then described. Based on it, we present a case study which aims to compare maintenance efficiency with respect to the commercially available media support used, i.e. paper, video, AR tablet or AR smart glasses. A set of experiments involving 24 people is described and analyzed. The results show that augmented reality maintenance reduces the number of errors made by the operator compared with paper, for the same maintenance duration. A qualitative analysis shows that AR systems are well accepted by the users.

Vincent Havard, David Baudry, Xavier Savatier, Benoit Jeanne, Anne Louis, Bélahcène Mazari
Augmented Reality in the Control Tower: A Rendering Pipeline for Multiple Head-Tracked Head-up Displays

The purpose of the air traffic management system is to accomplish the safe and efficient flow of air traffic. However, the primary goals of safety and efficiency are to some extent conflicting. In fact, to deliver a greater level of safety, separation between aircraft would have to be greater than it currently is, but this would negatively impact efficiency. In an attempt to avoid the trade-off between these goals, the long-range vision for the Single European Sky includes objectives for operating as safely and efficiently in Visual Meteorological Conditions as in Instrument Meteorological Conditions. In this respect, a wide set of virtual/augmented reality tools has been developed and effectively used in both civil and military aviation for piloting and training purposes (e.g., Head-Up Displays, Enhanced Vision Systems, Synthetic Vision Systems, Combined Vision Systems, etc.). These concepts could be transferred to air traffic control with relatively low effort and substantial benefits for controllers’ situation awareness. Therefore, this study focuses on the see-through, head-tracked, head-up display that may help controllers deal with zero/low visibility conditions and increased traffic density at the airport. However, there are several open issues associated with the use of this technology. One is the difficulty of obtaining a constant overlap between the scene-linked symbols and the background view based on the user’s viewpoint, which is known as ‘registration’. Another is the presence of multiple, arbitrarily oriented Head-Up Displays (HUDs) in the control tower, which further complicates the generation of the Augmented Reality (AR) content. In this paper, we propose a modified rendering pipeline for a HUD system that can be made of several, arbitrarily oriented, head-tracked AR displays. Our algorithm is capable of generating a constant and coherent overlay between the AR layer and the outside view from the control tower. However, a 3D model of the airport and its surroundings is needed, which must be populated with all the necessary AR overlays (both static and dynamic). We plan to use this concept as a basis for further research in the field of see-through HUDs for the control tower.
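
One way to reason about the registration problem mentioned above is as a ray-plane intersection: given the tracked eye position and a world point, the scene-linked symbol must be drawn where the sight line crosses the (possibly arbitrarily oriented) display plane. The sketch below shows only this geometric core under assumed coordinates; the paper's actual pipeline additionally handles multiple head-tracked HUDs and stereoscopy.

```python
import numpy as np

def hud_intersection(eye, target, plane_point, plane_normal):
    """Where the user's line of sight to a world target crosses the HUD plane.
    Returns None if the sight line is parallel to, or points away from, the display."""
    d = target - eye
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:
        return None
    s = np.dot(plane_normal, plane_point - eye) / denom
    return eye + s * d if s > 0 else None

# Hypothetical setup: eye behind a vertical HUD one metre ahead, aircraft far away
eye = np.array([0.0, 0.0, 1.6])
aircraft = np.array([50.0, 800.0, 30.0])
print(hud_intersection(eye, aircraft, np.array([0.0, 1.0, 1.6]), np.array([0.0, 1.0, 0.0])))
```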

Nicola Masotti, Francesca De Crescenzio, Sara Bagassi
CoCo - A Framework for Multicore Visuo-Haptics in Mixed Reality

Mixed Reality applications involve the integration of RGB-D streams with virtual entities, potentially extended with force feedback. The increasing complexity of such applications pushes the limits of traditional computing structures, which do not keep up with the computing power of multicore platforms. This paper presents the CoCo framework, a component-based, multicore system designed to tackle the challenges of visuo-haptics in mixed reality environments, with support for structural reconfiguration. Special care has also been given to the management of transformations between reference frames, easing registration, calibration and the integration of robotic systems. The framework is described together with two relevant case studies.

Emanuele Ruffaldi, Filippo Brizzi
Design of a Projective AR Workbench for Manual Working Stations

We present the design and a prototype of a projective AR workbench for the effective use of AR in industrial applications, in particular for manual working stations. The proposed solution consists of an aluminum structure holding a projector and a camera, intended to be mounted on manual working stations. The camera, using a tracking algorithm, computes in real time the position and orientation of the object, while the projector always displays the information in the desired position. We also designed and implemented the data structure of a database for managing AR instructions, and we were able to access this information interactively from our application.

Antonio Emmanuele Uva, Michele Fiorentino, Michele Gattullo, Marco Colaprico, Maria F. de Ruvo, Francescomaria Marino, Gianpaolo F. Trotta, Vito M. Manghisi, Antonio Boccaccio, Vitoantonio Bevilacqua, Giuseppe Monno
A Taxonomy for Information Linking in Augmented Reality

A key challenge in augmented reality is the precise linking of virtual information with physical places or objects to create a spatial relationship. The visual presentation of these links can take many forms, such as direct overlays or connection lines. In spite of its importance, this topic has never been systematically addressed by existing approaches. As a first step in this direction, we suggest a taxonomy for such visualizations to facilitate their detailed analysis in terms of graphical properties. It consists of the three artifact types spatial anchor, information object and information connection, as well as the three dimensions reference system, visual connection and context. Additionally, we surveyed the literature to collect knowledge on how these dimensions and their combinations affect user performance. To explain the application of our taxonomy, we classified user interfaces from the literature. We also conducted an empirical experiment on the effects on task performance of the different classes from our dimension visual connection, i.e. the type of visual connection presented to the user. The outcomes give important guidance for augmented reality interface design in an area that has not been researched before. The results show that the preferred method for visualizing information linking is close spatial proximity, followed by a continuous visual connection, a color-coded symbolic connection and a shape-coded symbolic connection.

Tobias Müller, Ralf Dauenhauer
Mobile User Experience in Augmented Reality vs. Maps Interfaces: A Case Study in Public Transportation

This article comprises a study of user experience when interacting with different modes of mobile interfaces. Our emphasis is on application instances commonly found in mobile app stores, which utilize sensor-based augmented reality or two-dimensional zoomable maps to visualize points of interest (POIs) in the vicinity of the user. As a case study, we developed two variants of an Android application addressed to public transportation users. The application displays nearby transit stops along with timetable information for the transit services passing by those stops. We report findings drawn from an empirical field study in real outdoor conditions. The evaluation findings have been cross-checked with logged usage data. We aim to elicit knowledge about user requirements related to mobile application interfaces in this context and to evaluate user experience from pragmatic and affective viewpoints.

Manousos Kamilakis, Damianos Gavalas, Christos Zaroliagis
GazeAR: Mobile Gaze-Based Interaction in the Context of Augmented Reality Games

Gaze-based interaction in the gaming context offers various research opportunities. However, when looking at available games supported by eye tracking technology it becomes apparent that the potential has not been fully exploited: a majority of gaze-based games are tailored for static settings (desktop PC). We propose an experimental setting that transfers approaches from mobile gaze-based interaction to the augmented reality (AR) games domain. Our main aim is to find out whether the inclusion of gaze input in an AR game has a positive impact on the User Experience (UX) in comparison to a solely touch-based approach; in doing so, designers and researchers should gain insights into the design of gaze-based mobile AR games. To find answers, we carried out a comparative study consisting of two mobile game prototypes. Results show that the inclusion of gaze in AR games is very well received by players, and this novel approach was preferred over a design without gaze interaction.

Michael Lankes, Barbara Stiglbauer
Visualization of Heat Transfer Using Projector-Based Spatial Augmented Reality

Thermal imaging cameras, commonly used in application areas such as building inspection and night vision, have recently also been introduced as pedagogical tools for helping students visualize, interrogate and interpret notoriously challenging thermal concepts. In this paper we present a system for Spatial Augmented Reality that automatically projects thermal data onto objects. Instead of having a learner physically direct a hand-held camera toward an object of interest, and then view the display screen, a group of participants can gather around the display system and directly see and manipulate the thermal profile projected onto physical objects. The system combines a thermal camera that captures the thermal data, a depth camera that realigns the data with the objects, and a projector that projects the data back. We also apply a colour scale tailored for room temperature experiments.

Karljohan Lundin Palmerius, Konrad Schönborn
An Efficient Geometric Approach for Occlusion Handling in Outdoors Augmented Reality Applications

Mobile location-based AR frameworks typically project information about real or virtual locations in the vicinity of the user. Those locations are treated indiscriminately, regardless of whether they are actually within the field of view (FoV) of the user or not. However, displaying occluded objects often misleads users’ perception, thereby compromising the clarity and explicitness of AR applications. This paper introduces an efficient geometric technique aimed at assisting developers of outdoor mobile AR applications in generating a realistic FoV for the users. Our technique enables real-time building recognition in order to address the occlusion of physical or virtual objects by physical artifacts. Our method is demonstrated in the location-based AR game Order Elimination, which utilizes publicly available building information to calculate the players’ FoV in real time. Extensive performance tests provide sufficient evidence that real-time FoV rendering is feasible on modest mobile devices, even under stress operation conditions. A user evaluation study reveals that considering buildings when determining the FoV in mobile AR games can increase the quality of experience perceived by players compared with standard FoV generation methods.
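
The geometric core of such an FoV computation can be illustrated with a simple 2D test: a point of interest counts as occluded when the sight line from the user crosses any edge of a building footprint. The sketch below is a minimal stand-in for this idea (brute force over all edges, no colinear handling), not the optimized technique evaluated in the paper.

```python
def ccw(a, b, c):
    """True if the points a, b, c make a counter-clockwise turn."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """Standard 2D segment intersection test (no colinear handling)."""
    return (ccw(p1, p3, p4) != ccw(p2, p3, p4)) and (ccw(p1, p2, p3) != ccw(p1, p2, p4))

def poi_visible(user, poi, building_footprints):
    """A POI is considered occluded if the sight line crosses any footprint edge."""
    for poly in building_footprints:
        for i in range(len(poly)):
            if segments_intersect(user, poi, poly[i], poly[(i + 1) % len(poly)]):
                return False
    return True

# Hypothetical usage: one square building between the user and the POI
building = [(2, 2), (4, 2), (4, 4), (2, 4)]
print(poi_visible((0, 3), (6, 3), [building]))   # False: the building blocks the view
```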

Vlasios Kasapakis, Damianos Gavalas, Panagiotis Galatis
Improving the Development of AR Application for Artwork Collections with Standard Data Layer

Museums and art galleries are called to preserve and promote their collections. Mobile technologies like Augmented Reality can transform visitors from passive observers into protagonists, creating an engaging and personal art experience for the audience. However, the hurdles preventing Augmented Reality from becoming a widespread medium for conveying virtual information about Cultural Heritage lie in the limited availability of fast and agile development tools. The paper presents an ongoing research effort aimed at creating a framework to serialize the development of Augmented Reality applications, based on a standard data layer. The core of the application is designed to augment artworks, while the standardization of data will permit fast multi-app development. This framework is useful to bridge the gap between designers and developers, and will facilitate a semi-automatic development of Augmented Reality applications for cultural institutions.

Emanuele Frontoni, Roberto Pierdicca, Ramona Quattrini, Paolo Clini
Augmented Reality for the Control Tower: The RETINA Concept

The SESAR (Single European Sky Air Traffic Management Research) Joint Undertaking has recently granted the Resilient Synthetic Vision for Advanced Control Tower Air Navigation Service Provision project within the framework of the H2020 research on High Performing Airport Operations. Hereafter, we describe the project motivations, the objectives, the proposed methodology and the expected impacts, i.e. the consequences of using virtual/augmented reality technologies in the control tower.

Nicola Masotti, Sara Bagassi, Francesca De Crescenzio
Automatic Information Positioning Scheme in AR-assisted Maintenance Based on Visual Saliency

This paper presents a novel automatic augmentation of pertinent information for Augmented Reality (AR) assisted maintenance, based on a biologically inspired visual saliency model. In AR-assisted maintenance, the human operator performs routine service, repair, assembly and disassembly tasks with the aid of virtually displayed information. Appropriate positioning of virtual information is crucial because it has to be visible without hindering the normal maintenance operation. As opposed to conventional positioning approaches based on discretization and clustering of the scene, this paper proposes a novel application of a graph-based visual saliency model to enable automatic positioning of virtual information. In particular, this research correlates the types of information with the levels of activation on the resulting visual saliency map for different scenarios. Real-life examples of the proposed methodology are used to evaluate the feasibility of using visual saliency for information positioning in AR applications.
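
As a simplified illustration of saliency-driven placement (not the authors' graph-based method), the sketch below picks the window of a saliency map with the lowest mean activation as a candidate location for a virtual label; the window size and the random array standing in for a computed saliency map are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def least_salient_window(saliency, win=40):
    """Centre of the win x win region with the lowest mean saliency,
    i.e. a plausible spot to place an annotation without covering
    the visually important parts of the scene."""
    mean_sal = uniform_filter(saliency, size=win, mode='constant', cval=1.0)
    y, x = np.unravel_index(np.argmin(mean_sal), mean_sal.shape)
    return x, y

# Hypothetical usage on a random map standing in for a computed saliency map
sal = np.random.rand(480, 640)
print(least_salient_window(sal))
```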

Miko May Lee Chang, Soh Khim Ong, Andrew Yeh Ching Nee
Interactive Spatial AR for Classroom Teaching

The authors fuse virtual science objects with the actions of teachers on a real podium by developing an interactive spatial AR system, in which teachers can interact with virtual objects through gestures during a real-time presentation; the images of the virtual objects projected on a transparent projection screen are aligned, matched and calibrated with the position of the teacher’s body. Students see the virtual objects seamlessly matched to the real teacher in the podium space, as if they were real things under the teacher’s control. Students thus immerse themselves more deeply in the presentation, which enhances the cognitive effect of classroom teaching and learning.

YanXiang Zhang, ZiQiang Zhu
Third Point of View Augmented Reality for Robot Intentions Visualization

Lightweight head-up displays integrated in industrial helmets make it possible to provide contextual information in industrial scenarios such as maintenance. Moving from single-display, single-camera solutions to stereo perception and display opens new interaction possibilities. In particular, this paper addresses the case of information from a Baxter robot being shared and displayed to a user overlooking the real scene. The system design and interaction ideas are presented.

Emanuele Ruffaldi, Filippo Brizzi, Franco Tecchia, Sandro Bacinelli
Optimizing Image Registration for Interactive Applications

With the spread of wearable and mobile devices, the demand for interactive augmented reality applications is constantly growing. Among the different possibilities, we focus on the cultural heritage domain, where a key step in developing applications for augmented cultural experiences is obtaining a precise localization of the user, i.e. the 6-degree-of-freedom pose of the camera acquiring the images used by the application. Current state-of-the-art methods perform this task by extracting local descriptors from a query image and exhaustively matching them to a sparse 3D model of the environment. While this procedure obtains good localization performance, the vast search space involved in the retrieval of 2D-3D correspondences often makes it infeasible in real-time and interactive environments. In this paper we therefore propose to perform descriptor quantization to reduce the search space and to employ multiple KD-Trees combined with principal component analysis dimensionality reduction to enable an efficient search. We experimentally show that our solution can halve the computational requirements of the correspondence search with respect to the state of the art while maintaining similar accuracy levels.
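
A minimal sketch of the general idea, reduced dimensionality plus tree-based search, is shown below using PCA and a single KD-tree with a ratio test; the descriptor dimensions, the single tree (the paper combines multiple KD-Trees) and the omission of the quantization step are all simplifications, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial import cKDTree

# Hypothetical data: 100k 128-D model descriptors and 500 query descriptors
model_desc = np.random.rand(100_000, 128).astype(np.float32)
query_desc = np.random.rand(500, 128).astype(np.float32)

# Reduce dimensionality before building the search structure
pca = PCA(n_components=32).fit(model_desc)
tree = cKDTree(pca.transform(model_desc))

# Nearest-neighbour matching in the reduced space, with a Lowe-style ratio test
dist, idx = tree.query(pca.transform(query_desc), k=2)
good = idx[dist[:, 0] < 0.8 * dist[:, 1], 0]
print(len(good), "tentative 2D-3D correspondences")
```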

Riccardo Gasparini, Stefano Alletto, Giuseppe Serra, Rita Cucchiara
A System to Exploit Thermographic Data Using Projected Augmented Reality

We present a prototype system composed of an IR camera and a video projector, with the purpose of creating a device that projects the thermal map directly onto the observed surface. The novelty of this work lies in the construction of a portable tool, the development of the software and the proposal of a calibration procedure, to be used in industrial and construction sites by thermal inspectors.

Saverio Debernardis, Michele Fiorentino, Antonio E. Uva, Giuseppe Monno
Cloud Computing Services for Real Time Bilateral Communication, Applied to Robotic Arms

This work presents the design of a bilateral teleoperation system for a robotic arm. It proposes a new prototype communication protocol using WebSockets for communication and JSON for data structuring, on a cloud computing environment based on OpenStack and OpenShift Origin. The human operator receives visual and force feedback from the remote site and sends position commands to the slave. Additionally, in the tele-operation system it is proposed that the human operator be immersed in an augmented reality environment to obtain greater transparency of the remote site; the transparency of a tele-operation system indicates a measure of how well the human feels the remote system. Finally, experimental results are reported to verify the performance of the proposed system.
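
To illustrate the kind of exchange such a protocol implies, here is a minimal WebSocket/JSON client sketch in Python; the endpoint URI, the message fields and the force-feedback reply are hypothetical examples, not the authors' actual message schema.

```python
import asyncio
import json
import websockets

async def send_pose(uri, joints):
    """Send one position command as JSON over a WebSocket and wait for the
    state echoed back by the remote (slave) side."""
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({"type": "position_cmd", "q": joints}))
        reply = json.loads(await ws.recv())
        return reply.get("force_feedback")

# Hypothetical endpoint exposed by the cloud broker:
# asyncio.run(send_pose("wss://broker.example.org/arm", [0.0, 0.4, -0.2, 0.1, 0.0, 0.3]))
```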

Cristian Gallardo, Víctor Hugo Andaluz
Backmatter
Metadata
Title
Augmented Reality, Virtual Reality, and Computer Graphics
edited by
Lucio Tommaso De Paolis
Antonio Mongelli
Copyright Year
2016
Electronic ISBN
978-3-319-40621-3
Print ISBN
978-3-319-40620-6
DOI
https://doi.org/10.1007/978-3-319-40621-3
