
2018 | Book

Virtual, Augmented and Mixed Reality: Interaction, Navigation, Visualization, Embodiment, and Simulation

10th International Conference, VAMR 2018, Held as Part of HCI International 2018, Las Vegas, NV, USA, July 15-20, 2018, Proceedings, Part I


About this book

This two-volume set LNCS 10909 and 10910 constitutes the refereed proceedings of the 10th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2018, held as part of HCI International 2018 in Las Vegas, NV, USA.

HCII 2018 received a total of 4346 submissions, of which 1171 papers and 160 posters were accepted for publication after a careful reviewing process.

The 65 papers presented in this volume were organized in topical sections named: interaction, navigation, and visualization in VAMR; embodiment, communication, and collaboration in VAMR; education, training, and simulation; VAMR in psychotherapy, exercising, and health; virtual reality for cultural heritage, entertainment, and games; industrial and military applications.

Table of Contents

Frontmatter

Interaction, Navigation and Visualization in VAMR

Frontmatter
Determining Which Touch Gestures Are Commonly Used When Visualizing Physics Problems in Augmented Reality

Touch gestures are an important aspect of developing mobile augmented reality applications. The main purpose of this research was to determine which touch gestures engineering students used most frequently when working with a simulation of projectile motion in a mobile AR application. A randomized experiment was conducted with students, and the results showed that the most commonly used gestures for visualization are: zoom in (“pinch open”), zoom out (“pinch closed”), move (“drag”), and spin (“rotate”).

Marta del Rio Guerra, Jorge Martín-Gutiérrez, Raúl Vargas-Lizárraga, Israel Garza-Bernal
Element Selection of Three-Dimensional Objects in Virtual Reality

The manipulation of three-dimensional objects is vital to fields such as engineering and architecture, but understanding 3D models from images on 2D screens takes years of experience. Virtual reality offers a powerful tool for the observation and manipulation of 3D objects by giving its users a sense of depth perception and the ability to reach through objects. To understand specific pain points in 2D CAD software, we conducted interviews and a survey of students and professionals with experience using CAD software. We narrowed in on the ability to select interior or obscured elements and created a VR prototype allowing users to do so. Our usability tests found that compared to 2D software, VR was easier to use, more intuitive, and less frustrating, though slightly more physically uncomfortable. Finally, we created a set of recommendations for VR CAD programs around action feedback, environmental context, and the necessity of a tutorial.

Dylan Fox, Sophie So Yeon Park, Amol Borcar, Anna Brewer, Joshua Yang
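
To illustrate the reach-through selection described in the abstract above, here is a minimal sketch of one way interior-element picking can work; the paper does not publish its implementation, and the mesh-intersection API used here is hypothetical.

```python
def pick_obscured(ray_origin, ray_dir, meshes, depth_index):
    """Reach-through selection sketch: cast a ray from the controller,
    sort every intersected element by distance, and let the user step
    depth_index hits deep instead of always taking the nearest one.
    This is what makes interior or obscured elements reachable."""
    hits = []
    for mesh in meshes:
        t = mesh.intersect(ray_origin, ray_dir)  # hypothetical API: distance or None
        if t is not None:
            hits.append((t, mesh))
    if not hits:
        return None
    hits.sort(key=lambda hit: hit[0])
    return hits[min(depth_index, len(hits) - 1)][1]
```

A thumbstick or scroll gesture can increment depth_index to cycle through the stacked hits.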
Design and Assessment of Two Handling Interaction Techniques for 3D Virtual Objects Using the Myo Armband

Hand gesture recognition using electromyography (EMG) signals has attracted increasing attention due to the rise of cheaper wearable devices that can record accurate EMG data. One of the outstanding devices in this area is the Myo armband, equipped with eight EMG sensors and a nine-axis inertial measurement unit. The use of the Myo armband in virtual reality, however, is very limited, because it can only recognize five pre-set gestures. In this work, we do not use these gestures but rather the raw data provided by the device, in order to measure the force applied to a gesture and to use Myo vibrations as a feedback system, aiming to improve the user experience. We propose two techniques designed to explore the capabilities of the Myo armband as an interaction tool for input and feedback in a virtual reality environment (VRE). The objective is to evaluate the usability of the Myo as an input and output device for the selection and manipulation of 3D objects in virtual reality environments. The proposed techniques were evaluated in tests with ten users. We analyzed the usefulness, efficiency, effectiveness, learnability, and satisfaction of each technique, and we conclude that both techniques received high usability grades, demonstrating that the Myo armband can be used to perform selection and manipulation tasks and can enrich the experience, making it more realistic through measurement of the strength applied to a gesture and the vibration feedback system.

Yadira Garnica Bonome, Abel González Mondéjar, Renato Cherullo de Oliveira, Eduardo de Albuquerque, Alberto Raposo
Surface Prediction for Spatial Augmented Reality

Image projection in spatial augmented reality requires tracking of non-rigid surfaces to be effective. When a surface is moving quickly, simply using the measured deformation of the surface may not be adequate, as projectors often suffer from lag and timing delays. This paper presents a novel approach for predicting the motion of a non-rigid surface so images can be projected ahead of time to compensate for any delays. The extended Kalman filter based algorithm is evaluated using an experimental setup in which an image is projected onto a deformable surface being perturbed by “random” forces. The results are quite positive, showing a visible improvement over standard projection techniques. Additionally, the error results show that the algorithm can be used in most surface tracking applications.

Adam Gomes, Keegan Fernandes, David Wang
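
The abstract does not give the surface model, but the core predict-ahead idea can be sketched as follows: filter each tracked surface point with a Kalman predict/update cycle, then extrapolate the filtered state over the projector's latency. The constant-velocity model, noise values, and latency below are illustrative assumptions (a constant-velocity model is linear, so this reduces to a plain Kalman filter; the paper's extended variant presumably handles a nonlinear deformation model).

```python
import numpy as np

dt = 0.016       # tracking interval in seconds (assumed)
latency = 0.050  # projector lag to compensate, in seconds (assumed)

F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
Q = np.diag([1e-4, 1e-2])               # process noise (assumed values)
H = np.array([[1.0, 0.0]])              # we measure position only
R = np.array([[1e-3]])                  # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle for one coordinate of one surface point
    (state x = [position, velocity], covariance P, measurement z)."""
    x = F @ x                            # predict state forward one frame
    P = F @ P @ F.T + Q
    y = z - H @ x                        # innovation against the measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

def predict_ahead(x):
    """Extrapolate the filtered state over the projector latency, giving
    the position at which the image should actually be projected."""
    F_lat = np.array([[1.0, latency], [0.0, 1.0]])
    return F_lat @ x
```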
Real-Time Motion Capture on a Budget

The U.S. Army Research Laboratory’s Simulation & Training Technology Center, along with Cole Engineering Services, Inc. and the University of Central Florida, have set out to leverage commercial technology with the goal of improving realism and reducing cost for Army training tasks. The focus of this task is to establish prototype functionality that allows a live person to take control of a virtual character. This is done using the Enhanced Dynamic Geo-Social Environment, an Army-owned simulation built upon Unreal Engine 4.

Commercial games and movies make use of motion capture capabilities to animate characters. This functionality is needed in real time to allow person-to-person interactions within a simulation. The goal is to have puppeteers who can take over Artificial Intelligence (AI) characters when in-depth interactions need to occur. While AAA game and movie budgets allow for more expensive systems, the goal of this team is to keep the cost well below $10,000.

A market analysis, along with this team’s experience utilizing and integrating the market capabilities to meet these goals, is described in this paper.

Tami Griffith, Tabitha Dwyer, Jennie Ablanedo
A Novel Way of Estimating a User’s Focus of Attention in a Virtual Environment

Results from prior experiments suggested that measuring immersion objectively (using eye trackers) can be an important supplement to subjective tests (with questionnaires). However, traditional eye trackers are not usable together with VR HMDs (head-mounted displays) because they cannot “see” eyes occluded by the headset. Eye trackers compatible with HMDs are not easily accessible to students, researchers, and developers in small studios because of their high prices. This paper explores a novel way of estimating a user’s focus of attention in a virtual environment. An experiment measuring the relationship between subjects’ head movement and eyesight was conducted to investigate whether eye movement can be closely approximated by head rotation. The findings suggested that people’s eyesight tended to remain in the central area of the HMD when playing a VR game and that the HMD orientation data was very close to the eyesight direction. Therefore, this approach, which employs no equipment other than the HMD itself, can be used to estimate a user’s focus of attention in a much more economical and convenient manner.

Xuanchao He, Zhejun Liu
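
Under the paper's finding that gaze stays near the center of the display, the focus-of-attention estimate reduces to the HMD's forward ray. A minimal sketch, assuming the orientation is reported as a unit quaternion (w, x, y, z) and a right-handed convention with -Z as the local forward axis:

```python
import numpy as np

def hmd_gaze_direction(q):
    """Approximate the gaze direction as the headset's forward vector:
    rotate the local forward axis by the HMD orientation quaternion."""
    w, x, y, z = q  # unit quaternion (w, x, y, z), assumed normalized
    # Standard quaternion-to-rotation-matrix conversion
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    forward = np.array([0.0, 0.0, -1.0])  # local forward axis (convention-dependent)
    return R @ forward
```

Casting this ray into the scene and taking the first intersected object then gives the estimated focus of attention, with no eye tracker involved.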
Get Well Soon! Human Factors’ Influence on Cybersickness After Redirected Walking Exposure in Virtual Reality

Cybersickness poses a crucial threat to applications in the domain of Virtual Reality. Yet, its predictors are insufficiently explored when redirection techniques are applied. Those techniques let users explore large virtual spaces by natural walking in a smaller tracked space. This is achieved by unnoticeably manipulating the user’s virtual walking trajectory. Unfortunately, this also makes the application more prone to cause Cybersickness. We conducted a user study with a semi-structured interview to get quantitative and qualitative insights into this domain. Results show that Cybersickness arises, but also eases ten minutes after the exposure. Quantitative results indicate that a tolerance towards Cybersickness might be related to self-efficacy constructs and therefore learnable or trainable, while qualitative results indicate that users’ endurance of Cybersickness is dependent on symptom factors such as intensity and duration, as well as factors of usage context and motivation. The role of Cybersickness in Virtual Reality environments is discussed in terms of the applicability of redirected walking techniques.

Julian Hildebrandt, Patric Schmitz, André Calero Valdez, Leif Kobbelt, Martina Ziefle
Dynamic Keypad – Digit Shuffling for Secure PIN Entry in a Virtual World

As virtual reality becomes more mainstream, there is a need to investigate the security of user-level authentication in the virtual world. For authentication methods to be useful, they must be secure, must not allow any external observer to determine the secure data being entered by the user, and must not break the immersion that the virtual world provides. Using head-mounted virtual reality displays, users can interact with the world using gaze, that is, by selecting the objects they focus on. This paper analyzes the security issues involved in utilizing gaze detection for secure password entry. A user study finds security issues with standard gaze-based PIN input, and as a result a solution to this problem is presented: shuffling the numbers on the PIN pad, a method found to be more secure while maintaining accuracy and speed.

Andrew Holland, Tony Morelli
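
The abstract does not specify the shuffle policy (per entry session or per digit), but the core countermeasure can be sketched in a few lines; a cryptographically strong shuffle is assumed here so an observer cannot predict the layout:

```python
import secrets

def shuffled_keypad():
    """Return the digits 0-9 in a random order for one PIN-entry screen.
    Because the layout changes, an observer who reconstructs where the
    user looked cannot map those positions back to digits."""
    digits = list("0123456789")
    # Fisher-Yates shuffle driven by a CSPRNG
    for i in range(len(digits) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        digits[i], digits[j] = digits[j], digits[i]
    return digits  # render these onto the pad's cells in this order
```

Reshuffling after every accepted digit would further prevent an observer from learning anything across the four entries, at some cost in entry speed.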
VR Evaluation of Motion Sickness Solution in Automated Driving

The sensory conflict theory describes the occurrence of motion sickness caused by a discrepancy between the motion felt and the motion visually perceived. During driving, drivers monitor the environment while performing driving tasks, which gives them visual perception of the motion they feel. Visual cues help drivers anticipate the direction of movement and thus eliminate confusion, which could otherwise lead to anxiety and motion sickness. Occupants of highly automated vehicles will have the luxury of performing activities such as reading or interacting with their mobile devices while the system performs the driving tasks. However, if a passenger takes their eyes off the surrounding traffic environment, sensory conflict is likely to occur. We implemented a concept in virtual reality to prevent motion sickness during automated driving based on split-screen technology: part of the screen shows a real-time video capture of the car’s surroundings, while the other part is free to be used for individual applications. This additional imagery provides visual cues that make it possible to monitor the vehicle’s direction of movement, minimizing sensory conflict and preventing motion sickness. An experiment was conducted with fourteen participants on a virtual reality automated driving simulator with an integrated motion platform. The results show that streaming video of the horizon to passengers on a display helps them feel comfortable and reduces motion sickness during automated driving.

Quinate Chioma Ihemedu-Steinke, Prashanth Halady, Gerrit Meixner, Michael Weber
Enactive Steering of an Experiential Model of the Atmosphere

We present a stream of research on Experiential Complex Systems which aims to incorporate responsive, experiential media systems, i.e. interactive, multimodal media environments capable of responding to sensed activity at perceptual rates, into the toolbox of computational science practitioners. Drawing on enactivist, embodied approaches to design, we suggest that these responsive, experiential media systems, driven by models of complex system dynamics, can help provide an experiential, enactive mode of scientific computing in the form of perceptually instantaneous, seamless iterations of hypothesis generation and immersive gestural shaping of dense simulations when used together with existing high performance computing implementations and analytical tools. As a first study of such a system, we present EMA, an Experiential Model of the Atmosphere, a responsive media environment that uses immersive projection, spatialized audio, and infrared-filtered optical sensing to allow participants to interactively steer a computational model of cloud physics, exploring the necessary conditions for different atmospheric processes and phenomena through the movement and presence of their bodies and objects in the lab space.

Brandon Mechtley, Christopher Roberts, Julian Stein, Benjamin Nandin, Xin Wei Sha
Reconstruction by Low Cost Software Based on Photogrammetry as a Reverse Engineering Process

Among the various types of scanning available on the market for performing a three-dimensional reconstruction, one alternative stands out due to its low cost and ease of use, making it suitable for a great number of applications: Image-Based 3D Modeling and Rendering (IB3DMR), in which a three-dimensional model is generated from a set of 2D photographs. Among the existing commercial applications based on IB3DMR, this paper selected the Autodesk ReCap software, which is free and offers great features in terms of simplicity of operation, automation of the reconstruction process, and the possibility of exporting to other, more complex applications. The use of this type of photogrammetry-based technology is an alternative to conventional reverse engineering processes, so a study with seven pieces differing in colour, geometry, and texture has been performed for its assessment, obtaining three-dimensional reconstructions with very satisfactory results.

Dolores Parras, Francisco Cavas-Martínez, José Nieto, Francisco J. F. Cañavate, Daniel García Fernández-Pacheco
Simulation Sickness Evaluation While Using a Fully Autonomous Car in a Head Mounted Display Virtual Environment

Simulation sickness is a condition of physiological discomfort felt during or after exposure to a virtual environment. A virtual environment can be accessed through a head mounted display which provides the user with an entrance to the virtual world. The onset of simulation sickness is a main disadvantage of virtual reality (VR) systems. The proof-of-concept presented in this paper aims to provide new insights into development and evaluation of a VR driving simulation based on consumer electronics devices and a 3 Degrees-of-Freedom (3 DOF) motion platform. A small sample (n = 9) driving simulator pre-study with within-subjects design was conducted to explore simulation sickness outbreak, sense of presence and physiological responses induced by autonomous driving in a dynamic and static driving simulation. The preliminary findings show that users experienced no substantial simulation sickness while using an autonomous car when the VR simulation included a motion platform. This study is the basis for more extensive research in the future. Future studies will include more participants and investigate more factors that contribute to or mitigate the effects of simulation sickness.

Stanislava Rangelova, Daniel Decker, Marc Eckel, Elisabeth Andre
Visualizing Software Architectures in Virtual Reality with an Island Metaphor

Software architecture is abstract and intangible. Tools for visualizing software architecture can help in comprehending the implemented architecture, but they need an effective and feasible visual metaphor that maps all relevant aspects of a software architecture and fits all types of software. We focus on the visualization of module-based software—such as OSGi, which underlies many large software systems—in virtual reality, since this offers much higher comprehension potential compared to classical 3D visualizations. In particular, we present an approach for visualizing OSGi-based software architectures in virtual reality based on an island metaphor. The software modules are visualized as islands on a water surface. The island system is displayed in the confines of a virtual table where users can explore the software visualization on multiple levels of granularity by performing intuitive navigational tasks. Our approach allows users to get a first overview of the complexity of the software system by interactively exploring its modules as well as the dependencies between them.

Andreas Schreiber, Martin Misiak
Interaction in Virtual Environments - How to Control the Environment by Using VR-Glasses in the Most Immersive Way

Virtual Reality (VR) is the new way to give users a new experience not only in the gaming industry – in engineering and production plant operation, we also see first attempts at finding innovative ways of visualizing data and training plant staff. This is necessary because processes are becoming more and more complex owing to higher interconnection and flexibility. This paper presents current possibilities for interacting with a virtual environment and provides three concepts for immersive interaction. We also show the results of an evaluation of these concepts at the end of the paper.

Barbara Streppel, Dorothea Pantförder, Birgit Vogel-Heuser
Sensor Data Fusion Framework to Improve Holographic Object Registration Accuracy for a Shared Augmented Reality Mission Planning Scenario

Accurate 3D holographic object registration for a shared augmented reality application is a challenging proposition with the Microsoft HoloLens. We investigated a sensor data fusion framework that uses sensor data from both an external positional tracking system and the Microsoft HoloLens to reduce augmented reality registration errors. In our setup, positional tracking data from the OptiTrack motion capture system was used to improve the registration of the 3D holographic object in a shared augmented reality application running on three Microsoft HoloLens displays. We showed improved, more accurate 3D holographic object registration in our application compared to a shared augmented reality application using the HoloToolkit Sharing Service released by Microsoft. A comparative study of the two applications also showed participants’ responses to be consistent with our initial assessment of the improved registration accuracy achieved with our sensor data fusion framework. Using the framework, we developed a shared augmented reality application to support a mission planning scenario using multiple holographic displays to illustrate details of the mission.

Simon Su, Vincent Perry, Qiang Guan, Andrew Durkee, Alexis R. Neigel, Sue Kase
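
The abstract does not detail the fusion mathematics, but any such framework needs an alignment between the external tracker's coordinate frame and each headset's frame. A minimal sketch of that step, using the standard Kabsch least-squares fit over corresponding points (the calibration procedure and point sources are assumptions):

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping points A
    (e.g., tracked markers in the motion-capture frame) onto the same
    points B expressed in a headset's frame. A, B: (N, 3), N >= 3."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# A hologram anchored at point p in the tracker frame is then rendered
# at R @ p + t in each headset's frame, so all displays agree on where
# the shared 3D object sits.
```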
Using Body Movements for Running in Realistic 3D Map

We developed a running support system that uses body movements as input in a realistic 3D map. Users control movement in the 3D map with their body movements. We used a depth-camera sensor to track the user’s body joints as the user moves. The location changes are tracked in real time, and the system calculates the speed, which is used as the speed control in our system. This means that when users run in real life, they also feel as if they are running naturally in the system; the experience feels realistic because the in-system speed changes with the user’s speed. A realistic 3D map from Zenrin is used in our system. The map includes detailed features such as traffic signs and train stations, as well as natural features such as weather, lighting, and shadows, helping the users feel immersed and have fun.

In our system, we use a head-mounted display to enhance the user’s experience. Users wear it while running and can see the realistic 3D map environment. We aim to provide an immersive experience so users feel as if they are visiting the real place, keeping their motivation high. Integrating a realistic 3D map into the system enriches the human experience, keeps the user motivated, and provides a natural environment similar to the real world. We hope our system can assist users as one possible alternative for running in a virtual environment.

Adhi Yudana Svarajati, Jiro Tanaka
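
The speed-control step described above is straightforward to sketch: differentiate a tracked joint's position between depth-camera frames and keep the horizontal component. Joint choice, axes, and smoothing are assumptions here:

```python
import numpy as np

def running_speed(prev_pos, curr_pos, dt):
    """Estimate running speed (m/s) from two successive depth-camera
    readings of a stable body joint (e.g., the spine base), dt seconds
    apart. The vertical (y) component is dropped so the bounce of the
    running gait does not inflate the estimate."""
    v = (np.asarray(curr_pos) - np.asarray(prev_pos)) / dt
    return float(np.hypot(v[0], v[2]))  # horizontal magnitude (x, z)

# Averaging the result over a short window of frames suppresses sensor
# jitter before the value is fed to the camera's forward velocity.
```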
VRowser: A Virtual Reality Parallel Web Browser

In this paper, we propose VRowser, a virtual reality (VR) web browser that utilizes various visualization and interaction methods to support the webpage content comparison, allocation, grouping, and retrieval that are essential to parallel web browsing. VRowser’s main design objective is to embody a VR parallel web browsing environment that maintains familiar web browsing metaphors and leverages virtual reality interaction and visualization capabilities to support parallel web browsing tasks. Thus, our approach rests on the following design factors: (1) immersive VR; (2) maintaining the document-based web browsing metaphor within the VR environment; (3) segregated interaction methods for 3D tasks and web tasks. We present our prototype specifications, followed by an evaluation. Our user study gauged participants’ impressions of VRowser, as well as their webpage placement strategies within two different VR environments. The results indicate that users relied heavily on environmental landmarks, such as trees or furniture, to facilitate placement and retrieval of webpages, while the locomotion method developed in our prototype proved inefficient for quickly travelling from one location to another. Lastly, we present our conclusions and the future development direction of our work.

Shuma Toyama, Mohammed Al Sada, Tatsuo Nakajima
Construction of Experimental System SPIDAR-HS for Designing VR Guidelines Based on Physiological Behavior Measurement

VR technology is still under development, and so far mainly the performance and functions of VR devices have been studied. There are few studies of characteristics as subjective and difficult to evaluate as immersion in VR, and the methods and indicators for evaluating this characteristic are still unclear. Evaluating it quantitatively should greatly advance the technological development needed to enhance the immersive feeling of virtual spaces.

Therefore, the final goal of this research is to clarify the relationship between sensory displays (force, visual, and auditory) and the Sense of Agency (SoA) in VR environments through physiological behavior measurement. Toward that goal, the aim of this paper is to establish a human-scale VR environment in which the SoA can be evaluated.

We conducted physiological behavior measurements on two tasks: the “ball-catching task,” which consists of dropping a ball weighing 220 g from a height of 80 cm and catching it, and the “rod-tracking task,” which consists of moving a rod so as not to touch the walls of a sinusoidal path. The ball-catching task made it possible to evaluate strong force sensations, while slight force sensations were evaluated with the rod-tracking task.

Ryuki Tsukikawa, Ryoto Tomita, Kanata Nozawa, Issei Ohashi, Hiroki Horiuchi, Kentaro Kotani, Daiji Kobayashi, Takehiko Yamaguchi, Makoto Sato, Sakae Yamamoto, Tetsuya Harada
Augmented, Mixed, and Virtual Reality Enabling of Robot Deixis

When humans interact with each other, they often make use of deictic gestures such as pointing to help pick out targets of interest to their conversation. In the field of Human-Robot Interaction, research has repeatedly demonstrated the utility of enabling robots to use such gestures as well. Recent work in augmented, mixed, and virtual reality stands to enable enormous advances in robot deixis, both by allowing robots to gesture in ways that were not previously feasible, and by enabling gesture on robotic platforms and environmental contexts in which gesture was not previously feasible. In this paper, we summarize our own recent work on using augmented, mixed, and virtual-reality techniques to advance the state-of-the-art of robot-generated deixis.

Tom Williams, Nhan Tran, Josh Rands, Neil T. Dantam

Embodiment, Communication and Collaboration in VAMR

Frontmatter
Is This Person Real? Avatar Stylization and Its Influence on Human Perception in a Counseling Training Environment

This paper describes a pilot study planned by the Defense Equal Opportunity Employment Management Institute (DEOMI). By leveraging previous work in maturing a low-cost real-time puppeted character in a virtual environment, the team is seeking to explore the role stylization plays in how participants perceive emotions and connect with an avatar emotionally within a training atmosphere. The paper also describes future work in exploring how biases might be exposed when interacting with puppeted virtual characters.

Jennie Ablanedo, Elaine Fairchild, Tami Griffith, Christopher Rodeheffer
Virtually Empathetic?: Examining the Effects of Virtual Reality Storytelling on Empathy

Virtual reality is gaining attention as a new storytelling tool due to its ability to transport users into alternative realities. The current study investigated whether VR storytelling was a viable intervention for inducing a state of empathy. A short documentary about a prison inmate’s solitary confinement experiences, After Solitary, was shown to two groups of participants. One group watched the documentary on a commercial VR headset (Oculus Rift) and the other group on a desktop computer via a YouTube 360° video. Results indicated the two groups did not differ in their state empathy levels and in their sense of presence levels. This suggests that watching the documentary in VR was not substantially different from watching it on YouTube with respect to the extent to which an individual empathizes with the emotional experience of another person.

EunSeo Bang, Caglar Yildirim
The Role of Psychophysiological Measures as Implicit Communication Within Mixed-Initiative Teams

There has been considerable effort, particularly in the military, at integrating automated agents into human teams. Currently, automated agents lack the ability to intelligently adapt to a dynamic operational environment, which results in them acting as tools rather than teammates. Rapidly advancing technology is enabling the development of autonomous agents that are able to actively make team-oriented decisions, meaning truly intelligent autonomous agents are on the horizon. This makes understanding what is important to team performance a critical goal. In human teams, mission success depends on the development of shared mental models and situation awareness. Development of these constructs requires good intra-team communication. However, establishing effective intra-team communication in a mixed-initiative team represents a current bottleneck in achieving successful teams. Significant research has aimed at identifying modes of communication that can be used by both human and agent teammates, but it often neglects a source of communication, or information, for the agent teammate that has been adopted by the human-robot community to increase robot acceptance: psychophysiological features supplied to the agent, which can then use algorithms to infer the cognitive state of the human teammate. The utility of using psychophysiological features for communication within teams has not been widely explored, representing a knowledge gap in developing mixed-initiative teams. To address this gap, we designed an experimental paradigm that created an integrated human-automation team in which psychophysiological data were collected and analyzed in real time. We briefly present a general background on human-automation teaming before presenting our research and preliminary analysis.

Kim Drnec, Greg Gremillion, Daniel Donavanik, Jonroy D. Canady, Corey Atwater, Evan Carter, Ben A. Haynes, Amar R. Marathe, Jason S. Metcalfe
Extending Embodied Interactions in Mixed Reality Environments

The recent advances in mixed reality (MR) technologies provide a great opportunity to support the deployment and use of MR applications for training and education. Users can interact with virtual objects, which can help them be more engaged and acquire more information compared to more traditional approaches. MR devices, such as the Microsoft HoloLens, use spatial mapping to place virtual objects in the surrounding space and support embodied interaction with those objects. However, some applications may require an extended range of embodied interactions beyond the capabilities of the MR device, for instance interacting with virtual objects using the arms, legs, and body in much the same way we interact with physical objects. We describe an approach to extend the functionality of the Microsoft HoloLens to support an extended range of embodied interactions in an MR space by using the Microsoft Kinect V2 sensor. Based on that approach, we developed a system that maps the captured skeletal data from the Kinect device to the HoloLens device coordinate system. We measured the overall delay of the developed system to evaluate its effect on application responsiveness. The described system is currently being used in the development of a HoloLens application for nurse aide certification in the Commonwealth of Virginia.

Mohamed Handosa, Hendrik Schulze, Denis Gračanin, Matthew Tucker, Mark Manuel
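
The mapping step described above, transforming Kinect skeletal joints into the HoloLens coordinate system, amounts to applying a calibrated homogeneous transform per joint. A minimal sketch under that assumption (how T is calibrated, and the handedness handling, are not specified in the abstract):

```python
import numpy as np

def to_hololens_frame(joints_kinect, T):
    """Map Kinect V2 joint positions (N, 3) from the Kinect camera
    frame into the HoloLens world frame using a 4x4 homogeneous
    transform T obtained from a one-time calibration."""
    n = joints_kinect.shape[0]
    homog = np.hstack([joints_kinect, np.ones((n, 1))])   # (N, 4)
    return (T @ homog.T).T[:, :3]

# Note: Kinect camera space is right-handed while Unity/HoloLens uses a
# left-handed convention, so the calibration typically folds an axis
# flip into T along with the rotation and translation.
```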
Interaction of Distant and Local Users in a Collaborative Virtual Environment

Virtual Reality enables a new form of collaboration: it allows users to work together in the same virtual room regardless of their actual physical location. However, it is unclear what effect the physical location of the user has on task performance, the feeling of presence, or immersion. We compared the collaboration of two users in the same local room and in remote rooms on the basis of a knowledge-transfer task. An instructor indicated different virtual objects using three different pointing gestures, and a trainee selected the highlighted object. The results of a 28-participant user study show that the performance of the gestures in the local and remote setups is equal according to NASA-TLX scores, rankings, and time. Users feel equally co-present and tend to prefer remote collaboration. The data presented in this paper show that VR collaboration in a virtual room is independent of the physical location of the participants. This allows the development of VR applications without special consideration of the users’ locations. VR systems can exploit the advantages of remote collaboration, like faster reaction times, no travel expenses, and no user collisions, or of local collaboration, e.g. direct contact between users.

Adrian H. Hoppe, Roland Reeb, Florian van de Camp, Rainer Stiefelhagen
Bidirectional Communication for Effective Human-Agent Teaming

The recent proliferation of artificial intelligence research is reaching a point where machines are able to learn and adapt in order to dynamically make decisions independently or in collaboration with human team members. With such technological advancements on the horizon, there will come a mandate to develop techniques to deploy effective human-agent teams. One key challenge to the development of effective teaming has been enabling a shared, dynamic understanding of the mission space and a basic knowledge of the states and intents of other teammates. Bidirectional communication is an approach that fosters communication between human and intelligent agents to improve mutual understanding and enable effective task coordination. This session focuses on current research and scientific gaps in three areas necessary to advance the field of bidirectional communication between human and intelligent-agent team members. First, intelligent agents must be capable of understanding the state and intent of the human team member. Second, human team members must be capable of understanding the capabilities and intent of the intelligent agent. Finally, in order for the entire system to work, systems must effectively integrate information from, and coordinate behaviors across, all team members. The combination of these three areas will enable future human-agent teams to develop a shared understanding of the environment as well as a mutual understanding of each other, thereby enabling truly collaborative human-agent teams.

Amar R. Marathe, Kristin E. Schaefer, Arthur W. Evans, Jason S. Metcalfe
PaolaChat: A Virtual Agent with Naturalistic Breathing

For embodied conversational agents (ECAs), the relationship between gesture and rapport is an open question. To learn whether adding breathing behaviors to an agent similar to SimSensei would lead users to perceive the agent as more natural, we built an application, called Paola Chat, in which the ECA can display naturalistic breathing animations. Our study had two phases. In the first phase, we determined the most natural amplitude for the agent’s breathing. In the second phase, we assessed the effect of breathing on the users’ perceptions of rapport and naturalness. The study had a within-subjects design, with breathing/not-breathing as the independent variable. Despite our expectation that increased naturalness from breathing would lead users to report greater rapport in the breathing condition than in the not-breathing condition, the study’s results suggest that the animation of breathing neither increases nor decreases these perceptions.

David Novick, Mahdokht Afravi, Adriana Camacho
Quantifying Human Decision-Making: Implications for Bidirectional Communication in Human-Robot Teams

A goal for future robotic technologies is to advance autonomy capabilities for independent and collaborative decision-making with human team members during complex operations. However, if human behavior does not match the robots’ models or expectations, there can be a degradation in trust that can impede team performance and may only be mitigated through explicit communication. The effectiveness of the team is therefore contingent on the accuracy of its models of human behavior, which can be informed by the transparent bidirectional communication needed to develop common ground and a shared understanding. In this work, we specifically characterize human decision-making, especially its variability, with the eventual goal of incorporating this model within a bidirectional communication system. Thirty participants completed an online game in which they controlled a human avatar through a 14 × 14 grid room in order to move boxes to their target locations. Each level of the game increased in environmental complexity through the number of boxes. Two trials were completed to compare path planning under known versus unknown information. Path analysis techniques were used to quantify human decision-making and to draw implications for bidirectional communication.

Kristin E. Schaefer, Brandon S. Perelman, Ralph W. Brewer, Julia L. Wright, Nicholas Roy, Derya Aksaray

Education, Training and Simulation

Frontmatter
A Maximum Likelihood Method for Estimating Performance in a Rapid Serial Visual Presentation Target-Detection Task

In human-agent teams, communications are frequently limited by how quickly the human component can deliver information to the computer-based agents. Treating the human as a sensor can help relax this limitation. As an instance of this, the rapid serial visual presentation target-detection paradigm provides a fast lane for human target-detection information; however, estimating target-detection performance can be challenging when the inter-stimulus interval is short, relative to human response time variability. This difficulty stems from the uncertainty in assigning each response to the correct stimulus image. We developed a maximum likelihood method to estimate the hit rate and false alarm rate that generally outperforms classic heuristic-based approaches and our previously developed regression-based method. Simulations show that this new method provides unbiased and accurate estimates of target-detection performance across a range of true hit rate and false alarm rate values. In light of the improved estimation of hit rates and false alarm rates, this maximum likelihood method would seem the best choice for estimating human target-detection performance.

Jonroy D. Canady, Amar R. Marathe, David H. Herman, Benjamin T. Files
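
The abstract above names the estimation problem but not the likelihood itself. As a greatly simplified, illustrative sketch: if each stimulus is scored by whether any response lands in an assumed response-time window after its onset, the maximum-likelihood estimates of the Bernoulli hit and false-alarm parameters are just the observed proportions. The paper's contribution is handling the case this sketch ignores, where short inter-stimulus intervals make the response-to-stimulus assignment ambiguous.

```python
import numpy as np

def estimate_rates(stim_times, is_target, resp_times, lo=0.2, hi=1.0):
    """Simplified hit-rate / false-alarm-rate estimation for an RSVP
    stream. A stimulus counts as 'responded to' if any response falls
    lo..hi seconds after its onset (window bounds are assumptions).
    With non-overlapping windows, the proportions below are the
    maximum-likelihood estimates of the Bernoulli response rates."""
    stim_times = np.asarray(stim_times, dtype=float)
    resp_times = np.asarray(resp_times, dtype=float)
    is_target = np.asarray(is_target, dtype=bool)
    responded = np.array([
        np.any((resp_times > s + lo) & (resp_times < s + hi))
        for s in stim_times
    ])
    hit_rate = responded[is_target].mean()
    false_alarm_rate = responded[~is_target].mean()
    return hit_rate, false_alarm_rate
```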
Virtual Reality Training to Enhance Motor Skills

The use of Virtual Reality (VR) and Augmented Reality (AR) as a healing aid is a relatively new concept in the field of rehabilitation and training. Clinicians now have access to virtual worlds and games in which they can immerse their patients in interactive scenarios that would not have been possible in previous years.

Studies have shown effective results when VR/AR is incorporated into rehabilitation and training therapy practice. Mobility-limited individuals can move freely within an open-world virtual environment using virtual reality platforms like the HTC VIVE and Oculus Rift. The recent widespread consumer availability of VR/AR platforms has made it possible for clinicians to incorporate fully immersive technology into their treatment regimens. Research methods range from the implementation of consumer video game systems to custom-developed hardware and software that enhance training. Clinicians are utilizing VR/AR platforms to better engage their patients, and in doing so they are improving the effectiveness of training. Researchers have seen the implementation of these new tools improve the psychological effects of phantom limb syndrome and improve motor skills for those with multiple sclerosis, cerebral palsy, and other mobility-debilitating conditions.

This research surveys the past, present, and future of applications and research in VR/AR game experiences that aid training and rehabilitation, exploring the current state of research and the documented effectiveness of using games to heal. The benefits and potential further uses of emerging technologies within the healthcare field will guide the implementation of VR/AR applications to aid in training children to use 3D-printed prosthetic limbs in their everyday lives. In collaboration with Limbitless Solutions at the University of Central Florida, researchers have been engaged in discussions with young recipients of 3D-printed prosthetic limbs. The researchers discuss their findings and plans for how VR/AR game experiences can provide solutions for enhancing the day-to-day routines of young prosthetic users.

Matt Dombrowski, Ryan Buyssens, Peter A. Smith
Examination of Effectiveness of a Performed Procedural Task Using Low-Cost Peripheral Devices in VR

The paper presents a Virtual Reality (VR) training system dedicated to an interactive course focused on the acquisition of competences in the field of manual procedural tasks. It was developed in response to the growing market demand for low-cost VR systems supporting industrial training. A scenario for the implementation of an elementary manual operation (a modified peg-in-hole task) was developed. The aim of the test was to show whether the prepared solution (along with its peripheral devices) can be an effective tool for training activities performed at the production site. The procedural task was performed by test groups using various peripheral devices. The paper presents preliminary results of tests evaluating the effectiveness of virtual training depending on the specific peripheral devices used.

Damian Grajewski, Paweł Buń, Filip Górski
Study on the Quality of Experience Evaluation Metrics for Astronaut Virtual Training System

With the development of virtual reality (VR) technology, it has become possible to train astronauts using VR. To make such a system more efficient, it is necessary to study the quality of experience (QoE) of astronauts in the virtual environment (VE). Based on the characteristics of the virtual training system and the needs of astronaut training, a set of metrics consisting of five higher-level metrics and fifteen lower-level metrics is put forward for evaluating the QoE of the system. In addition, the weight of each higher-level metric is obtained using the analytic hierarchy process (AHP) method. The results of this paper can be used directly in the quantitative QoE evaluation of astronaut virtual training systems.

Xiangjie Kong, Yuqing Liu, Ming An
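
The AHP weighting step mentioned above is standard: experts fill in a reciprocal pairwise-comparison matrix on Saaty's 1-9 scale, and the weights are the normalized principal eigenvector. A minimal sketch (the actual judgment matrix for the five higher-level metrics is not given in the abstract):

```python
import numpy as np

def ahp_weights(pairwise):
    """Compute AHP weights as the normalized principal eigenvector of a
    reciprocal pairwise-comparison matrix, plus the consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)       # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
    return w, ci / ri                        # CR below ~0.1 is acceptable

# Example: for five higher-level metrics, A[i][j] encodes how much more
# important metric i is judged to be than metric j, with A[j][i] = 1/A[i][j].
```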
Developing and Training Multi-gestural Prosthetic Arms

Learning to use prosthetic limbs is challenging for children. One solution is to design engaging training games that teach children how to use their new limbs without boring or fatiguing them. The interdisciplinary design team, comprising digital media faculty, researchers, engineers, and health professionals, strives to create innovative solutions. The training program will give recipients the opportunity to become proficient with their prosthetics, ensuring successful long-term use of these limbs. This collaboration among physical trainers, engineers, and psychologists will create fun, kid-friendly training solutions that follow sound physical training guidance.

Albert Manero, John Sparkman, Matt Dombrowski, Ryan Buyssens, Peter A. Smith
Virtual Reality Based Space Operations – A Study of ESA’s Potential for VR Based Training and Simulation

This paper presents the results of a study the authors conducted over the course of a year to identify key issues in ESA’s (European Space Agency) potential deployment of Virtual Reality training environments within space operations. Typically, ESA simulates several operations using DES-like systems that need to be linked to a VR environment for training purposes. Based on the second generation of VR equipment and development tools, the paper describes a holistic design approach, from scenario development through design decisions on software and hardware choices to the final development of a proof of concept (PoC) for a virtual lunar base that simulates the metabolism of a lunar base. The idea was to mirror the mass and energy flows within a lunar base in order to maintain an environment in which astronauts can live and work, and to establish a tool that supports the training of astronauts in operating such a lunar base, the likely next step of human space exploration beyond the International Space Station as identified by ESA’s decision makers. In the end, we realized a PoC for a fire emergency case on a lunar base, allowing astronauts to be trained in a fully simulated and integrated environment. The system was tested and evaluated in two set-ups: first using classical VR controllers, and second using recent VR glove technology.

Manuel Olbrich, Holger Graf, Jens Keil, Rüdiger Gad, Steffen Bamfaste, Frank Nicolini
Guiding or Exploring? Finding the Right Way to Teach Students Structural Analysis with Augmented Reality

The paper reports on the design of an augmented reality (AR) application for structural analysis education. Structural analysis is a significant course in every civil engineering program. The course focuses on load and stress distributions in buildings, bridges, and other structures. Students learn about graphical and mathematical models that embody structures and how to utilize those models to determine the safety of a structure. An often-reported obstacle is the missing link between these graphical models and a real building: students often do not see the connection, which hinders them from utilizing the models correctly. We designed an AR application that superimposes graphical widgets of structural elements onto real buildings to help students establish this link. The focus of this study is on application design, especially the question of whether students prefer an application that guides them when solving an engineering problem or whether they prefer to explore. Students were asked to solve a problem with the application, which either instructed them step-by-step or allowed them to use all features on their own (exploring). The results are inconclusive but tend to favor the explore mode.

Rafael Radkowski, Aliye Karabulut-Ilgu, Yelda Turkan, Amir Behzadan, An Chen
Assembly Training: Comparing the Effects of Head-Mounted Displays and Face-to-Face Training

Due to the increasing complexity of assembly tasks at manual workplaces, intensive training of new employees is absolutely essential to ensure high process and product quality. Interactive assistive systems are becoming more and more important as they can support workers during manual procedural tasks. New assistive technologies such as Augmented Reality (AR) are being introduced to the industrial domain, especially in the automotive industry. AR allows for enriching our real world with additional virtual information. We are observing a trend toward using head-mounted displays (HMDs) to support new employees during assembly training tasks. This technology claims to improve the efficiency and quality of assembly and maintenance tasks, but so far HMDs have not been scientifically compared against face-to-face training. In this paper, we aim to close this gap by comparing HMD instructions to face-to-face training using a real-life engine assembly task. We executed a training session with a total of 36 participants. Results showed that trainees who performed the assembly training with HMD support made 10% fewer picking mistakes and 5% fewer assembly mistakes and caused 60% less rework, but were significantly slower compared to face-to-face training. We further rated user satisfaction using the System Usability Scale (SUS) questionnaire. Results indicated an average SUS of 73.5, which means ‘good’. These and further findings are presented in this paper.

Stefan Werrlich, Carolin Lorber, Phuc-Anh Nguyen, Carlos Emilio Franco Yanez, Gunther Notni
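
The SUS figure quoted above comes from the standard scoring procedure, which is fixed and easy to reproduce: odd items are positively worded, even items negatively worded, and the adjusted sum is scaled to 0-100. A sketch:

```python
def sus_score(answers):
    """Standard System Usability Scale scoring for one respondent's ten
    answers (each 1-5). Odd items contribute (answer - 1), even items
    contribute (5 - answer); the total is scaled by 2.5 to 0-100."""
    assert len(answers) == 10
    total = sum((a - 1) if i % 2 == 1 else (5 - a)
                for i, a in enumerate(answers, start=1))
    return total * 2.5

# e.g. sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]) == 75.0
```

Averaging per-respondent scores gives the study-level value; 73.5 falls in the range commonly labeled “good” on published adjective-rating scales for SUS.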
Backmatter
Metadata
Title
Virtual, Augmented and Mixed Reality: Interaction, Navigation, Visualization, Embodiment, and Simulation
Editors
Jessie Y.C. Chen
Gino Fragomeni
Copyright Year
2018
Electronic ISBN
978-3-319-91581-4
Print ISBN
978-3-319-91580-7
DOI
https://doi.org/10.1007/978-3-319-91581-4