
2020 | Book

Virtual, Augmented and Mixed Reality. Design and Interaction

12th International Conference, VAMR 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings, Part I


About this Book

The two-volume set LNCS 12190 and 12191 constitutes the refereed proceedings of the 12th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2020, which was due to be held in July 2020 as part of HCI International 2020 in Copenhagen, Denmark. The conference was held virtually due to the COVID-19 pandemic.
A total of 1439 papers and 238 posters were accepted for publication in the HCII 2020 proceedings from a total of 6326 submissions.
The 71 papers included in these VAMR 2020 proceedings were organized in topical sections as follows:
Part I: design and user experience in VAMR; gestures and haptic interaction in VAMR; cognitive, psychological and health aspects in VAMR; robots in VAMR.
Part II: VAMR for training, guidance and assistance in industry and business; learning, narrative, storytelling and cultural applications of VAMR; VAMR for health, well-being and medicine.

Table of Contents

Frontmatter
Correction to: A Robotic Augmented Reality Virtual Window for Law Enforcement Operations

The original version of this chapter was revised: the acknowledgement had been inadvertently omitted and has now been added.

Nate Phillips, Brady Kruse, Farzana Alam Khan, J. Edward Swan II, Cindy L. Bethel

Design and User Experience in VAMR

Frontmatter
Guerilla Evaluation of Truck HMI with VR

HMI development requires user-centered testing of HMI prototypes in realistic scenarios. Typical problems are the acquisition of a representative user group and the setup of realistic scenarios. The paper describes the method of Guerilla interviews at truck stops along highways in order to approach truck drivers as participants in a user-centered HMI development process. A truck steering wheel with cluster display mockup from earlier on-site interviews was compared to a new virtual reality (VR) truck cockpit simulator. The Guerilla method proved its value as a fast and efficient approach for involving truck drivers in HMI evaluation, as long as mobile HMI prototypes are available. As limitations of the Guerilla method, we found the time limits imposed by the truck drivers' strict break times and the limited control over the participant sample. Regarding the HMI prototype comparison, we conclude that the steering wheel mockup still requires less effort and cost for testing, whereas the VR simulator offers a better context representation for the function and hence better external validity of the results. The slightly longer test duration with the VR simulator is mainly attributed to the time needed to introduce the new technology to participants. The outlook describes how the mobile VR simulator is being further developed to overcome its limitations.

Frederik Diederichs, Friedrich Niehaus, Lena Hees
A Mixed-Reality Shop System Using Spatial Recognition to Provide Responsive Store Layout

The environment is very important for consumers' shopping experience. In-store characteristics such as store layout, decoration, music, and store employees are key to that experience. However, current online shop systems lack environment information and in-store characteristics. In this paper, we designed a new mixed-reality (MR) shop system that mixes virtual in-store characteristics with the real environment, which may be a promising direction for future online shop systems. Technically, we developed a new spatial understanding algorithm and layout mechanism to support responsive spatial layout. In our system, store designers only need to design once, and our system can turn any place into a mixed-reality shop. We invited participants to test the usability and efficiency of our system and obtained positive feedback in a preliminary user study.

Hao Dou, Jiro Tanaka
Mixed Mock-up – Development of an Interactive Augmented Reality System for Assembly Planning

Virtual assembly simulations are used in industry to save costs in early stages of product development. In previous work, the so-called mixed mock-up was developed to support the assembly planning of an automotive supplier. The mixed mock-up extends cardboard engineering with Augmented Reality (AR) visualizations: virtual components are assembled at the physical assembly mock-up. The evaluation of the first mixed mock-up prototype revealed various deficits in the technical implementation and the user interaction. This paper describes the further development of the existing system with the aim of making the simulation more realistic and increasing the practical suitability of the application. Based on a generic system description, the system has been improved in its individual components. A new pair of AR glasses is used to extend the field of view. In addition, innovative force-feedback gloves are used to support user interaction. The gloves enhance handling and ensure a natural interaction with the system and the virtual components. The effort for preparing the simulation data and configuring the assembly scene is reduced by a semi-automatic data processing step. This makes it easy to exchange CAD data and enables productive use of the system. An evaluation by experts of the industry partner resulted in thoroughly positive feedback. The mixed mock-up application has become more realistic and intuitive overall.

Florian Dyck, Jörg Stöcklein, Daniel Eckertz, Roman Dumitrescu
Interactive AR Models in Participation Processes

It is becoming increasingly important to enable stakeholders from different backgrounds to collaborate efficiently on joint projects. Physical models provide a better understanding of spatial relationships, while video mapping of suitable visualizations enables a meaningful enrichment of information. We therefore developed a demonstrator that uses a physical architectural model as a base and projects additional data onto it via video mapping. In this paper, we describe the initial situation and the requirements for the development of our demonstrator, its construction, the software developed for this purpose, including the calibration process, as well as the implementation of tangible interaction as a means to control data and visualizations. In addition, we describe the whole user interface and lessons learned. Ultimately, we present a platform that encourages discussions and can enrich participation processes.

Jonas Hansert, Mathias Trefzger, Thomas Schlegel
Calibration of Diverse Tracking Systems to Enable Local Collaborative Mixed Reality Applications

Mixed reality (MR) devices offer advantages for a wide range of applications, e.g. simulation, communication, or training. Local multi-user applications allow users to engage with virtual worlds collaboratively while being in the same physical room. A shared coordinate system is necessary for this local collaboration. However, current mixed reality platforms do not offer a standardized way to calibrate multiple devices, and not all systems provide the hardware required by available algorithms, either because it is absent or not accessible to developers. We propose an algorithm that calibrates two devices using only their tracking data; more devices can be calibrated through repetition. Two MR devices are held together and moved around the room. Our trajectory-based algorithm provides reliable and precise results when compared to SfM- or marker-based algorithms. The accurate yet easy-to-use rotational calibration gesture can be executed effortlessly in a small space. The proposed method enables local multi-user collaboration for all six-degrees-of-freedom (DOF) MR devices.

Adrian H. Hoppe, Leon Kaucher, Florian van de Camp, Rainer Stiefelhagen
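
As a rough illustration of the trajectory-based idea described above (a sketch, not the authors' algorithm), the rigid transform between the coordinate frames of two co-moved MR devices can be estimated from time-synchronized position samples with the Kabsch method; the function below assumes the two trajectories are already paired sample by sample and uses only NumPy.

    import numpy as np

    def align_trajectories(points_a, points_b):
        """Estimate the rigid transform (R, t) mapping frame A into frame B
        from time-synchronized position samples of two co-moved devices.

        points_a, points_b: (N, 3) arrays of corresponding positions.
        Returns R (3x3 rotation) and t (3,) translation such that
        points_b is approximately points_a @ R.T + t.
        """
        centroid_a = points_a.mean(axis=0)
        centroid_b = points_b.mean(axis=0)
        a_centered = points_a - centroid_a
        b_centered = points_b - centroid_b

        # Kabsch: SVD of the 3x3 cross-covariance matrix of the centered trajectories
        h = a_centered.T @ b_centered
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = centroid_b - r @ centroid_a
        return r, t

Applying the returned rotation and translation to one device's coordinates expresses them in the other device's frame, giving the shared coordinate system the abstract calls for.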
Contrast and Parameter Research of Augmented Reality Indoor Navigation Scheme

We are committed to using various indoor and outdoor images, 3D objects, and static scenes as recognition objects to build an augmented reality world. This paper focuses on the research and application of indoor augmented reality navigation. Indoor navigation has a variety of technical solutions, such as Wi-Fi-based and indoor-sensor-based approaches. Augmented reality, as one of them, has the advantages of requiring no additional hardware to be deployed in advance, offering six degrees of freedom, and providing high precision. By analyzing the development of augmented reality indoor navigation and the underlying technology, we summarize and implement three solutions: map based (MB), point-cloud based (PCB), and image based (IB). We first conducted a controlled experiment and compared these schemes using flow theory and the experimental data. At the same time, we collected feedback and suggestions during the experiment, carried out a second experiment on individual components of augmented reality navigation (such as the path and points of interest), and obtained the corresponding quantitative data.

Wen-jun Hou, Lixing Tang
Study on User-Centered Usability Elements of User Interface Designs in an Augmented Reality Environment

In order to complete augmented reality (AR) user interface (UI) design simply and quickly, usability factors were studied in this work. The main focus of interface design is to increase usability, and various factors should be considered together when evaluating it. An ideal usability model is usually user-centered, with the aim of perceiving the interests of users and easily completing targets. In order to cover all types of usability factors, a literature survey was conducted and a total of 85 factors were collected. To adapt these factors to augmented reality, their definitions were revised: we extracted the items that are adaptable and user-centered, combined or deleted items with the same meaning, and finally selected 25 usability evaluation factors. Human-computer interaction professionals were set as the target group, and the related data were collected by heuristic evaluation. We were able to systematize the usability factors by principal component analysis, observe the correlations between them, and classify those with high correlation.

Un Kim, Yaxi Wang, Wenhao Yuan
Research on a Washout Algorithm for 2-DOF Motion Platforms

This paper proposes a new washout algorithm optimized from those designed for 6-DOF Stewart platforms using the Human Vestibular Based Strategy. Its actual effect, from the perspective of user experience, was verified via a within-subject design experiment using a 2-DOF platform with limited rotation angles. The result showed that compared with the common 2-DOF algorithms using the Linear Shrinkage Ratio Strategy, our algorithm provided users with better immersion, presence and overall satisfaction, though no significant difference was found regarding simulator sickness. This positive result demonstrated the potential of the algorithm proposed here and may help to enhance the user experience of various kinds of motion platforms.

Zhejun Liu, Qin Guo, Zhifeng Jin, Guodong Yu
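
For readers unfamiliar with washout filtering, the sketch below shows a generic tilt-coordination channel for a rotation-only platform: sustained longitudinal acceleration is low-pass filtered and reproduced as a slowly applied pitch angle so that gravity mimics the inertial force. The cutoff frequency, tilt limit, and rate limit are illustrative assumptions, and this is textbook-style classical washout rather than the vestibular-based algorithm the paper proposes.

    import numpy as np

    def tilt_coordination(acc_long, dt, g=9.81, cutoff_hz=0.3,
                          max_tilt=np.radians(12), max_rate=np.radians(3)):
        """Map longitudinal vehicle acceleration to pitch commands for a
        rotation-only motion platform via low-pass filtering (tilt coordination).

        acc_long: sequence of longitudinal accelerations in m/s^2, sampled every dt seconds.
        Returns an array of pitch angles in radians, rate- and range-limited so the
        platform rotation stays slow and within its mechanical travel.
        """
        alpha = dt / (dt + 1.0 / (2.0 * np.pi * cutoff_hz))   # first-order low-pass coefficient
        pitch = np.zeros(len(acc_long))
        low_passed = 0.0
        angle = 0.0
        for i, a in enumerate(acc_long):
            low_passed += alpha * (a - low_passed)             # sustained acceleration component
            target = np.arcsin(np.clip(low_passed / g, -1.0, 1.0))  # tilt whose gravity component mimics it
            target = np.clip(target, -max_tilt, max_tilt)      # respect the platform's travel
            step = np.clip(target - angle, -max_rate * dt, max_rate * dt)  # limit rotation rate
            angle += step
            pitch[i] = angle
        return pitch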
Usability of the Virtual Agent Interaction Framework

The Virtual Agent Interaction Framework (VAIF) is an authoring tool for creating virtual-reality applications with embodied conversational agents. VAIF is intended for use by both expert and non-expert users, in contrast with more sophisticated and complex development tools such as the Virtual Human Toolkit. To determine if VAIF is actually usable by a range of users, we conducted a two-phase summative usability test, with a total of 43 participants. We also tried porting to VAIF a scene from an earlier VR application. The results of the usability study suggest that people with little or even no experience in creating embodied conversational agents can install VAIF and build interaction scenes from scratch, with relatively low rates of encountering problem episodes. However, the usability testing disclosed aspects of VAIF and its user’s guide that could be improved to reduce the number of problem episodes that users encounter.

David Novick, Mahdokht Afravi, Oliver Martinez, Aaron Rodriguez, Laura J. Hinojos
Towards a Predictive Framework for AR Receptivity

Given the sometimes disparate findings and the increasing application of AR in both training and operations, as well as increased affordability and availability, it is important for researchers, user interface and user experience (UI/UX) designers, and AR technology developers to understand the factors that impact the utility of AR. To increase the potential for realizing the full benefit of AR, adequately detailing the interrelated factors that drive outcomes of different AR usage schemes is imperative. A systematic approach to understanding influential factors, parameters, and the nature of their influence on performance provides the foundation for developing AR usage protocols and design principles, which currently are few. Toward this end, this work presents a theoretical model of factors impacting performance with AR systems. The framework of factors, including task, human, and environmental factors, formalizes the concept of "AR Receptivity", which aims to characterize the degree to which a given AR use case is receptive to the technology's design and capabilities. The discussion begins with a brief overview of research efforts laying the foundation for the model's development and moves to a review of receptivity as a concept of technology suitability. This work provides details on the model and factor components, concluding with implications for the application of AR in both training and operational settings.

Jennifer M. Riley, Jesse D. Flint, Darren P. Wilson, Cali M. Fidopiastis, Kay M. Stanney
Arms and Hands Segmentation for Egocentric Perspective Based on PSPNet and Deeplab

First-person videos and games are the central paradigm of camera positioning when using Head Mounted Displays (HMDs). In these situations, the user's hands and arms play a fundamental role in the feeling of self-presence and in the interface. While rendering them is trivial with Augmented Reality devices or with depth cameras attached to the HMD, it is not trivial with regular HMDs, such as those based on smartphone devices. This work proposes the use of semantic image segmentation with Fully Convolutional Networks to separate the user's hands and arms from a raw image captured by regular cameras positioned in a first-person visual schema. We first create a training dataset composed of 4041 images and a validation dataset composed of 322 images, both labeled with arm and no-arm pixels and focused on the egocentric view. Then, based on two important semantic segmentation architectures - PSPNet and Deeplab - we propose a specific calibration for the particular scenario of hands and arms captured from an HMD perspective. Our results show that PSPNet produces better detail segmentation while Deeplab achieves the best inference-time performance. Training with our egocentric dataset generates better arm segmentation than using images from different and more general perspectives.

Heverton Sarah, Esteban Clua, Cristina Nader Vasconcelos
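
The authors fine-tune PSPNet and Deeplab on their own egocentric dataset; the snippet below only sketches what a Deeplab-style inference pass looks like with an off-the-shelf torchvision model, using the pretrained person class (index 15) as a stand-in for their arm/no-arm labels. The frame file name is hypothetical.

    import torch
    import torchvision.transforms as T
    from torchvision.models.segmentation import deeplabv3_resnet50
    from PIL import Image

    # Generic DeepLabv3 inference (torchvision >= 0.13); the paper's models are
    # fine-tuned on an egocentric arm dataset instead of the COCO/VOC classes.
    model = deeplabv3_resnet50(weights="DEFAULT").eval()

    preprocess = T.Compose([
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    image = Image.open("egocentric_frame.jpg").convert("RGB")  # hypothetical HMD frame
    batch = preprocess(image).unsqueeze(0)

    with torch.no_grad():
        logits = model(batch)["out"]        # (1, 21, H, W) per-class scores
    arm_mask = logits.argmax(dim=1)[0] == 15  # boolean mask; "person" class stands in for "arm"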
Virtual Scenarios for Pedestrian Research: A Matter of Complexity?

Virtual reality (VR) has become a popular tool to investigate pedestrian behavior. Many researchers, however, tend to simplify traffic scenarios to maximize experimental control at the expense of ecological validity. Multiple repetitions, facilitated by the brief durations of widespread crossing tasks, further add to predictability and likely reduce the need for cognitive processing. Considering the complexity inherent to naturalistic traffic, such simplification may result in biases that compromise the transferability and meaningfulness of empirical results. In the present work, we outline how human information processing might be affected by differences between common experimental designs and naturalistic traffic. Aiming at an optimal balance between experimental control and realistic demands, we discuss measures to counteract predictability, monotony, and repetitiveness. In line with this framework, we conducted a simulator study to investigate the influence of variations in the behavior of surrounding traffic. Although the observed effects seem negligible, we encourage the evaluation of further parameters that may affect results based on scenario design, rather than discussing methodological limitations only in terms of simulator fidelity.

Sonja Schneider, Guojin Li
Comparative Study Design of Multiple Coordinated Views for 2D Large High-Resolution Display with 3D Visualization Using Mixed Reality Technology

We present the design of our qualitative assessment of user interaction and data exploration using our hybrid 2D and 3D visual analytics application, with the 2D visual analytics application running on a Large High-Resolution Display (LHRD) and the 3D visual analytics application running on mixed reality immersive displays. The application used for the study visualizes our Monte Carlo simulation over time, showing topological, geospatial, and temporal aspects of the data in multiple views. We assessed attitudinal responses on the usefulness of visual analytics using 2D visualization on the LHRD, and compared that with visual analytics using 2D visualization on the LHRD combined with 3D visualization on a mixed reality display. We first perform a usability test, where participants complete two exploratory tasks: one, identifying corresponding assets in a visualization, and two, identifying patterns/relationships between particular assets. Participants perform the same tasks using three different system configurations: 2D visualization on the LHRD; 2D and 3D visualization together but as separate applications; and 2D visualization on the LHRD with 3D visualization on the Microsoft HoloLens with multiple coordinated views across the two systems. A pilot study was conducted on the experimental design regarding the relative effectiveness of the different setups for accomplishing the given tasks. We further discuss how the results of the pilot study confirm current system design decisions, and also discuss additional user-centric characteristics that must be considered to inform future design decisions.

Simon Su, Vincent Perry
Study on Assessing User Experience of Augmented Reality Applications

With the development of augmented reality technology and the popularisation of smartphones, augmented reality applications based on mobile devices show promising prospects. The current development of mobile augmented reality is mainly technology-oriented, mostly emphasising technological advancement as the basis of measurement while placing insufficient emphasis on user experience. User-centric design is increasingly important in the design of mobile applications. As it is crucial to quantify and evaluate the user experience of AR applications to gain insight into pivotal areas for future development, this research proposes that the Delphi-AHP method is capable of identifying those areas via five first-level indicators and 20 second-level indicators. The method is tested and verified with six model display applications, revealing that the most important first-level indicators affecting user experience are a system's functionality and its display.

Lei Wang, Meiyu Lv
How Interaction Paradigms Affect User Experience and Perceived Interactivity in Virtual Reality Environment

Interactivity is one of the major features of virtual reality (VR) compared with traditional digital devices. Many virtual reality devices and applications provide more than one input instrument for users to experience VR content. This study compares user behavior and perceived interactivity across three interaction paradigms to investigate the influence of interaction paradigms on user experience in virtual environments. An experiment with 36 participants was conducted to measure three factors of user experience and three factors of perceived interactivity. An ANOVA test was conducted, and the results show that interaction paradigms have a significant influence on users' total interaction frequency, the playfulness of interactivity, and the controllability of interactivity. Results did not show a significant difference in total experience time between groups, indicating that how long users spent experiencing VR was not significantly affected by the type of interaction paradigm they used. This study has theoretical and practical implications for designing and developing virtual reality user experiences.

Duo Wang, Xiwei Wang, Qingxiao Zheng, Bingxin Tao, Guomeng Zheng
MRCAT: In Situ Prototyping of Interactive AR Environments

Augmented reality (AR) blends physical and virtual components to create a mixed reality experience. This unique display medium presents new opportunities for application design, as applications can move beyond the desktop and integrate with the physical environment. In order to build effective applications for AR displays, we need to be able to iteratively design for different contexts or scenarios. We present MRCAT (Mixed Reality Content Authoring Toolkit), a tool for in situ prototyping of mixed reality environments. We discuss the initial design of MRCAT and iteration after a study (N = 14) to evaluate users' abilities to craft AR applications with MRCAT and with a 2D prototyping tool. We contextualize our system in a case study of museum exhibit development, identifying how existing ideation and prototyping workflows could be bolstered with the approach offered by MRCAT. With our exploration of in situ prototyping, we enumerate key aspects both of AR application design and targeted domains that help guide design of more effective AR prototyping tools.

Matt Whitlock, Jake Mitchell, Nick Pfeufer, Brad Arnot, Ryan Craig, Bryce Wilson, Brian Chung, Danielle Albers Szafir
Augmented Reality for City Planning

We present an early study designed to analyze how city planning and the health of senior citizens can benefit from the use of augmented reality (AR) with the assistance of virtual reality (VR), using Microsoft's HoloLens and HTC's Vive headsets. We also explore whether AR and VR can be used to help city planners receive real-time feedback from citizens, such as the elderly, on virtual plans, allowing informed decisions to be made before any construction begins. In doing so, city planners can more clearly understand what design features would motivate senior citizens to visit or exercise in future parks, for example. The study was conducted with 10 participants aged 60 years and older who live within 2 miles of the site. They were presented with multiple virtual options for a prospective park, such as different walls for cancelling highway noise, as well as benches, lampposts, bathroom pods, walking and biking lanes, and other street furniture. The headsets allowed the participants to clearly visualize the options and make choices about them. Throughout the study the participants were enthusiastic about using the AR and VR devices, which is noteworthy for a future where city planning is done with these technologies.

Adam Sinclair Williams, Catherine Angelini, Mathew Kress, Edgar Ramos Vieira, Newton D’Souza, Naphtali D. Rishe, Joseph Medina, Ebru Özer, Francisco Ortega

Gestures and Haptic Interaction in VAMR

Frontmatter
Assessing the Role of Virtual Reality with Passive Haptics in Music Conductor Education: A Pilot Study

This paper presents a novel virtual reality system that offers immersive experiences for instrumental music conductor training. The system utilizes passive haptics that bring physical objects of interest, namely the baton and the music stand, into a virtual concert hall environment. Real-time object and finger tracking allow users to behave naturally on a virtual stage without significant deviation from the typical performance routine of instrumental music conductors. The proposed system was tested in a pilot study (n = 13) that assessed the role of passive haptics in virtual reality by comparing our proposed "smart baton" with a traditional virtual reality controller. Our findings indicate that the use of passive haptics increases the perceived level of realism and that their virtual appearance affects the perception of their physical characteristics.

Angelos Barmpoutis, Randi Faris, Luis Garcia, Luis Gruber, Jingyao Li, Fray Peralta, Menghan Zhang
FingerTac – A Wearable Tactile Thimble for Mobile Haptic Augmented Reality Applications

FingerTac is a novel concept for a wearable augmented haptic thimble. It makes use of the limited spatial discrimination of vibrotactile stimuli at the skin and generates tactile feedback perceived at the bottom center of a fingertip by applying simultaneous vibrations at both sides of the finger. Since the bottom of the finger is thus kept free of obstruction, the device is promising for augmented haptic applications, where real-world interactions need to be enriched or amalgamated with virtual tactile feedback. To minimize its lateral dimension, the vibration actuators are placed on top of the device, and mechanical links transmit the vibrations to the skin. Two evaluation studies with N=10 participants investigate (i) the loss of vibration intensity through these mechanical links, and (ii) the effect of lateral displacement between stimulus and induced vibration. The results of both studies support the introduced FingerTac concept.

Thomas Hulin, Michael Rothammer, Isabel Tannert, Suraj Subramanyam Giri, Benedikt Pleintinger, Harsimran Singh, Bernhard Weber, Christian Ott
WikNectVR: A Gesture-Based Approach for Interacting in Virtual Reality Based on WikNect and Gestural Writing

In recent years, the usability of interfaces in the field of Virtual Realities (VR) has massively improved, so that theories and applications of multimodal data processing can now be tested more extensively. In this paper we present an extension of VAnnotatoR, which is a VR-based open hypermedia system that is used for annotating, visualizing and interacting with multimodal data. We extend VAnnotatoR by a module for gestural writing that uses data gloves as an interface for VR. Our extension addresses the application scenario of WikiNect, a museum information system, and its gesture palette for realizing gestural writing. To this end, we implement and evaluate seven gestures. The paper describes the training and recognition of these gestures and their use within the framework of a user-centered evaluation system for virtual museums as exemplified by WikiNect.

Vincent Kühn, Giuseppe Abrami, Alexander Mehler
An Empirical Evaluation on Arm Fatigue in Free Hand Interaction and Guidelines for Designing Natural User Interfaces in VR

This research presents a systematic study of arm fatigue in free-hand interaction in VR environments and explores how arm fatigue influences free-hand interaction accuracy. A specifically designed target-acquisition experiment was conducted with 24 volunteer participants (7 left-handed, 17 right-handed). The experiment results indicated that (1) arm fatigue resulted in short durations of hand operation or frequent alternations of the operating hand, and the user's dominant hand sustained operation longer than the non-dominant one; (2) hand operating position had a significant effect on arm fatigue level, and a bent-arm posture was found to be less fatiguing than an extended-arm posture; (3) hand operation at a higher position (e.g., at head height) induced perceived arm fatigue more readily than at a lower position (e.g., at waist height); and (4) arm fatigue negatively impacted hand interaction accuracy.

Xiaolong Lou, Xiangdong Li, Preben Hansen, Zhipeng Feng
Design and Validation of a Unity-Based Simulation to Investigate Gesture Based Control of Semi-autonomous Vehicles

The objective of this investigation is to explore the use of hand gestures to control semi-autonomous vehicles. This was achieved through the use of simulations built in Unity and real-life demonstrations. Screen-spaced simulations modeled the control of a recreational quadcopter, while Virtual Reality simulations followed by actual demonstrations used a small ground vehicle. The purpose of the actual demonstrations was to validate observations and lessons learned about vehicle control and human performance from the simulations. The investigative process involved identifying natural gestures to control basic functions of a vehicle, matching them to the selected gesture capture technology, developing algorithms to interpret those gestures for vehicle control, and arranging the appropriate visual environment using the Unity game engine to investigate preferred use of those gestures. Observations and participant feedback aided in refining the gesture algorithms for vehicle control and visual information, and indicated that the simulations provided suitable learning experiences and environments from which to assess human performance. Results indicate that the gesture-based approach holds promise given the availability of new technology.

Brian Sanders, Yuzhong Shen, Dennis Vincenzi
Hand Gesture Recognition for Smartphone-Based Augmented Reality Applications

Hand Gesture Recognition (HGR) is a principal input method in head-mounted Augmented Reality (AR) systems such as the HoloLens, but the high cost and limited availability of such systems prevent HGR from becoming more prevalent. Alternatively, smartphones can be used to provide AR experiences, but current smartphones were not designed with HGR in mind, making the development of HGR applications more challenging. This study develops a software-based framework that implements HGR as a principal input method for smartphone AR applications. This framework assumes a contemporary smartphone with dual back-facing cameras, which enable stereo imaging and thus allow extraction of limited depth information from the environment. Several image processing techniques, derived and improved from previous work, were used to filter the noisy depth information to segment the user's hand from the rest of the environment, and then to extract the pose of the hand and fingers in real time. The framework additionally facilitates the development of cross-platform AR applications for both head-mounted (HoloLens) and smartphone configurations. A user experiment was conducted to determine whether a smartphone-based AR application developed using our HGR framework is comparable in usability to the same application on the HoloLens. For each device, participants were asked to use the application and fill out a usability questionnaire; they were also asked to compare the two systems at the end. This experiment shows that, despite the current limitations of smartphone-based HGR, the smartphone system's usability is competitive with that of the HoloLens. The study ends with recommendations for future development.

Eric Cesar E. Vidal Jr., Ma. Mercedes T. Rodrigo
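
As a hedged sketch of the depth-based segmentation step described above (the paper's actual pipeline is more elaborate), a pair of rectified frames from the two back-facing cameras could be turned into a rough hand mask by block-matching disparity, near-depth thresholding, and morphological cleanup; the file names and the disparity threshold are assumptions.

    import cv2
    import numpy as np

    # Hypothetical rectified frames; a real pipeline would grab synchronized
    # frames from the two back-facing cameras and rectify them first.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching disparity as a cheap stand-in for the depth-extraction step.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # sub-pixel units

    # In an egocentric view the hand is the closest object, so large disparities
    # (near depths) are kept and the noisy mask is cleaned with morphology.
    near_mask = (disparity > 20).astype(np.uint8) * 255   # threshold is an assumption
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    hand_mask = cv2.morphologyEx(near_mask, cv2.MORPH_OPEN, kernel)
    hand_mask = cv2.morphologyEx(hand_mask, cv2.MORPH_CLOSE, kernel)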
User-Centric AR Sceneized Gesture Interaction Design

With the rapid development of AR technology, the interaction between humans and computers has become increasingly complex and frequent. However, many interactive technologies in AR do not yet have a mature interaction mode and face great challenges in terms of design and technical implementation; in particular, AR gesture interaction methods have not yet been established, and there is no universal gesture vocabulary in currently developed AR applications. Identifying appropriate gestures for mid-air interaction is an important design decision based on criteria such as ease of learning, metaphors, memorability, subjective fatigue, and effort [1]. Gestures must be designed and confirmed in the early stages of system development, and they strongly affect both the development process of each mid-air application project and the user experience (UX) of the intended users [2]. Using a user-centric, user-defined role-playing method, this paper sets up a suitable car simulation scenario that allows users to define, according to their own habits and cultural backgrounds, gestures matching the 3D-space information exchange system in an AR design environment, with particular focus on the demanding gestures of the tour-guide situation, and proposes a mental model of gesture preference.

Xin-Li Wei, Rui Xi, Wen-jun Hou

Cognitive, Psychological and Health Aspects in VAMR

Frontmatter
Towards the Specification of an Integrated Measurement Model for Evaluating VR Cybersickness in Real Time

Cybersickness (CS) is an affliction that limits the use of virtual reality (VR) applications. For decades, the measurement of cybersickness has presented one of the greatest challenges and has aroused the interest of the VR research community. Having strong effects on users' health, cybersickness causes several symptoms relating to different factors. In most cases, studies in the literature on VR cybersickness evaluation adopt questionnaire-based approaches; some studies have focused on physiological and postural-instability-based approaches, while others address the VR content. Despite the attention paid to defining measurements for assessing cybersickness, there is still a need for a more complete evaluation model that allows measuring cybersickness in real time. This paper defines a conceptual model that integrates subjective and objective evaluation of CS in real time. The proposed model considers three CS factors (i.e. individual, software, and hardware). The aim is to consider the heterogeneous findings (subjective and objective measures) related to the selected CS factors in order to define integrated indicators. The theoretical part of the model was initially validated by researchers who have comprehensive knowledge and skills in the VR domain. As a research perspective, we intend to evaluate the proposed model through a practical case study.

Ahlem Assila, Taisa Guidini Gonçalves, Amira Dhouib, David Baudry, Vincent Havard
Cognitive Workload Monitoring in Virtual Reality Based Rescue Missions with Drones

The use of drones in search and rescue (SAR) missions can be very cognitively demanding. Since high levels of cognitive workload can negatively affect human performance, there is a risk of compromising the mission and leading to failure with catastrophic outcomes. Therefore, cognitive workload monitoring is key to preventing rescuers from taking dangerous decisions. Due to the difficulties of gathering data during real SAR missions, we rely on virtual reality. In this work, we use a simulator to induce three levels of cognitive workload related to SAR missions with drones. To detect cognitive workload, we extract features from different physiological signals, such as electrocardiogram, respiration, skin temperature, and photoplethysmography. We propose a recursive feature elimination method that combines an eXtreme Gradient Boosting (XGBoost) algorithm with the SHapley Additive exPlanations (SHAP) score to select the most representative features. Moreover, we address both binary and three-class detection approaches. To this aim, we investigate the use of different machine-learning algorithms, such as XGBoost, random forest, decision tree, k-nearest neighbors, logistic regression, linear discriminant analysis, Gaussian naïve Bayes, and support vector machine. Our results show that, on an unseen test set extracted from 24 volunteers, an XGBoost model with 24 features reaches accuracies of 80.2% and 62.9% when detecting two and three levels of cognitive workload, respectively. Finally, our results open the door to fine-grained cognitive workload detection in the field of SAR missions.

Fabio Dell’Agnola, Niloofar Momeni, Adriana Arza, David Atienza
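
The recursive feature elimination idea that combines XGBoost with SHAP scores could look roughly like the sketch below, which repeatedly trains a classifier, ranks features by mean absolute SHAP value, and drops the weakest one. The hyperparameters and the stopping point of 24 features are illustrative assumptions, not the authors' exact procedure.

    import numpy as np
    import shap
    from xgboost import XGBClassifier

    def shap_rfe(X, y, min_features=24):
        """Recursive feature elimination driven by mean |SHAP| values.

        X: (n_samples, n_features) array of physiological features, y: workload labels.
        Iteratively drops the least important feature until min_features remain.
        """
        features = list(range(X.shape[1]))
        while len(features) > min_features:
            model = XGBClassifier(n_estimators=200, max_depth=4)   # illustrative settings
            model.fit(X[:, features], y)

            explainer = shap.TreeExplainer(model)
            sv = explainer.shap_values(X[:, features])
            if isinstance(sv, list):              # older SHAP multiclass output: one array per class
                sv = np.stack(sv, axis=-1)        # -> (n_samples, n_features, n_classes)
            sv = np.abs(sv)
            importance = sv.mean(axis=0)          # average over samples
            if importance.ndim > 1:
                importance = importance.mean(axis=-1)   # and over classes, if present

            features.pop(int(np.argmin(importance)))    # drop the least informative feature
        return features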
Negative Effects Associated with HMDs in Augmented and Virtual Reality

Head mounted displays (HMDs) are becoming ubiquitous. Simulator sickness has been an issue since the first simulators and HMDs were created. As computational power and display capabilities increase, so does their utilization in technologies such as HMDs. However, this does not mean that the issues that once plagued these systems are now obsolete; in fact, evidence suggests that these issues have become more prevalent. Whether the system is Augmented Reality (AR), Virtual Reality (VR), or Mixed Reality (MR), the issues associated with simulator sickness or cybersickness have become more widespread. The reasons are uncertain, but probably multiple. One possible reason is vection, the illusion of self-motion when no physical movement is occurring. Vection plays a vital role in immersion and presence; however, it is also integral to simulator sickness. Another potential reason is the availability of HMDs: traditionally a tool used in military training or laboratory settings, HMDs have now become a consumer item. This work reviews the current state of HMD issues such as simulator sickness and cybersickness. It reviews the similarities and differences of the sickness states that are commonly found with HMDs. Also, terms such as presence and immersion are delineated so that they are used appropriately. The current theories on simulator sickness and cybersickness are reviewed, as are the measurement and mitigation strategies currently being employed to reduce sickness. Lastly, suggestions for more accurate measurement are recommended.

Charles R. Descheneaux, Lauren Reinerman-Jones, Jason Moss, David Krum, Irwin Hudson
Mixed Mock-up Meets ErgoCAM: Feasibility Study for Prospective Ergonomic Evaluation of Manual Assembly Processes in Real-Time Using Augmented Reality and Markerless Posture Analysis

The sustainable planning of production systems and processes requires a high degree of flexibility in the processes, both in terms of the production methods used and scalability, in order to be able to react to changing requirements at short notice. These requirements often hamper companies' efforts to guarantee health-promoting working conditions and systems, as the workload involved in a correspondingly short-cycle analysis and evaluation of stressors would be too great. As a result, the individual needs of employees, such as adapting the workplace to suit personal constitutional traits, are often not taken into account sufficiently. In order to address this problem, two prototypical systems for digitally supported workplace design are tested in this paper. The focus lies on a mixed mock-up demonstrator, which combines classic mock-up planning with augmented reality technology and enables the generation of individualized manual assembly workstations. This demonstrator is used and examined in conjunction with ErgoCAM, a prototype of a markerless system for evaluating individual postures in real time. The aim is to ascertain which benefits the combination of the systems offers with regard to the flexible and early ergonomic evaluation of assembly workstations in practice. Specifically, in a feasibility study, it is examined to what extent a reliable ergonomic evaluation of assembly operations is possible by combining the two prototypes.

Tobias Dreesbach, Alexander Mertens, Tobias Hellig, Matthias Pretzlaff, Verena Nitsch, Christopher Brandl
Fake People, Real Effects
The Presence of Virtual Onlookers Can Impair Performance and Learning

Can effects of social influence be elicited in virtual contexts, and if so, under which conditions can they be observed? Answering these questions has theoretical merit, as the answers can help broaden our understanding of the interaction mechanisms described by social psychology. The increasing popularity of immersive media in training applications, however, has also made these questions practically significant. Virtual reality (VR), in particular, is a weapon of choice in designing training and education simulations, as it can be used to generate highly realistic characters and environments. As a consequence, it is key to understand under which circumstances virtual 'others' can facilitate or impede performance and, especially, learning. In this study, we investigated the impact of virtual onlookers on an adapted Serial Reaction Time (SRT) task presented in VR. In each trial, participants responded to a series of spherical stimuli by tapping them with handheld controllers when they lit up. Depending on the experiment block, the sequence order was either a permutation of a fixed order (and therefore predictable given the first stimulus) or fully random (and therefore unpredictable). Participants were divided into three groups (audience variable), depending on the environment in which the task was set: a group without onlookers (none condition), a group with a computer-generated audience (CGI condition), and a group being watched by a prerecorded audience (filmed condition). Results showed that the presence of a virtual audience can hamper both overall performance and learning, particularly when the audience appears more realistic. This study further reinforces the notion that the effects of social influence are not limited to the physical presence of others, but extend to virtual audiences.

Wouter Durnez, Klaas Bombeke, Jamil Joundi, Aleksandra Zheleva, Emiel Cracco, Fran Copman, Marcel Brass, Jelle Saldien, Lieven De Marez
Investigating the Influence of Optical Stimuli on Human Decision Making in Dynamic VR-Environments

In this paper, we investigate the human decision-making process in a virtual environment. The main goal is to identify optical and behavioral-economic factors that influence intuitive decisions in Virtual Reality. We therefore place test persons in a virtual corridor with six visually and technically varying doors. The experimental task is to open one of the doors without overthinking or hesitating too long. This is repeated several times while randomizing and recombining optical features in every iteration. Our data show that most of the introduced determinants do have an impact on the user's decision, and we observe different intensities of impact depending on the factor. Color appears to be by far the most influential component, followed by the complexity of the door's opening process. In contrast, position, spotlights, and color brightness show only marginal correlation with the choices made by the user.

Stefanie Fröh, Manuel Heinzig, Robert Manthey, Christian Roschke, Rico Thomanek, Marc Ritter
A HMD-Based Virtual Display Environment with Adjustable Viewing Distance for Improving Task Performance

This manuscript builds an HMD-based virtual display environment and conducts an experiment on the relation between viewing distance to the display and task performance. The results imply that longer viewing distances could improve the speed, accuracy, and precision of mouse manipulation.

Makio Ishihara, Yukio Ishihara
Comparative Analysis of Mission Planning and Execution Times Between the Microsoft HoloLens and the Surface Touch Table

In this paper, we present the results of an investigation comparing two visualization technologies: the Microsoft HoloLens and the Microsoft Surface Touch Table. Two-person teams (dyads) played the role of a commander's staff tasked with planning the most efficient and safest mission route for a squad of soldiers to extract a repository of intelligence documents from the ruins of a building located in enemy territory. Quantitative and qualitative measures of performance were collected. We focused on two performance measures: total mission planning time and mission execution time (the planned route run in a simulated execution mode). Surprisingly, there was a significant decrease in planning time when using the Surface Touch Table: the dyads needed on average 86% more time to plan the mission using the HoloLens. Additionally, this increase in mission planning time associated with the HoloLens did not produce a more optimal mission solution. To understand the unexpected results, a content analysis of a preferred-visualization questionnaire is described. The analysis suggested that a more realistic scene invited unnecessary exploration instead of focused time on task, that becoming familiar with the HoloLens spilled over into task time, and that collaborative and communication difficulties stemmed from the HoloLens being designed as a single-user device.

Sue Kase, Vincent Perry, Heather Roy, Katherine Cox, Simon Su
Effect of Motion Cues on Simulator Sickness in a Flight Simulator

The objective of this study is to investigate the effect of sensory conflict on the occurrence and severity of simulator sickness in a flight simulator. According to sensory conflict theory, providing motion cues that match the visual cues should reduce the discrepancy between the sensory inputs and thus reduce simulator sickness. We tested the effect of motion cues through a human-subject experiment with a spherical-type motion platform. After completing a pre-experiment questionnaire, including the Motion Sickness Susceptibility Questionnaire (MSSQ) and the Immersive Tendency Questionnaire (ITQ), two groups of participants completed a flight simulation session with or without motion cues for 40 min. In the simulation session, participants were asked to fly through gates sequentially arranged along a figure-eight-shaped route. The Simulator Sickness Questionnaire (SSQ) was filled out after the exposure to compare the groups with and without motion cues. Physiological data, including electrodermal activity, heart rate, blood volume pressure, and wrist temperature, were also collected to examine their relationship with perceived simulator sickness. The results showed that simulator sickness and disorientation were significantly lower in the motion-based group; nausea and oculomotor scores were also marginally lower when motion cues were given. This study supports sensory conflict theory: providing proper motion cues corresponding to the visual flow could be considered a way to prevent simulator sickness.

Jiwon Kim, Jihong Hwang, Taezoon Park
Crew Workload Considerations in Using HUD Localizer Takeoff Guidance in Lieu of Currently Required Infrastructure

The purpose of this research was to examine the crew workload considerations of using a HUD with localizer guidance symbology in lieu of currently required infrastructure for lower-than-standard takeoff minima, within the larger conceptual framework of external (runway) and internal (flight deck) visual cues, HUD guidance symbology, and RVR visibility. To identify the differential contributions of these factors, three baseline conditions without HUD localizer guidance symbology and two conditions with HUD localizer takeoff guidance symbology were used. Currently, only about 30% of the CAT I runways in the NAS are equipped with CLL; therefore, the human factors considerations of using HUD localizer guidance in lieu of CLL in low visibility conditions were of principal interest. The results of this study have the potential to inform operational credit changes that would allow more reduced-visibility takeoffs and increase the number of viable airports available for takeoff under low visibility conditions. The research was conducted on a Boeing 737-800NG Level D simulator at the FAA Flight Technologies & Procedures Division facility in Oklahoma City, Oklahoma.

Daniela Kratchounova, Mark Humphreys, Larry Miller, Theodore Mofle, Inchul Choi, Blake L. Nesmith
Performance, Simulator Sickness, and Immersion of a Ball-Sorting Task in Virtual and Augmented Realities

Virtual Reality (VR) and Augmented Reality (AR) can be defined by the amount of virtual elements displayed to a human's senses: VR is completely synthetic and AR is partially synthetic. This paper compares VR and AR systems for variations of three ball-sorting task scenarios and evaluates both user performance and user reaction (i.e., simulator sickness and immersion). The VR system scored higher, with statistical significance, than the AR system in terms of effectiveness per scenario and completion rate of all scenarios. The VR system also scored significantly lower than the AR system in terms of percentage error and total false positives. In efficiency performance, the VR group spent less time in each scenario, had a lower total time duration, and achieved higher overall relative efficiency than the AR group. Although post-scenario simulator sickness did not differ significantly between VR and AR, the VR condition showed an increase in disorientation from pre- to post-scenario. Significant correlations between performance effectiveness and post-scenario simulator sickness were not found. Finally, the AR system scored significantly higher on the immersion item measuring the level of challenge the scenarios provided. AR interface issues are discussed as a potential factor in the performance decrement, and AR interface solutions are given. AR may be preferred over VR if disorientation is a concern. Study limitations include causality ambiguity and experimental control. Next steps include testing VR or AR systems exclusively, and testing whether the increased challenge from AR immersion is beneficial to educational applications.

Crystal Maraj, Jonathan Hurter, Sean Murphy

Robots in VAMR

Frontmatter
The Effects of Asset Degradation on Human Trust in Swarms

Human-swarm interaction (HSwI) research investigates interactions between human operators and robotic swarms. Swarms comprise assets, which operate as a unified group to complete goals like target foraging and shape configuration for asset movement optimization. Though the algorithmic specifications of swarm operations make them robust to individual asset loss, it is unknown how viewing asset degradations affects operator trust towards swarms. To investigate this relationship, modifications to an extant simulator of swarm foraging behaviors were implemented to portray functional asset degradation. Participants viewed recordings of swarms foraging, each comprising a randomized percentage of asset degradation. After each recording, participants rated their intentions to rely on the swarms in a target foraging task. Results showed an effect of differential asset loss on participants’ intentions to rely on swarms. Post hoc analyses showed that participants had greater intentions to rely on swarms in a future target foraging task when 5% and 15% of assets were degraded compared to 20% and 50%. Limitations and ideas for future research on trust in HSwI during target foraging tasks are discussed in detail.

August Capiola, Joseph Lyons, Izz Aldin Hamdan, Keitaro Nishimura, Katia Sycara, Michael Lewis, Michael Lee, Morgan Borders
Visual Reference of Ambiguous Objects for Augmented Reality-Powered Human-Robot Communication in a Shared Workspace

In shared workspaces, teammates working with a common set of objects must be able to unambiguously reference individual objects in order to collaborate effectively. When the teammates are autonomous robots, human teammates must be able to communicate their intended reference object without overtly interfering with their workflow. In human-robot interaction, the problem of visual reference is defined as identifying the specific object referred to by a human (e.g., through a pointing gesture recognized by an augmented reality device) and relating this object to the associated object in the robotic teammate's field of view, thereby identifying the intended object from a set of ambiguous objects. As human and robot teammates typically observe their shared workspace from differing perspectives, achieving visual reference of objects is a challenging yet crucial problem. In this paper, we present a novel approach to visual reference of ambiguous objects that introduces a graph-matching-based method fusing visual and spatial information of the objects in a shared workspace through augmented reality-powered human-robot communication. Our approach represents the objects in a scene with a graph in which edges encode the spatial relationships among objects and each node is associated with an attribute vector describing the object's appearance. We then formulate visual object reference for human-robot communication in a shared workspace as an optimization-based graph matching problem, which identifies the correspondence of nodes in the graphs built from the human and robot teammates' observations. We conduct an extensive experimental evaluation on two introduced datasets, showing that our approach obtains accurate visual references of ambiguous objects and outperforms existing visual reference methods.

Peng Gao, Brian Reily, Savannah Paul, Hao Zhang
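
A much simplified stand-in for the paper's graph-matching formulation is sketched below: the node cost compares appearance descriptors across the two viewpoints, a coarse edge term compares each object's pairwise-distance profile (assuming both teammates observe the same set of objects), and the assignment is solved with the Hungarian algorithm rather than the authors' optimization.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def match_objects(feat_human, pos_human, feat_robot, pos_robot, w_edge=0.5):
        """Match objects seen from the human's and the robot's viewpoints.

        feat_*: (N, d) appearance descriptors per object; pos_*: (N, 3) positions.
        Assumes both views contain the same N objects. Returns index pairs
        (human-view object, robot-view object).
        """
        # Node term: appearance similarity between candidate object pairs.
        node_cost = cdist(feat_human, feat_robot, metric="cosine")

        # Edge term: sorted inter-object distance profiles are invariant to the
        # rigid change of viewpoint, so similar profiles suggest the same object.
        d_h = np.sort(cdist(pos_human, pos_human), axis=1)
        d_r = np.sort(cdist(pos_robot, pos_robot), axis=1)
        edge_cost = cdist(d_h, d_r, metric="euclidean")

        row, col = linear_sum_assignment(node_cost + w_edge * edge_cost)
        return list(zip(row, col))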
Safety in a Human Robot Interactive: Application to Haptic Perception

Traditional haptic interfaces, such as the Virtuose 6DOF [1], are usually used by engineers in the design phases [2]. Such interfaces are safe, but while the user can apply a force/torque, they cannot really feel textures or appreciate material quality. These interfaces also have a limited workspace, low stiffness, and are very expensive. New haptic interfaces using an industrial robot or a cobot (a robot specially designed to work in human-robot environments) can serve as a haptic interface with intermittent contacts [3, 4]. In the application considered in this paper, the cobot carries several texture specimens on its end-effector to allow contact between a finger of the user and the robot. Safety is an important aspect of Human-Robot Interaction (HRI) [5], even with the use of cobots, because contacts are expected. The purpose of this paper is to introduce a new methodology for defining the basic placement of the robot in relation to the human body and for planning and controlling movements during HRI to ensure safety.

Vamsi Krishna Guda, Damien Chablat, Christine Chevallereau
Virtual Reality for Immersive Human Machine Teaming with Vehicles

We present developments in constructing a 3D environment and integrating a virtual reality headset into our Project Aquaticus platform. We designed Project Aquaticus to examine the interactions between human-robot teammate trust, cognitive load, and perceived robot intelligence levels while teams compete in games of capture the flag on the water. Further, this platform will allow us to study human learning of tactical judgment under a variety of robot capabilities. To enable human-machine teaming (HMT), we created a testbed where humans operate motorized kayaks while the robots are autonomous catamaran-style surface vehicles. MOOS-IvP provides autonomy for the robots. After receiving an order from a human, the autonomous teammates can perform tasks conducive to capturing the flag, such as defending or attacking a flag. In the Project Aquaticus simulation, the humans control their virtual vehicle with a joystick and communicate with their robots via radio. Our current simulation is not engaging or realistic for participants because it presents a top-down, omniscient view of the field. This fully observable representation of the world is well suited for managing operations from the shore and teaching new players game mechanics and strategies; however, it does not accurately reflect the limited and almost chaotic view of the world a participant experiences while in their motorized kayak on the water. We present the creation of a 3D visualization in Unity that users experience through a virtual reality headset. Such a system allows us to perform experiments without the need for a significant investment in on-water experiment resources, while also permitting us to gather data year-round through the cold winter months.

Michael Novitzky, Rob Semmens, Nicholas H. Franck, Christa M. Chewar, Christopher Korpela
A Robotic Augmented Reality Virtual Window for Law Enforcement Operations

In room-clearing tasks, SWAT team members suffer from a lack of initial environmental information: knowledge about what is in a room and what relevance or threat level it represents for mission parameters. Normally this gap in situation awareness is rectified only upon room entry, forcing SWAT team members to rely on quick responses and near-instinctual reactions. This can lead to dangerously escalating situations or missed important information, which, in turn, can increase the likelihood of injury and even mortality. Thus, we present an x-ray vision system for the dynamic scanning and display of room content, using a robotic platform to mitigate operator risk. This system maps a room using a robot-mounted stereo depth camera and, using an augmented reality (AR) system, presents the resulting geographic information according to the perspective of each officer. This intervention has the potential to notably lower risk and increase officer situation awareness, all while team members are in the relative safety of cover. With these potential stakes, it is important to test the viability of this system natively and in an operational SWAT team context.

Nate Phillips, Brady Kruse, Farzana Alam Khan, J. Edward Swan II, Cindy L. Bethel
Enabling Situational Awareness via Augmented Reality of Autonomous Robot-Based Environmental Change Detection

Accurately detecting changes in one's environment is an important ability for many application domains, but it can be challenging for humans. Autonomous robots can easily be made to detect metric changes in the environment, but, unlike for humans, understanding context can be challenging for robots. We present a novel system that uses an autonomous robot performing point cloud-based change detection to facilitate information-gathering tasks and provide enhanced situational awareness. The robotic system communicates detected changes via augmented reality to a human teammate for evaluation. We present results from a fielded system using two differently equipped robots to examine implementation questions of point cloud density and its effect on the visualization of changes. Our results show that there are trade-offs between implementations that we believe will be informative for similar systems in the future.

Christopher Reardon, Jason Gregory, Carlos Nieto-Granda, John G. Rogers
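
As a rough, generic illustration of point cloud-based change detection (the fielded system described above is considerably more sophisticated), points in a new, already-registered scan can be flagged as changes when their nearest neighbor in the reference scan is farther away than a threshold; the threshold value is an assumption.

    import numpy as np
    from scipy.spatial import cKDTree

    def detect_changes(reference_cloud, new_cloud, threshold=0.10):
        """Flag points in a newly captured point cloud that have no nearby
        counterpart in a reference cloud of the same, already aligned scene.

        reference_cloud, new_cloud: (N, 3) and (M, 3) arrays in a common frame.
        threshold: distance in meters beyond which a point counts as changed.
        Returns the subset of new_cloud points considered to be changes.
        """
        tree = cKDTree(reference_cloud)             # index the earlier scan
        distances, _ = tree.query(new_cloud, k=1)   # nearest reference point per new point
        return new_cloud[distances > threshold]     # points with no close match = changes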
Construction of Human-Robot Cooperation Assembly Simulation System Based on Augmented Reality

Human-Robot cooperation (HRC) is a developing trend in the field of industrial assembly. Design and evaluation of HRC assembly workstations that take the human factor into account is very important. In order to evaluate the transformation of a manual assembly workstation into an HRC workstation quickly and safely, an HRC assembly simulation system based on Augmented Reality (AR) with human-in-the-loop interaction is constructed. It enables a real operator to interact with a virtual robot in a real scene, and the assembly steps of real workers can be recorded and mapped to a virtual human model for further ergonomic analysis. Kinect and LeapMotion are used as the sensors for human-robot interaction decisions and feedback. An automobile gearbox assembly is taken as an example for verifying different assembly tasks; operators' data are collected and analyzed using RULA scores and NASA-TLX questionnaires. The results show that the simulation system can be used for the human factor evaluation of different HRC task configuration schemes.

Qiang Wang, Xiumin Fan, Mingyu Luo, Xuyue Yin, Wenmin Zhu
Using Augmented Reality to Better Study Human-Robot Interaction

In the field of Human-Robot Interaction, researchers often use techniques such as Wizard-of-Oz paradigms in order to better study narrow scientific questions while carefully controlling robot capabilities unrelated to those questions, especially when those other capabilities are not yet easy to automate. However, those techniques often impose limitations on the type of collaborative tasks that can be used and on the perceived realism of those tasks and the task context. In this paper, we discuss how Augmented Reality can be used to address these concerns while increasing researchers' level of experimental control, and we discuss both advantages and disadvantages of this approach.

Tom Williams, Leanne Hirshfield, Nhan Tran, Trevor Grant, Nicholas Woodward
Backmatter
Metadata
Title
Virtual, Augmented and Mixed Reality. Design and Interaction
Edited by
Jessie Y. C. Chen
Gino Fragomeni
Copyright Year
2020
Electronic ISBN
978-3-030-49695-1
Print ISBN
978-3-030-49694-4
DOI
https://doi.org/10.1007/978-3-030-49695-1