
2020 | Book

HCI International 2020 – Late Breaking Papers: Virtual and Augmented Reality

22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings


About this book

This book constitutes late breaking papers from the 22nd International Conference on Human-Computer Interaction, HCII 2020, which was held in July 2020. The conference was planned to take place in Copenhagen, Denmark, but had to change to a virtual conference mode due to the COVID-19 pandemic.

From a total of 6326 submissions, 1439 papers and 238 posters were accepted for publication in the HCII 2020 proceedings before the conference took place. In addition, 333 papers and 144 posters are included in the proceedings volumes published after the conference as "Late Breaking Work" (papers and posters). These contributions address the latest research and development efforts in the field and highlight the human aspects of the design and use of computing systems.

The 34 late breaking papers presented in this volume were organized in two topical sections named: Virtual, Augmented and Mixed Reality Design and Implementation; and User Experience in Virtual, Augmented and Mixed Reality.

Table of Contents

Frontmatter

Virtual, Augmented and Mixed Reality Design and Implementation

Frontmatter
Haptic Helmet for Emergency Responses in Virtual and Live Environments

Communication between team members in emergency situations is critical for first responders to ensure their safety and efficiency. In many cases, the thick smoke and noise in a burning building impair algorithms for navigational guidance. Here we present a helmet-based haptic interface with eccentric motors and communication channels. As part of the NIST PSCR Haptic Interfaces for Public Safety Challenge, our helmet, with a haptic interface embedded in the headband, enables communication with first responders through haptic signals about direction, measurements, and alerts. The haptic interface can be connected over LoRa for live communication or via USB to a VR simulation system. With our affordable, robust, and intuitive system, we won the Haptic Challenge after the VR and live trials at a firefighter training facility.

Florian Alber, Sean Hackett, Yang Cai
eTher – An Assistive Virtual Agent for Acrophobia Therapy in Virtual Reality

This paper presents the design, pilot implementation, and validation of eTher, an assistive virtual agent for acrophobia therapy in a Virtual Reality environment that depicts a mountain landscape and contains a cable car ride. eTher acts as a virtual therapist, offering support and encouragement to the patient. It directly interacts with the user and changes its voice parameters (pitch, tempo, and volume) according to the patient's emotional state. eTher identifies levels of relaxation/anxiety relative to a baseline resting recording and provides three relaxation modalities: prompting the user to look at a favorite picture, listen to an enjoyable song, or read an inspirational quote. If the relaxation modalities fail to be effective, the virtual agent automatically lowers the level of exposure. We validated our approach with 10 users who played the game once without eTher's intervention and three times with assistance from eTher. The results showed that participants finished the game more quickly in the last gameplay session, where the virtual agent intervened. Moreover, their biophysical data showed significant improvements in relaxation state.

Oana Bălan, Ștefania Cristea, Gabriela Moise, Livia Petrescu, Silviu Ivașcu, Alin Moldoveanu, Florica Moldoveanu, Marius Leordeanu
A Color Design System in AR Guide Assembly

With the rapid development of human-computer interaction, computer graphics, and related technologies, Augmented Reality is widely used in the assembly of large equipment, but practical applications suffer from problems such as unreasonable color design. To address this, we propose a projection display color design system based on color theory and image processing technology. The system provides designers and users with a way to resolve color contrast problems and to distinguish the projection interface from the projection plane. First, a camera captures an image of the projection plane, and image processing determines the plane's color. Then, based on a color model and the theory of color contrast, an appropriate color for the projection interface is chosen to ensure sharp contrast between the projection plane and the projection interface. Finally, the effectiveness of the proposed system is verified by experiment.
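
As a rough illustration of the pipeline described above (a sketch under our own assumptions, not the authors' implementation; the function name and the complementary-hue rule are hypothetical), the plane's dominant color can be estimated from the camera image and a contrasting interface color derived from it:

```python
import cv2
import numpy as np

def pick_interface_color(plane_image_bgr):
    """Hypothetical sketch: derive a high-contrast interface color
    from a captured image of the projection plane."""
    mean_bgr = cv2.mean(plane_image_bgr)[:3]  # dominant plane color (BGR)
    hsv = cv2.cvtColor(np.uint8([[mean_bgr]]), cv2.COLOR_BGR2HSV)[0, 0]
    h, v = int(hsv[0]), int(hsv[2])
    # Rotate the hue by 180 degrees (OpenCV hue spans 0-179) and flip the
    # brightness so the projected interface stands out from the plane.
    contrast = np.uint8([[[(h + 90) % 180, 255, 64 if v > 127 else 255]]])
    return cv2.cvtColor(contrast, cv2.COLOR_HSV2BGR)[0, 0]
```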

Xupeng Cai, Shuxia Wang, Guangyao Xu, Weiping He
An Augmented Reality Command and Control Sand Table Visualization of the User Interface Prototyping Toolkit (UIPT)

The User Interface Prototyping Toolkit (UIPT) is a software application for the design and development of futuristic, stylized user interfaces (UIs) for collaborative exploration and validation, with regard to decision making and situational awareness, by end users. The UIPT effort targets Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) and Cyber missions for the U.S. Navy. UIPT enables the Navy to significantly reduce the user-interface design risks associated with pursuing revolutionary technology such as autonomous vehicles, artificial intelligence, and distributed sensing. With this in mind, augmented reality is examined to determine whether it can support the warfighter in the Navy's future vision. This work examines the development and evaluation of an augmented reality virtual sand table overlaid on the same set of information presented by UIPT on a large touch table. Presenting the same information in this new way allows a unique collaborative setting to be evaluated as a potential future user interface for the Navy, applied in a context where a group of warfighters gathers in a collaborative virtual space for decision making and situational awareness. The development of and experimentation with an augmented reality interface provides a means to validate a futuristic Navy vision of user interfaces that support the warfighter in a Command and Control environment.

Bryan Croft, Jeffrey D. Clarkson, Eric Voncolln, Alex Campos, Scott Patten, Richard Roots
Exploring Augmented Reality as Craft Material

Craft making is associated with tradition, cultural preservation, and skilled hand-making techniques. While there are examples of digital craft making analyses in the literature, Augmented Reality (AR) applied to craft making practice has not been explored, yet applying AR to craft making practices could bring insight into methods of combining virtual and physical materials. This paper investigates how AR is regarded by craft makers. We find that narrative is essentially physically located in craft objects: while virtual elements may describe and annotate an artefact, they are not considered part of the craft artefact's narrative.

Lauren Edlin, Yuanyuan Liu, Nick Bryan-Kinns, Joshua Reiss
The Application of Urban AR Technology in Cultural Communication and Innovation

With the development of science and technology, today's society has gradually entered an Internet-led era, and, driven by the demand for more complete intelligent experiences for users, augmented reality technology has become a hot spot in the development of the modern intelligent industry. AR applications such as AR games, AR smart furniture, and AR navigation systems are constantly emerging. As the technology evolves and matures, AR applications are beginning to shift from 2D data information (text- or image-based descriptions) to integrated 3D objects. This development provides more ways and possibilities for cultural communication and innovation. Cities around the globe are ready for the next step in their evolution toward augmented cities. This paper aims to design a location-based AR application for exploring cities and discusses its role in cultural communication and innovation.

Yueyun Fan, Yaqi Zheng
Reporting Strategy for VR Design Reviews

Design reviews are an established component of the product development process. In particular, virtual reality design reviews (VRDRs) can generate valuable feedback on the user's perception of a virtual product. However, the user's perception is subjective and contextual, and since there is a lack of recording strategies, reproducing VRDRs is an intricate task. User feedback is therefore often fleeting, which makes deriving meaningful results from VRDRs diffuse. In this paper, we present a strategy for recording VRDRs in structured and coherent reports. We suggest dividing all involved information into structural and process information. While the former describes the content and functionality of the VRDR, the latter represents occurring events. The strategy provides means to store collections of events in the context they occurred in. Additional properties such as timestamps, involved users, and tags are provided to create comprehensive VRDR reports. By storing the report in a database, sorting and filtering algorithms can be applied to support efficient data evaluation subsequent to the VRDR. Thus, such reports can be used as a basis for reproducible VRDRs.
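
To make the report structure concrete, here is a minimal sketch (our illustration with hypothetical field names, not the authors' schema) of a process-information record that stores an event together with its context, timestamp, involved users, and tags:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class VRDREvent:
    """One process-information record in a VRDR report."""
    event_type: str                  # e.g. "annotation" or "viewpoint_change"
    context: dict                    # structural state the event occurred in
    timestamp: datetime = field(default_factory=datetime.utcnow)
    users: list[str] = field(default_factory=list)
    tags: list[str] = field(default_factory=list)
```

Stored in a database, records like this can be sorted and filtered by any of these properties for evaluation after the review.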

Martin Gebert, Maximilian-Peter Dammann, Bernhard Saske, Wolfgang Steger, Ralph Stelzer
Video Player Architecture for Virtual Reality on Mobile Devices

A virtual reality video player creates a different way to play videos: the user is surrounded by the virtual environment, and aspects such as visualization, audio, and 3D become more relevant. This paper proposes a video player architecture for virtual reality environments. To assess this architecture, tests compared an SXR video player application, a fully featured 3D application, and a video player implemented in Unity. The tests generated performance reports that measured each scenario in frames per second.

Adriano M. Gil, Afonso R. Costa Jr., Atacilio C. Cunha, Thiago S. Figueira, Antonio A. Silva
A Shader-Based Architecture for Virtual Reality Applications on Mobile Devices

As new technologies for CPUs and GPUs are released, games showcase improved graphics, physics simulations, and responsiveness. For limited form factors such as virtual reality head-mounted displays, however, it is possible to explore alternative components, such as the GPU, to harness additional performance. This paper introduces a shader-based architecture for developing games using shared resources between the CPU and the GPU.

Adriano M. Gil, Thiago S. Figueira
Emotions Synthesis Using Spatio-Temporal Geometric Mesh

Emotions can be synthesized in virtual environments through spatial calculations that define regions and intensities of displacement, using landmark controllers that consider spatio-temporal variations. This work presents a proposal for calculating spatio-temporal meshes for 3D objects that can be used in the realistic synthesis of emotions in virtual environments. This technique is based on calculating centroids by facial region and uses machine learning classification to define the positions of geometric controllers, making the animations realistic.
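
A minimal sketch of the per-region centroid computation (assuming, for illustration only, the common 68-point facial landmark convention; the region indices are hypothetical):

```python
import numpy as np

# Hypothetical region indices under the 68-point landmark layout.
REGIONS = {"brows": list(range(17, 27)),
           "eyes": list(range(36, 48)),
           "mouth": list(range(48, 68))}

def region_centroids(landmarks: np.ndarray) -> dict:
    """landmarks: (68, 3) array of 3D facial landmark positions."""
    return {name: landmarks[idx].mean(axis=0) for name, idx in REGIONS.items()}
```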

Diego Addan Gonçalves, Eduardo Todt
An Augmented Reality Approach to 3D Solid Modeling and Demonstration

This paper presents an intuitive and natural gesture-based methodology for solid modelling in the Augmented Reality (AR) environment. A Client/Server (C/S) framework is adopted to design the AR-based computer-aided design (CAD) system. A method of creating random or constraint-based points using gesture recognition is developed to support modelling. In addition, a prototype system for 3D solid modelling of products has been successfully developed, and we compared it with traditional CAD systems through several basic design modeling tasks. Finally, analysis of questionnaire feedback shows the intuitiveness and effectiveness of the system, and user studies demonstrate its advantage in helping accomplish early product design and in creating and manipulating 3D models in the AR environment.

Shu Han, Shuxia Wang, Peng Wang
Quick Projection Mapping on Moving Object in the Manual Assembly Guidance

In modern assembly, manual assembly remains one of the essential tasks in the factory. Even with the help of robots and other automatic equipment, improving the efficiency of the necessary manual work is important. Projection is one of the most significant assistive methods for guiding assembly. This paper presents a quick projection method for slightly moving objects in manual assembly guidance. Using a closed-loop alignment approach, the proposed method achieves low latency and automatic tracking of the target object. We designed a tracking and matching system to realize projection onto the target object. The results showed that the precision of the projected information satisfies the needs of workers who follow the projected guidance during assembly. The system also relieves workers of the pressure caused by incorrect operation and makes their factory work more comfortable.

Weiping He, Bokai Zheng, Shuxia Wang, Shouxia Wang
Design and Implementation of a Virtual Workstation for a Remote AFISO

On the basis of given use cases for the professional groups of flight controllers (AFISOs) and air traffic controllers (ATCOs), a human-machine interface with two different interaction concepts for virtual reality was developed. The aim was to facilitate cooperation between air traffic controllers and air traffic and to enable remote monitoring of AFIS airfields with the help of a VR headset. ATCOs have more tasks and a higher degree of authorization, but also perform the same activities as AFISOs and can use the virtual workstation in the same way. Software and hardware solutions were identified, the usage context of the AFISO was recorded, and usage and system requirements for the user interface were formulated. Based on this, an overall concept was developed that includes the virtual work environment, the head-up display, and interactive objects. A user interface and two different VR prototypes for interaction were developed. The focus lay on the implementation of the AFISO's tasks and the design of the AFISO's virtual workplace. Ergonomic and usability-relevant aspects as well as the physical environment of the user were considered. In a series of tests with seven ATCOs and two AFISOs, the VR prototypes were tested at the DLR research center in Braunschweig and evaluated through user interviews and questionnaires.

Thomas Hofmann, Jörn Jakobi, Marcus Biella, Christian Blessmann, Fabian Reuschling, Tom Kamender
A Scene Classification Approach for Augmented Reality Devices

Augmented Reality (AR) technology can overlay digital content on the physical world to enhance the user's interaction with it. The increasing number of devices for this purpose, such as Microsoft HoloLens, Magic Leap, and Google Glass, opens AR to an immense range of applications. A critical task in making AR devices more useful is scene/environment understanding, because it can spare the device from re-mapping elements that were previously mapped and customized by the user. In this direction, we propose a scene classification approach for AR devices with two components: i) an AR device that captures images, and ii) a remote server that performs scene classification. Four scene classification methods, which utilize convolutional neural networks, support vector machines, and transfer learning, are proposed and evaluated. Experiments conducted using real data from an indoor office environment and a Microsoft HoloLens AR device show that the proposed approach can reach up to 99% accuracy, even with similar texture information across scenes.
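
As an illustration of the transfer-learning variant (a sketch under assumed details; the paper does not specify the backbone or the number of classes), a pretrained CNN can be refit with a new head for the indoor scene classes:

```python
import torch
import torchvision

NUM_SCENES = 10  # assumed number of indoor scene classes

# Load an ImageNet-pretrained backbone and freeze its feature extractor.
model = torchvision.models.resnet18(weights="DEFAULT")
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a trainable scene-classification head.
model.fc = torch.nn.Linear(model.fc.in_features, NUM_SCENES)
```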

Aasim Khurshid, Sergio Cleger, Ricardo Grunitzki
Underwater Search and Discovery: From Serious Games to Virtual Reality

There are different ways of discovering underwater archaeological sites. This paper presents search techniques for discovering artefacts in the form of two different educational games. The first is a classical serious game that assesses two maritime archaeological methods for searching for and discovering artefacts: circular search and compass search. Evaluation results with 30 participants indicated that the circular search method is the most appropriate. Based on these results, an immersive virtual reality search-and-discovery simulation was implemented. To educate users about the underwater site formation process, digital storytelling videos are shown when an artefact is discovered.

Fotis Liarokapis, Iveta Vidová, Selma Rizvić, Stella Demesticha, Dimitrios Skarlatos
Emergent Behaviour of Therapists in Virtual Reality Rehabilitation of Acquired Brain Injury

This study investigates how therapists are able to adopt a virtual reality toolset for rehabilitation of patients with acquired brain injury. This was investigated by conducting a case study focusing on the therapists and their interactions with the system as well as with the patients. A tracked tablet gives the therapist a virtual camera and control over the virtual environment. Video recordings, participant observers, and field notes were the main sources of data used in an interaction analysis. Results reveal emergent behaviour and resourcefulness by the therapists in utilizing the virtual tools in combination with their conventional approaches to rehabilitation.

Henrik Sæderup, Flaviu Vreme, Hans Pauli Arnoldson, Alexandru Diaconu, Michael Boelstoft Holte
Improving Emergency Response Training and Decision Making Using a Collaborative Virtual Reality Environment for Building Evacuation

Emergency response training is needed so that personnel remember and can implement emergency operation plans (EOPs) and procedures over the long periods until an emergency occurs. There is also a need to develop an effective mechanism for teamwork under emergency conditions such as bomb blasts and active-shooter events inside a building. One way to address these needs is to create a collaborative training module to study these emergencies and perform virtual evacuation drills. This paper presents a collaborative virtual reality (VR) environment for emergency response training for fire and smoke as well as active-shooter scenarios. The collaborative environment is implemented in Unity 3D and is based on the run, hide, and fight model of emergency response. Our proposed collaborative virtual environment (CVE) is set up on the cloud, and participants can enter the VR environment as a police officer or as a civilian. We have used game creation as a metaphor for developing a CVE platform for conducting training exercises for different what-if scenarios in a safe and cost-effective manner. The novelty of our work lies in modeling the behaviors of two kinds of agents in the environment: user-controlled agents and computer-controlled agents. The computer-controlled agents follow preexisting behavior rules, whereas the user-controlled agents are autonomous agents that allow users to navigate the CVE at their own pace. Our contribution lies in combining these two behavior approaches to perform emergency response training for building evacuation.

Sharad Sharma
Text Entry in Virtual Reality: Implementation of FLIK Method and Text Entry Testbed

We present a testbed for evaluating text entry techniques in virtual reality, and two experiments employing it. The purpose of the testbed is to provide a flexible and reusable experiment tool for text entry studies, designed to accommodate studies from a variety of sources and, specifically for this work, virtual reality text entry experiments. Our experiments evaluate common text entry techniques and a novel one we have dubbed the Fluid Interaction Keyboard (FLIK). These experiments not only serve to validate the testbed, but also contribute their results to the pool of research on text entry in virtual reality.

Eduardo Soto, Robert J. Teather
Appropriately Representing Military Tasks for Human-Machine Teaming Research

The use of simulation has become a popular way to develop knowledge and skills in aviation, medicine, and several other domains. Given the promise of human-robot teaming in many of these same contexts, the amount of research in human-autonomy teaming has increased over the last decade. The United States Air Force Academy (USAFA), for example, has developed several testbeds to explore human-autonomy teaming in and out of the laboratory. Fidelity requirements have been carefully established in order to assess important factors in line with the goals of the research. This paper describes how appropriate fidelity is established across a range of human-autonomy research objectives. We provide descriptions of testbeds ranging from robots in the laboratory to higher-fidelity flight simulations and real-world driving. We conclude with a description and guideline for selecting appropriate levels of fidelity given a research objective in human-machine teaming research.

Chad C. Tossell, Boyoung Kim, Bianca Donadio, Ewart J. de Visser, Ryan Holec, Elizabeth Phillips
A Portable Measurement System for Spatially-Varying Reflectance Using Two Handheld Cameras

In this paper, we propose a system that can measure the spatially-varying reflectance of real materials. Our system uses two handheld cameras, a small LED light, a turntable, and a chessboard with markers. The two cameras are used as view and light cameras, respectively, to simultaneously acquire the incoming and outgoing light directions and the brightness at each position on the target material. The reflectance is approximated using the Ward BRDF (Bidirectional Reflectance Distribution Function) model. The normal directions and all model parameters at each position on the material are estimated by non-linear optimization. In our experiments, the normal directions for all spatial points were properly estimated, and the correct colors of rendered materials were reproduced. Highlight changes on the surfaces were also observed when we moved the light source or the rendered materials. This confirmed that our system is easy to use and can measure the spatially-varying reflectance of real materials.
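
For reference, the isotropic form of the Ward model expresses the BRDF as (the paper may fit the anisotropic variant):

```latex
f_r(\theta_i, \theta_o) = \frac{\rho_d}{\pi}
  + \frac{\rho_s}{4\pi\alpha^{2}\sqrt{\cos\theta_i \cos\theta_o}}
    \exp\!\left(-\frac{\tan^{2}\delta}{\alpha^{2}}\right)
```

where ρ_d and ρ_s are the diffuse and specular reflectance, α is the surface roughness, θ_i and θ_o are the incoming and outgoing angles, and δ is the angle between the surface normal and the half vector; these are the per-position parameters recovered by the non-linear optimization.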

Zar Zar Tun, Seiji Tsunezaki, Takashi Komuro, Shoji Yamamoto, Norimichi Tsumura
Influence of Visual Gap of Avatar Joint Angle on Sense of Embodiment in VR Space Adjusted via C/D Ratio

The movement of an avatar in VR space is often handled in complete synchronization with the operator's body, both spatially and temporally. However, if the operator uses an interaction device with a narrow work area, the avatar's range of motion may be adjusted by tuning the control/display (C/D) ratio. Tuning the C/D ratio is a technique often used in mouse interaction; however, when the concept is applied to an avatar connected with the operator's body, the operator's sense of embodiment is affected. In this study, we investigate the subjective effects on the sense of embodiment, presence, and mental workload during a point-to-point reaching task, using avatar appearance and the C/D ratio as independent variables.
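
The adjustment itself is simple; a minimal sketch (the gain value and function name are our assumptions for illustration):

```python
import numpy as np

CD_RATIO = 1.5  # assumed gain: avatar displacement per unit operator displacement

def avatar_hand_position(origin, real_hand):
    """Scale the operator's hand offset from a calibration origin by the C/D ratio."""
    origin, real_hand = np.asarray(origin, float), np.asarray(real_hand, float)
    return origin + CD_RATIO * (real_hand - origin)
```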

Takehiko Yamaguchi, Hiroaki Tama, Yuya Ota, Yukiko Watabe, Sakae Yamamoto, Tetsuya Harada

User Experience in Virtual, Augmented and Mixed Reality

Frontmatter
Analysis of Differences in the Manner to Move Object in Real Space and Virtual Space Using Haptic Device for Two Fingers and HMD

One of the elements that make up VR is “self-projection”: creating a state in which a person can enter the virtual space and experience it in the first person. If, in addition to an HMD, haptic devices can be used to touch and move objects in the virtual space, self-projection is expected to improve. Therefore, we developed a haptic device with two rings (two-point control type SPIDAR-GCC) so that the user can perform pinching operations in the same way as in reality. We then constructed workspaces in virtual and real space with high similarity to investigate the difference between work performed in virtual space and in real space. Using this device and an HMD, the experimental participants were asked to accomplish a pick-and-place task: pinching a peg with two fingers and inserting it into a hole in a pegboard. In this experiment, a difference in the manner of moving objects between real space and virtual space was observed, mainly due to errors in measuring finger displacements in the haptic device.

Yuki Aoki, Yuki Tasaka, Junji Odaka, Sakae Yamamoto, Makoto Sato, Takehiko Yamaguchi, Tetsuya Harada
A Study of Size Effects of Overview Interfaces on User Performance in Virtual Environments

Many virtual environment applications use an overview interface showing a survey of the entire space, but little research has been conducted on how the size of the overview interface affects users' performance and experiences in virtual environments. The experiment used a two (overview interface size) × two (familiarity with mobile devices) between-subjects design. Participants completed three tasks on a mobile device and filled out the NASA Task Load Index (TLX) questionnaire as a measure of mental workload. Thirty-two participants were recruited through convenience sampling. The results are as follows: (1) Participants using the smaller overview interface performed significantly better than those using the larger overview interface in the most difficult task. (2) Participants who were more familiar with mobile devices performed significantly better than those who were less familiar in their first visit to the unfamiliar virtual environment. (3) The larger overview interface required significantly more mental workload than the smaller one in terms of the NASA TLX sum score and performance score. (4) The impact of overview interface size on users' performance in the most difficult task, and on the effort they reported, appears to differ across levels of familiarity with mobile devices. The design of overview interfaces should therefore consider users' familiarity with mobile devices.

Meng-Xi Chen, Chien-Hsiung Chen
Text Input in Virtual Reality Using a Tracked Drawing Tablet

We present an experiment evaluating the effectiveness of a tracked drawing tablet for text input in virtual reality (VR). Participants first completed a text input pre-test, entering several phrases using a physical keyboard. Participants then entered text in VR using an HTC Vive, with a tracker mounted on a drawing tablet and a QWERTY soft keyboard overlaid on the virtual tablet, similar to text input on stylus-supported mobile devices. Our results indicate that participants not only preferred the Vive controller, it also offered superior entry speed (16.31 wpm vs. 12.79 wpm with the tablet and stylus) and error rates (4.1% vs. 6.4%). Pre-test scores were also correlated with measured entry speeds, revealing that typing speed on a physical keyboard is a modest predictor of VR text input speed (R² of 0.6 for the Vive controller, 0.45 for the tablet).
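
For context, the figures above follow the standard text-entry metrics: words per minute with the usual five-characters-per-word convention, and character error rate via minimum string (Levenshtein) distance. A sketch of both:

```python
def wpm(transcribed: str, seconds: float) -> float:
    """Entry speed, counting one word per five characters."""
    return (len(transcribed) / 5) / (seconds / 60)

def error_rate(presented: str, transcribed: str) -> float:
    """Levenshtein distance over the length of the longer string."""
    m, n = len(presented), len(transcribed)
    if max(m, n) == 0:
        return 0.0
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = presented[i - 1] != transcribed[j - 1]
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n] / max(m, n)
```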

Seyed Amir Ahmad Didehkhorshid, Siju Philip, Elaheh Samimi, Robert J. Teather
Behavioral Indicators of Interactions Between Humans, Virtual Agent Characters and Virtual Avatars

Simulations and games allow us to experience events as if they were really happening, in a way that is safer and less expensive. Despite improvements in realism in these types of environments, one area that still presents a challenge is interpersonal interaction. The subtleties of what makes an interaction rich are difficult to define. As such, there is value in building on existing research into how individuals react to virtual characters to inform future investments. Ultimately, the goal is to understand what might cause people to engage or disengage with virtual characters. To answer that question, it is important to establish metrics that indicate when people believe their interaction partner is real or has agency. This paper describes the behavioral metrics explored as part of this research. The results provide valuable feedback on how users need to see and be seen by their interaction partner to ensure non-verbal cues provide context and additional meaning to the dialog. This study provides insight into areas of future research, offering a foundation of knowledge for further exploration and lessons learned. This was a field study incorporating a novel approach to a real-life experience: a dialog with another individual. Two metrics are explored in this paper, gestural data and open-ended questions, which together provided insight into the information humans rely on in these types of interactions to understand and be understood.

Tamara S. Griffith, Cali Fidopiastis, Patricia Bockelman-Morrow, Joan Johnston
Perceived Speed, Frustration and Enjoyment of Interactive and Passive Loading Scenarios in Virtual Reality

Long waits and disruptive loading breaks can evoke negative emotions like frustration. While there is extensive research on 2D loading scenarios, it is unclear how people react to loading screens in an immersive virtual reality (VR) environment. In this paper, we present a user study investigating the effects of interactive and passive loading screens on users' loading screen experience (LSE) in VR. We measured perceived speed, enjoyment, and frustration for long and short waiting times. Results show that interactive loading screens improved participants' LSE by increasing perceived speed and enjoyment and decreasing frustration while waiting, confirming previous findings from 2D-based research. Our work thus provides a first approach for further investigation of different loading screens in VR.

David Heidrich, Annika Wohlan, Meike Schaller
Augmented Riding: Multimodal Applications of AR, VR, and MR to Enhance Safety for Motorcyclists and Bicyclists

Operating two-wheeled vehicles in four-wheel-dominant environments presents unique challenges and hazards to riders, requiring additional rider attention and carrying increased inherent risk. Emerging display and simulation solutions offer unique abilities to help mitigate rider risk: augmented, mixed, and virtual reality (collectively, extended reality; XR) can be used to rapidly prototype and test concepts; immersive virtual and mixed reality environments can be used to test systems in otherwise hard-to-replicate settings; and augmented and mixed reality can fuse the real world with digital information overlays and depth-based sensing capabilities to enhance rider situational awareness. This paper discusses multimodal applications of XR, integrated with commercial off-the-shelf components, to create safe riding technology suites. Specifically, the paper describes informal and formal research on haptic, audio, and visual hazard alerting systems that support hands-on, heads-up, eyes-out motorcycle riding, as well as an immersive mixed reality connected bicycle simulator for rapidly and representatively evaluating rider safety-augmenting technologies in a risk-free environment.

Caroline Kingsley, Elizabeth Thiry, Adrian Flowers, Michael Jenkins
Virtual Environment Assessment for Tasks Based on Sense of Embodiment

This study assessed the quality of a virtual environment for a specified task based on the concept of sense of embodiment (SoE). The quality of virtual reality (VR) is usually evaluated based on the performance of the VR system or apparatus; we instead focused on VR users executing tasks in virtual environments and tried to assess the virtual environment for those tasks. We focused on the user's sense of agency (SoA) and sense of self-location (SoSL), considered components of the SoE. The SoA was measured using the surface electromyogram (EMG) of two body parts and our SoE questionnaire. We analysed the EMG waveforms using signal averaging and determined the observable latent time from the analysed waveforms to estimate the state of SoA. To assess different virtual environments, we built two virtual environments composed of different versions of SPIDAR-HS as a haptic interface and a common head-mounted display. The experiment was executed in the two virtual environments and in the real environment. In all three environments, the participants executed the rod tracking task (RTT) in a similar way, and their EMG and subjective data were measured during the RTT. From the results, we considered task performance based on the participants' SoA and SoSL and compared the quality of the two virtual environments. Furthermore, the relation between the quality of the virtual environment and factors related to the characteristics of the haptic and visual interfaces was revealed to some extent.
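
As a sketch of the signal-averaging step (our illustration with assumed trigger alignment, not the authors' exact processing), fixed-length EMG epochs aligned to trial onsets are averaged to expose the latent response:

```python
import numpy as np

def signal_average(signal: np.ndarray, onsets, window: int) -> np.ndarray:
    """Average fixed-length epochs aligned to trial-onset sample indices."""
    epochs = [signal[t:t + window] for t in onsets if t + window <= len(signal)]
    return np.mean(epochs, axis=0)
```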

Daiji Kobayashi, Yoshiki Ito, Ryo Nikaido, Hiroya Suzuki, Tetsuya Harada
Camera-Based Selection with Cardboard Head-Mounted Displays

We present two experiments comparing selection techniques for low-cost mobile VR devices, such as Google Cardboard. Our objective was to assess the feasibility of computer vision tracking on mobile devices as an alternative to common head-ray selection methods. In the first experiment, we compared three selection techniques: air touch, head ray, and finger ray. Overall, hand-based selection (air touch) performed much worse than ray-based selection. In the second experiment, we compared different combinations of selection techniques and selection indication methods. The built-in Cardboard button worked well with the head ray technique. Using a hand gesture (air tap) with ray-based techniques resulted in slower selection times, but comparable accuracy. Our results suggest that camera-based mobile tracking is best used with ray-based techniques, but selection indication mechanisms remain problematic.

Siqi Luo, Robert J. Teather, Victoria McArthur
Improving the Visual Perception and Spatial Awareness of Downhill Winter Athletes with Augmented Reality

This research study addresses the design and development of an augmented reality headset display for downhill winter athletes, which may improve visual perception and spatial awareness and reduce injury. We used a variety of methods to collect participant data, including surveys, experience simulation testing, user response analysis, and statistical analysis. The results revealed that downhill winter athletes of various levels may benefit differently from access to athletic data during physical activity, and indicated that some expert-level athletes can train to strengthen their spatial-awareness abilities. The results also generated visual design recommendations, including icon colours, locations within the field of view, and alert methods, which could be utilized to optimize the usability of a headset display.

Darren O’Neill, Mahmut Erdemli, Ali Arya, Stephen Field
Desktop and Virtual-Reality Training Under Varying Degrees of Task Difficulty in a Complex Search-and-Shoot Scenario

Two-dimensional (2D) desktop and three-dimensional (3D) Virtual Reality (VR) environments play a significant role in providing military personnel with training environments to hone their decision-making skills. The nature of the environment (2D versus 3D) and the order of task difficulty (novice to expert or expert to novice) may influence human performance in these environments, but an empirical evaluation of these environments and their interaction with the order of task difficulty has been less explored. The primary objective of this research was to address this gap and explore the influence of environment and order of task difficulty on human performance. In a lab-based experiment, 60 healthy subjects executed scenarios at novice and expert difficulty levels in either a 2D desktop environment (N = 30) or a 3D VR environment (N = 30). Within each environment, 15 participants executed the novice scenario first and the expert scenario second, and 15 executed the expert scenario first and the novice scenario second. Results revealed that participants performed better in the 3D VR environment than in the 2D desktop environment. Participants benefited both from expert training (performance on the novice scenario was better when it followed the expert scenario) and from novice training (performance on the expert scenario was better when it followed the novice scenario). The combination of a 3D VR environment with expert training first and novice training second maximized performance. We expect these conclusions to inform the creation of effective training environments using VR technology.

Akash K. Rao, Sushil Chandra, Varun Dutt
Computer-Based PTSD Assessment in VR Exposure Therapy

Post-traumatic stress disorder (PTSD) is a mental health condition affecting people who have experienced a traumatic event. In addition to the clinical diagnostic criteria for PTSD, behavioral changes in voice, language, facial expression, and head movement may occur. In this paper, we demonstrate how a machine learning model trained on a general population with self-reported PTSD scores can provide behavioral metrics that could enhance the accuracy of clinical diagnosis. Both datasets were collected from a clinical interview conducted by a virtual agent (SimSensei) [10]. The clinical data was recorded from PTSD patients, victims of sexual assault, undergoing VR exposure therapy. A recurrent neural network was trained on verbal, visual, and vocal features to recognize PTSD according to self-reported PCL-C scores [4]. We then performed decision fusion across the three modalities to recognize PTSD in patients with a clinical diagnosis, achieving an F1-score of 0.85. Our analysis demonstrates that machine-based PTSD assessment trained with self-reported PTSD scores can generalize across different groups and be deployed to assist the diagnosis of PTSD.
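
The paper does not detail the fusion rule, but a simple late-fusion baseline over per-modality scores might look like this (hypothetical function; the averaging rule is an assumption):

```python
import numpy as np

def fuse_ptsd_scores(p_verbal: float, p_visual: float, p_vocal: float,
                     threshold: float = 0.5) -> bool:
    """Average the three per-modality PTSD probabilities and threshold."""
    return float(np.mean([p_verbal, p_visual, p_vocal])) >= threshold
```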

Leili Tavabi, Anna Poon, Albert Skip Rizzo, Mohammad Soleymani
Text Entry in Virtual Reality: A Comparison of 2D and 3D Keyboard Layouts

Text entry is an important task in most interactive technologies in use today. Virtual Reality (VR) is becoming increasingly popular and is used in a variety of contexts, including tasks that involve text entry. It has therefore become increasingly important to determine the best keyboard layout for text entry tasks in VR environments. To address this need, the current study compared two keyboard layouts, 2D (flat UI) and 3D (curved UI), with respect to text entry performance in VR. Results indicated that, compared to the 3D keyboard layout, the 2D keyboard layout led to a greater number of words per minute, fewer corrections, and fewer redundant key presses while typing, indicating that the 2D layout was more efficient for VR text entry. Implications for the design and development of VR text entry tasks are discussed.

Caglar Yildirim, Ethan Osborne
Backmatter
Metadata
Title
HCI International 2020 – Late Breaking Papers: Virtual and Augmented Reality
Editors
Prof. Constantine Stephanidis
Jessie Y. C. Chen
Gino Fragomeni
Copyright Year
2020
Electronic ISBN
978-3-030-59990-4
Print ISBN
978-3-030-59989-8
DOI
https://doi.org/10.1007/978-3-030-59990-4