About this book

The LNCS journal Transactions on Computational Science reflects recent developments in the field of Computational Science, conceiving the field not as a mere ancillary science but rather as an innovative approach supporting many other scientific disciplines. The journal focuses on original high-quality research in the realm of computational science in parallel and distributed environments, encompassing the facilitating theoretical foundations and the applications of large-scale computation and massive data processing. It addresses researchers and practitioners in areas ranging from aerospace to biochemistry, from electronics to geosciences, from mathematics to software architecture, presenting verifiable computational methods, findings, and solutions, and enabling industrial users to apply leading-edge, large-scale, high-performance computational methods.

This, the 37th issue of the Transactions on Computational Science, is devoted to the area of Computer Graphics. The nine papers included in the volume are extended versions of selected papers presented at the 36th Computer Graphics International Conference, CGI 2019. Topics covered include virtual reality, augmented reality, image retrieval, animation of elastoplastic materials, and visualization of 360° HDR images.

Table of Contents

Frontmatter

Do Distant or Colocated Audiences Affect User Activity in VR?

Abstract
We explore the impact of distant or colocated real audiences on social inhibition through a user study in virtual reality (VR). The study investigates, in a VR application, the differences between two multi-user configurations (i.e., the local and distant conditions) and one control condition in which the user is alone (i.e., the alone condition). In the local condition, a single user and a real audience share the same real room. Conversely, in the distant condition, the user and the audience are separated into two different real rooms. The user performs a number-categorization task in VR, from which objective performance results (i.e., answer type and answering time) are extracted, along with subjective feelings and perceptions (i.e., perception of others, stress, cognitive workload, presence). The differences between the local and distant configurations are explored. Furthermore, we investigate possible gender biases in the objective and subjective results. In both the local and distant conditions, the presence of a real audience affects the user's performance through social inhibition. Users are even more influenced when the audience does not share the same room, despite the audience being less directly perceived in this condition.
Romain Terrier, Nicolas Martin, Jeremy Lacoche, Valerie Gouranton, Bruno Arnaldi

Physical Environment Reconstruction Beyond Light Polarization for Coherent Augmented Reality Scene on Mobile Devices

Abstract
Integrating virtual objects so that they appear to be part of the real world is the basis of photo-realistic augmented reality (AR) scene development. Physical illumination information, environment features, and the shading materials of virtual objects are combined to reach a perceptually coherent final scene. Previous research investigated the problem while assuming that scene geometry is available beforehand, that light locations are pre-computed, or that execution is offline. In this paper, we incorporate our previous work on direct light detection with real-scene understanding features to provide occlusion, plane detection, and scene reconstruction for improved photo-realism. The whole system tackles several problems at once: (1) physics-based light polarization, (2) detection of incident light locations, (3) simulation of reflected lights, (4) definition of shading materials, and (5) real-world geometric understanding. The system is validated by evaluating the geometric reconstruction accuracy, direct illumination pose, performance cost, and human perception.
A’aeshah Alhakamy, Mihran Tuceryan
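
The abstract does not detail the polarization step, but the standard way to recover per-pixel polarization information is to photograph the scene through a linear polarizer at several angles and compute Stokes parameters. The sketch below illustrates that classical computation in Python; the capture setup and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def polarization_from_captures(i0, i45, i90, i135):
    """Per-pixel linear Stokes parameters from four captures taken through
    a linear polarizer at 0, 45, 90, and 135 degrees (hypothetical setup).
    All inputs are float arrays of identical shape."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal components
    # Degree of linear polarization: 0 = unpolarized, 1 = fully polarized
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)
    # Angle of linear polarization, in radians
    aolp = 0.5 * np.arctan2(s2, s1)
    return s0, dolp, aolp
```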

Integrated Analysis and Hypothesis Testing for Complex Spatio-Temporal Data

Abstract
Analysis of unstructured, complex data is a challenging task that requires a combination of various data analysis techniques, including, among others, deep learning, statistical analysis, and interactive methods. A simple use of individual data analysis techniques addresses only a part of the overall data exploration and analysis challenge. The visual exploration process also requires the exploration of what-if scenarios, a continuous and iterative process of generating and testing hypotheses. We describe a comprehensive approach to the exploration of complex data that combines automatic and interactive data analysis and hypothesis testing techniques. The proposed approach is illustrated on a publicly available spatio-temporal data set, a collection of bird songs recorded over an extended period of time. A convolutional neural network is used to identify and classify bird species from the bird song data. In addition, two new interactive views, integrated within a coordinated multiple views setup, are introduced: the what-if view and the spectrogram view. The proposed approach is used to develop a unified tool for the exploration of bird song data, called Bird Song Explorer.
Krešimir Matković, Denis Gračanin, Michael Beham, Rainer Splechtna, Miriah Meyer, Elena Ginina
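
As a rough illustration of the classification component the abstract describes (spectrograms fed to a convolutional neural network for species identification), the following Python sketch shows one plausible shape of that pipeline. The spectrogram parameters and the network architecture are placeholders, since the abstract does not specify them.

```python
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn

def song_to_spectrogram(waveform, sample_rate=22050):
    """Convert a 1-D audio signal into a log-magnitude spectrogram image."""
    _, _, sxx = spectrogram(waveform, fs=sample_rate, nperseg=512, noverlap=256)
    return np.log1p(sxx).astype(np.float32)

class SpeciesCNN(nn.Module):
    """Tiny CNN over spectrogram 'images'; the real architecture is not
    given in the abstract, so this is only a stand-in."""
    def __init__(self, n_species):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classify = nn.Linear(32, n_species)

    def forward(self, x):  # x: (batch, 1, freq_bins, time_steps)
        return self.classify(self.features(x).flatten(1))
```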

Action Sequencing in VR, a No-Code Approach

Abstract
In many domains, it is common to have procedures with a given sequence of actions to follow. Virtual reality is a helpful tool for training such procedures, as it allows a user to be safely placed in a given situation as many times as needed, without risk. Indeed, learning in a real situation implies risks for both the studied object – or the patient – (e.g., a badly treated injury) and the trainee (e.g., a lack of danger awareness). To do this, the procedure must be integrated into the virtual environment in the form of a scenario. Creating such a scenario is a difficult task for a domain expert, as the required level of coding skill is too high. Often, a developer is needed to manage the creation of the virtual content, with the drawbacks this implies (e.g., lost time and misunderstandings).
We propose a complete workflow that lets domain experts create their own scenarized content for virtual reality without any need for coding. This workflow is divided into two steps: first, a new approach generates a scenario without any code, through the principle of creating by doing; then, efficient methods allow the scenario to be reused in an application in different ways, either for a human user guided by the scenario or for a virtual actor controlled by it.
Flavien Lécuyer, Valérie Gouranton, Adrien Reuzeau, Ronan Gaugne, Bruno Arnaldi
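
The abstract describes scenarios recorded "by doing" and later reused either to guide a human user or to drive a virtual actor. A minimal Python sketch of such a recorded action sequence, with purely illustrative names, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """One step of a procedure, e.g. Action('grab', 'scalpel')."""
    verb: str
    target: str

@dataclass
class Scenario:
    """A scenario created by doing: actions are appended as the expert
    performs them in VR, then replayed later. Names are illustrative."""
    actions: list[Action] = field(default_factory=list)
    cursor: int = 0

    def record(self, verb, target):
        self.actions.append(Action(verb, target))

    def expected(self):
        """Next expected action — shown as a hint when guiding a human,
        or executed directly when driving a virtual actor."""
        return self.actions[self.cursor] if self.cursor < len(self.actions) else None

    def advance(self, performed):
        """Validate a performed action against the scenario."""
        if performed == self.expected():
            self.cursor += 1
            return True
        return False
```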

Single Color Sketch-Based Image Retrieval in HSV Color Space

Abstract
Sketch-based image retrieval is a fundamental computer vision problem. Instead of using hand-designed features to represent sketches and images, recent studies apply deep learning approaches combined with fine-grained matching to retrieve images with fine-grained details. Although these studies allow users to retrieve similar objects with free-hand sketches, color matching is ignored, which lowers retrieval precision. To address this problem, we propose a single color sketch-based image retrieval (SCSBIR) approach that uses HSV color features and considers both shape matching and color matching. The SCSBIR problem is investigated using deep learning networks, in which deep features represent color sketches and images. A novel ranking method considering both shape matching and color matching is also proposed. In addition, we build an SCSBIR dataset with color sketches and images, and train and test our method on it. The test results show that our method achieves better retrieval performance. This research can not only promote applications in the commercial field but also serve as a reference for future research in this field.
Yu Xia, Shuangbu Wang, Yanran Li, Lihua You, Xiaosong Yang, Jian Jun Zhang
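
As a hedged illustration of the ranking idea — combining a deep-feature shape similarity with an HSV color similarity — the Python sketch below uses a hue-histogram intersection for the color term. The histogram choice and the weighting are assumptions, not the paper's method.

```python
import colorsys
import numpy as np

def hue_histogram(rgb_pixels, bins=16):
    """Hue histogram of an image's pixels (RGB values in [0, 1])."""
    hues = [colorsys.rgb_to_hsv(*p)[0] for p in rgb_pixels]
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def combined_score(shape_sim, sketch_hist, image_hist, alpha=0.5):
    """Weighted mix of shape and color similarity; shape_sim would come
    from deep-feature matching, and alpha is an illustrative weight."""
    color_sim = np.minimum(sketch_hist, image_hist).sum()  # histogram intersection
    return alpha * shape_sim + (1.0 - alpha) * color_sim
```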

Integral-Based Material Point Method and Peridynamics Model for Animating Elastoplastic Material

Abstract
This paper exploits the Material Point Method (MPM) for the graphical animation of elastoplastic materials and fracture. Previous partial-derivative-based MPM studies face underlying instability issues related to particle distribution and the complexity of modeling discontinuities. This paper incorporates the state-based peridynamics structure into the MPM to alleviate these problems, outperforming differential-based methods in both accuracy and stability. The deviatoric flow theory and a simple yield function are incorporated to animate plasticity. To model viscoelastic material, the constitutive model is developed with the linearized peridynamics theory, which regards the current configuration as equilibrated and influenced only by the current incremental deformation. The peridynamics theory does not involve the deformation gradient, so handling cracking is straightforward in our hybrid framework. To ease the implementation of fracture divergence under MPM, two time-integration methods are adopted to update the crack interface and the continuous parts separately. Our work can create a wide range of material phenomena, including elasticity, plasticity, viscoelasticity, and fracture, and our framework provides an attractive method for producing a variety of elastoplastic materials and fracture with visual realism and high stability.
Yao Lyu, Jinglu Zhang, Ari Sarafopoulos, Jian Chang, Shihui Guo, Jian Jun Zhang
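
The abstract mentions deviatoric flow with a simple yield function. In classical (non-peridynamic) terms, this typically amounts to a radial return of the deviatoric stress onto a von-Mises-type yield surface; the sketch below shows that textbook analogue only, not the paper's peridynamic formulation.

```python
import numpy as np

def radial_return(stress, yield_stress):
    """Project a 3x3 Cauchy stress back onto a von-Mises-type yield surface
    (deviatoric flow with a simple yield function; classical analogue only)."""
    pressure = np.trace(stress) / 3.0
    dev = stress - pressure * np.eye(3)   # deviatoric part of the stress
    j2 = 0.5 * np.tensordot(dev, dev)     # second deviatoric invariant
    vm = np.sqrt(3.0 * j2)                # von Mises equivalent stress
    if vm <= yield_stress:                # inside the surface: purely elastic
        return stress
    dev *= yield_stress / vm              # radial return to the yield surface
    return pressure * np.eye(3) + dev
```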

A Perceptually Coherent TMO for Visualization of 360° HDR Images on HMD

Abstract
We propose a new Tone Mapping Operator dedicated to the visualization of 360° High Dynamic Range images on Head-Mounted Displays. Previous work on this topic has shown that existing Tone Mapping Operators for classic 2D images are not adapted to 360° High Dynamic Range images. Consequently, several dedicated operators have been proposed. Instead of operating on the entire 360° image, they only consider the part of the image currently viewed by the user. Tone mapping only a part of the 360° image is less challenging, but it does not globally preserve the dynamic range of the scene's luminance. To cope with this problem, we propose a novel Tone Mapping Operator that takes advantage of (1) a view-dependent tone mapping that enhances contrast, and (2) a Tone Mapping Operator applied to the entire 360° image that preserves global coherency. Furthermore, the proposed Tone Mapping Operator is adapted to the human eye's perception of luminance on Head-Mounted Displays. We present two subjective studies to model lightness perception on such Head-Mounted Displays.
Ific Goudé, Rémi Cozot, Olivier Le Meur
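
To make the two-pass idea concrete, here is a much-simplified Python sketch: a Reinhard-style global operator over the full 360° luminance map, blended inside the viewport with a contrast-enhancing view-dependent pass. The specific operators, gamma, and blend weight are stand-ins, not the proposed TMO.

```python
import numpy as np

def global_tmo(lum):
    """Reinhard-style global operator on the full 360-degree luminance map
    (stand-in for the paper's global, coherency-preserving pass)."""
    key = np.exp(np.mean(np.log(lum + 1e-6)))  # log-average luminance
    scaled = 0.18 * lum / key
    return scaled / (1.0 + scaled)

def view_dependent_tmo(lum_view, gamma=0.8):
    """Contrast-enhancing pass on the currently viewed region only;
    gamma is an illustrative parameter, not the paper's."""
    return (lum_view / (lum_view.max() + 1e-6)) ** gamma

def blended_tmo(lum_full, viewport_mask, w=0.5):
    """Blend the two passes inside the viewport so local contrast is
    enhanced without losing global coherency (weight w is an assumption)."""
    out = global_tmo(lum_full)
    view = view_dependent_tmo(lum_full[viewport_mask])
    out[viewport_mask] = w * view + (1.0 - w) * out[viewport_mask]
    return out
```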

Simulating Crowds and Autonomous Vehicles

Abstract
Understanding how people view and interact with autonomous vehicles is important for guiding future directions of research. One way of aiding this understanding is through simulations of virtual environments involving both people and autonomous vehicles. We present a simulation model that incorporates people and autonomous vehicles in a shared urban space. The model is able to simulate many thousands of people and vehicles in real time. This is achieved through the use of GPU hardware and a novel linear program solver optimized for large numbers of problems on the GPU. The model is up to 30 times faster than the equivalent multi-core CPU model.
John Charlton, Luis Rene Montana Gonzalez, Steve Maddock, Paul Richmond
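
The per-agent work here reduces to many small, independent linear programs. The paper solves them in batch on the GPU; the Python sketch below conveys only that problem structure, solving each agent's LP sequentially on the CPU with SciPy purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def solve_agent_lps(costs, constraints, bounds):
    """Solve one small LP per agent: minimize c @ v subject to A_ub @ v <= b_ub.
    'costs' is a list of objective vectors, 'constraints' a list of
    (A_ub, b_ub) pairs — one per agent, all names illustrative."""
    solutions = []
    for c, (A_ub, b_ub) in zip(costs, constraints):
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        # Fall back to a zero vector if an agent's LP is infeasible
        solutions.append(res.x if res.success else np.zeros_like(c, dtype=float))
    return np.array(solutions)
```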

MagiPlay: An Augmented Reality Serious Game Allowing Children to Program Intelligent Environments

Abstract
A basic understanding of problem-solving and computational thinking is undoubtedly a benefit for all ages. At the same time, the proliferation of Intelligent Environments has raised the need to configure their behaviors to address their users' needs. This configuration can take the form of programming and, coupled with advances in Augmented Reality and Conversational Agents, can enable users to take control of their intelligent surroundings in an efficient and natural manner. Focusing on children, who can greatly benefit from being immersed in programming from an early age, this paper presents an authoring framework in the form of an Augmented Reality serious game, named MagiPlay, that allows children to manipulate and program their Intelligent Environment. This is achieved through a handheld device, which children use to capture smart objects via its camera and subsequently create rules dictating their behavior. An intuitive user interface permits players to combine LEGO-like 3D bricks as part of the rule-creation process, aiming to make the experience more natural. Additionally, children can communicate with the system via natural language through a Conversational Agent, configuring the rules by talking with a human-like agent; the agent also serves as a guide/helper for the player, providing context-sensitive tips for every part of the rule-creation process. Finally, MagiPlay enables networked collaboration, to allow parental and teacher guidance and support. The main objective of this research work is to provide young learners with a fun and engaging way to program their intelligent surroundings. This paper describes the game logic of MagiPlay and its implementation details, and discusses the results of an evaluation, with statistically significant findings, conducted with end-users, i.e., a group of children aged seven to twelve.
Evropi Stefanidi, Dimitrios Arampatzis, Asterios Leonidis, Maria Korozi, Margherita Antona, George Papagiannakis
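
The rule creation the abstract describes follows the familiar trigger-condition-action pattern. A minimal Python sketch of such a rule engine is given below; all names are illustrative assumptions and none of this reflects MagiPlay's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A rule a child might snap together from LEGO-like bricks, e.g.
    'WHEN the lamp turns on IF it is night THEN close the blinds'."""
    trigger: str                    # event name, e.g. "lamp.on"
    condition: Callable[[], bool]   # guard evaluated when the event fires
    action: Callable[[], None]      # effect on the smart environment

class RuleEngine:
    """Dispatches smart-object events to matching rules."""
    def __init__(self):
        self.rules: list[Rule] = []

    def add(self, rule: Rule):
        self.rules.append(rule)

    def on_event(self, event: str):
        for rule in self.rules:
            if rule.trigger == event and rule.condition():
                rule.action()

# Hypothetical usage:
# engine.add(Rule("lamp.on", lambda: clock.is_night(), lambda: blinds.close()))
```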

Backmatter
