
2019 | Book

Advances in Computer Graphics

36th Computer Graphics International Conference, CGI 2019, Calgary, AB, Canada, June 17–20, 2019, Proceedings

Editors: Prof. Marina Gavrilova, Jian Chang, Prof. Nadia Magnenat Thalmann, Prof. Eckhard Hitzer, Prof. Hiroshi Ishikawa

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 36th Computer Graphics International Conference, CGI 2019, held in Calgary, AB, Canada, in June 2019.
The 30 revised full papers presented together with 28 short papers were carefully reviewed and selected from 231 submissions. The papers address topics such as: 3D reconstruction and rendering, virtual reality and augmented reality, computer animation, geometric modelling, geometric computing, shape and surface modelling, visual analytics, image processing, pattern recognition, motion planning, gait and activity biometric recognition, machine learning for graphics and applications in security, smart electronics, autonomous navigation systems, robotics, geographical information systems, and medicine and art.

Table of Contents

Frontmatter

CGI’19 Full Papers

Frontmatter
Polarization-Based Illumination Detection for Coherent Augmented Reality Scene Rendering in Dynamic Environments

Integrating a virtual object into the real world in a perceptually coherent manner, using the physical illumination information of the current environment, is still an open problem. Several researchers have investigated the problem and produced high-quality results; however, their systems relied on the essential assumption of pre-computation and the offline availability of resources. In this paper, we propose a novel and robust approach that identifies the incident light in the scene using the polarization properties of the light wave, and uses this information to produce a visually coherent augmented reality within a dynamic environment. This approach is part of a complete system with three simultaneous components that run in real time: (i) the detection of the incident light angle, (ii) the estimation of the reflected light, and (iii) the creation of the shading properties required to provide any virtual object with the detected lighting, reflected shadows, and adequate materials. Finally, the system performance is analyzed, showing that our approach reduces the overall computational cost.

A’aeshah Alhakamy, Mihran Tuceryan
BioClouds: A Multi-level Model to Simulate and Visualize Large Crowds

This paper presents a multi-level approach to simulate large crowds [18] called BioClouds. The goal of this work is to model larger groups of agents by simulating aggregations of agents as singular units. This approach combines microscopic and macroscopic simulation strategies, where each group of agents (called a cloud) keeps the global characteristics of the crowd without simulating individuals. In addition to the macroscopic strategy, BioClouds allows switching from global to local behavior (individuals), providing more accurate simulation in terms of agent velocities and densities. We also propose a new visualization model focused on larger simulated crowds while keeping the possibility of "zooming in" on individuals to see their behaviors. Results indicate that BioClouds presents coherent behaviors when compared to what is expected at the global and individual levels. In addition, BioClouds provides an important speed-up in processing time when compared to microscopic crowd simulators in the literature, being able to handle up to one million agents, organized in 2000 clouds and simulated at 86.85 ms per frame.

Andre Da Silva Antonitsch, Diogo Hartmann Muller Schaffer, Gabriel Wetzel Rockenbach, Paulo Knob, Soraia Raupp Musse
Am I Better in VR with a Real Audience?

We present an experimental study using virtual reality (VR) to investigate the effects of a real audience on social inhibition. The study compares a multi-user application shared either locally or remotely. The application engages one user and a real audience (i.e., local or remote conditions), with a control condition in which the user is alone (i.e., alone condition). The differences have been explored by analyzing the objective performance (i.e., answer type and answering time) of users performing a number-categorization task in VR. Moreover, the subjective feelings and perceptions (i.e., perception of others, stress, cognitive workload, presence) of each user have been compared in relation to the location of the real audience. The results show that in the presence of a real audience (in the local and remote conditions), user performance is affected by social inhibition. Furthermore, users are even more influenced when the audience does not share the same room, even though the others are perceived less.

Romain Terrier, Nicolas Martin, Jérémy Lacoche, Valérie Gouranton, Bruno Arnaldi
Video Sequence Boundary Labeling with Temporal Coherence

We propose a method for video sequence boundary labeling that maintains temporal coherence. The method is based on two ideas: we limit the movement of the label boxes to the horizontal direction only, and we reserve free space for the movement of the label boxes in the label layout. The proposed method is able to position label boxes in a video sequence on fewer rows than existing methods while at the same time minimizing the movement of label boxes. We conducted an extensive user experiment in which the proposed method was ranked the best for labeling panorama video sequences, compared to three existing methods.

Petr Bobák, Ladislav Čmolík, Martin Čadík
Two-Layer Feature Selection Algorithm for Recognizing Human Emotions from 3D Motion Analysis

Research on automatic recognition of human emotion from motion is gaining momentum, especially in the areas of virtual reality, robotics, behavior modeling, and biometric identity recognition. One of the challenges is to identify emotion-specific features from a vast number of expressive descriptors of human motion. In this paper, we have developed a novel framework for emotion classification using motion features. We combined a filter-based feature selection algorithm and a genetic algorithm to recognize four basic emotions: happiness, sadness, fear, and anger. The validity of the proposed framework was confirmed on a dataset containing 30 subjects performing expressive walking sequences. Our proposed framework achieved a very high recognition rate outperforming existing state-of-the-art methods in the literature.

Ferdous Ahmed, Marina L. Gavrilova
Improved Volume Scattering

This paper examines two approaches to improve the realism of volume scattering functions. The first uses a convex combination of multiple Henyey-Greenstein distributions to approximate a more complicated scattering distribution, while the second allows negative coefficients. The former is already supported in some renderers, the latter is not and carries a significant performance penalty. Chromatic scattering is also explored, and found to be beneficial in some circumstances. Source code is publicly available under an open-source license.

Haysn Hornbeck, Usman Alim
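The convex-combination idea in the abstract above can be sketched in a few lines. This is a generic illustration rather than the authors' implementation, and the lobe weights and asymmetry parameters below are invented for the example:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Single-lobe Henyey-Greenstein phase function, normalized over the sphere."""
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def mixed_phase(cos_theta, lobes):
    """Convex combination of HG lobes; `lobes` is a list of (weight, g) pairs
    whose weights sum to 1, approximating a more complex scattering distribution."""
    return sum(w * henyey_greenstein(cos_theta, g) for w, g in lobes)
```

Because the weights form a convex combination of normalized lobes, the mixture itself integrates to one over the sphere; the paper's second approach, allowing negative coefficients, would relax the non-negativity of the weights.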
An Interactive Virtual Training System for Assembly and Disassembly Based on Precedence Constraints

Compared with traditional assembly/disassembly training modes, a virtual environment has advantages in enhancing training quality, saving training resources, and removing restrictions on training equipment and place. To balance training quality, experience quality, and especially training costs, an interactive virtual training system for assembly and disassembly is discussed in this paper. The training is based on assembly precedence constraints among assembly paths. Both a developer interface and a user interface are provided to facilitate the management, development, and modification of training contents. Two user-interface modes are provided, based on immersive virtual reality (VR) devices and conventional desktop devices (computer screen, keyboard, and mouse), to suit different economic conditions. Meanwhile, to improve the development efficiency of the training contents, the system is packaged as a software development kit that provides the developer interface with parameter-input fields for the Unity3D virtual simulation engine. Finally, two subjective evaluation experiments are conducted to evaluate the usability of the interaction and precedence-constraint training experience via the desktop interface, and to explore the difference in usability of the training experience between the immersive VR environment and the desktop environment provided by the system.

Zhuoran Li, Jing Wang, Zhaoyu Yan, Xinyao Wang, Muhammad Shahid Anwar
Multi-character Motion Retargeting for Large-Scale Transformations

Unlike single-character motion retargeting, multi-character motion retargeting (MCMR) algorithms should retarget each character's motion correctly while maintaining the interaction between the characters. Existing MCMR solutions mainly focus on small-scale changes between interacting characters. However, many retargeting applications require large-scale transformations. In this paper, we propose a new algorithm for large-scale MCMR. We build on the idea of interaction meshes, which are structures representing the spatial relationship among characters. We introduce a new distance-based interaction mesh that embodies the relationship between characters more accurately by prioritizing local connections over global ones. We also introduce a stiffness weight for each skeletal joint in our mesh deformation term, which defines how undesirable it is for the interaction mesh to deform around that joint. This parameter increases the adaptability of our algorithm to large-scale transformations and reduces optimization time considerably. We compare the performance of our algorithm with the current state-of-the-art MCMR solution for several motion sequences under four different scenarios. Our results show that our method not only improves the quality of retargeting, but also significantly reduces computation time.

Maryam Naghizadeh, Darren Cosker
Video Tamper Detection Based on Convolutional Neural Network and Perceptual Hashing Learning

Perceptual hashing has been widely used in the field of multimedia security. The difficulty with traditional perceptual hashing algorithms is finding suitable perceptual features. In this paper, we propose a perceptual hashing learning method for tamper detection based on a convolutional neural network, in which a hashing layer is introduced to learn the features and hash functions. Specifically, the video is decomposed to obtain temporal representative frame (TRF) sequences containing temporal and spatial domain information. A convolutional neural network is then used to learn visual features of each TRF. We further feed each feature into the hashing layer to learn independent hash functions, and fuse these features to generate the video hash. Finally, the hash functions and the corresponding video hash are obtained by minimizing the classification loss and the quantization error loss. Experimental results and comparisons with state-of-the-art methods show that the algorithm has better classification performance and can effectively perform tamper detection.

Huisi Wu, Yawen Zhou, Zhenkun Wen
Phys-Sketch: Sketching 3D Dynamic Objects in Immersive Virtual Reality

Sketching was traditionally a 2D task. Even as the new generation of VR devices allows sketching in 3D, the drawn models have remained essentially static representations. In this paper, we introduce a new physics-inspired sketching technique built on top of Position-Based Dynamics to enrich 3D drawings with dynamic behaviors. A particle-based method allows interacting in real time with a wide range of materials including fluids, rigid bodies, soft bodies, and cloth. Users can interact with the dynamic sketches and sculpt them while they move, deform, and fall. We analyze the expressiveness of the system through the eyes of two experienced artists. This paper thus also provides a starting point toward an improved generation of physics-enabled sketching applications.

Jose Abel Ticona, Steeven Villa, Rafael Torchelsen, Anderson Maciel, Luciana Nedel
Field-Aware Parameterization for 3D Painting

We present a two-phase method that generates a near-isometric parameterization using a local chart of the surface while still being aware of the geodesic metric. During the first phase, we utilize a novel method that approximates polar coordinates to obtain a preliminary parameterization as well as the gradient of the geodesic field. For the second phase, we present a new optimization that generates a near isometric parameterization while considering the gradient field, allowing us to generate high quality parameterizations while keeping the geodesic information. This local parameterization is applied in a view-dependent 3D painting system, providing a local adaptive map computed at interactive rates.

Songgang Xu, Hang Li, John Keyser
A Novel Method of Multi-user Redirected Walking for Large-Scale Virtual Environments

Wireless virtual reality (VR) solutions for head-mounted displays (HMDs) extend support to multi-user VR systems, so multiple users or participants can collaborate in the same physical space within a large-scale virtual environment. The foremost technical obstacle for a multi-user VR system is how to accommodate multiple users within the same physical space: it needs to address the problem of potential collisions among users who are moving both virtually and physically. By analyzing these challenges, this paper presents a novel method of multi-user redirected walking for large-scale virtual environments. In this method, an improved approach is presented to calculate the rotation gains, and a user clustering algorithm is adopted to divide the users into groups that are automatically adjusted based on the distance between the boundaries or the size of the remaining space. A heuristic algorithm for three-user or two-user redirected walking is then designed to solve the problem of multi-user roaming in a faster and more efficient way than traditional methods. To verify the validity of our method, we used a computer simulation framework to statistically analyze the influence of different factors, such as the physical space size and the number of users. The results show that our multi-user redirected walking algorithm for large-scale virtual environments is effective and practical.

Tianyang Dong, Yifan Song, Yuqi Shen, Jing Fan
Broker-Insights: An Interactive and Visual Recommendation System for Insurance Brokerage

The black-box nature of recommendation systems limits users' understanding and acceptance of the recommendations they receive. User interaction and information visualization play a key role in addressing these drawbacks. In the brokerage domain, insurance brokers offer, negotiate, and sell insurance products to their customers. Supporting brokers in the recommendation process can improve loyalty, profit, and marketing campaigns across their client portfolios. This work presents Broker-Insights, an interactive, visualization-based insurance-product recommender system that supports brokers in decision-making (recommendation) at two levels: recommendations for a specific potential customer, and recommendations for a group of customers. To offer personalized recommendations, Broker-Insights provides a tool to manage customer information in the recommendation task and a module to perform customer segmentation based on specific characteristics. With the help of an eye-tracker, we evaluated Broker-Insights' usability with ten naive users in an offline setting, and also performed an in-the-wild evaluation with three insurance brokers. Results show that data mining methods, when combined with interactive data visualization, improve the user experience and decision-making process in the recommendation task, and increase acceptance of the recommended products.

Paul Dany Atauchi, Luciana Nedel, Renata Galante
Auto-labelling of Markers in Optical Motion Capture by Permutation Learning

Optical marker-based motion capture is a vital tool in applications such as motion and behavioural analysis, animation, and biomechanics. Labelling, that is, assigning optical markers to the pre-defined positions on the body, is a time consuming and labour intensive post-processing part of current motion capture pipelines. The problem can be considered as a ranking process in which markers shuffled by an unknown permutation matrix are sorted to recover the correct order. In this paper, we present a framework for automatic marker labelling which first estimates a permutation matrix for each individual frame using a differentiable permutation learning model and then utilizes temporal consistency to identify and correct remaining labelling errors. Experiments conducted on the test data show the effectiveness of our framework.

Saeed Ghorbani, Ali Etemad, Nikolaus F. Troje
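A common way to make permutation estimation differentiable, in the spirit of the abstract above (though not necessarily the authors' exact model), is to relax permutation matrices to doubly-stochastic ones via Sinkhorn normalization. A minimal pure-Python sketch:

```python
import math

def sinkhorn(scores, n_iters=100):
    """Relax a square score matrix toward a doubly-stochastic matrix by
    alternately normalizing the rows and columns of exp(scores)."""
    n = len(scores)
    m = [[math.exp(v) for v in row] for row in scores]
    for _ in range(n_iters):
        m = [[v / sum(row) for v in row] for row in m]                # rows sum to 1
        col = [sum(m[i][j] for i in range(n)) for j in range(n)]
        m = [[m[i][j] / col[j] for j in range(n)] for i in range(n)]  # columns sum to 1
    return m

def hard_assignment(m):
    """Recover a discrete labelling by taking the arg-max of each row."""
    return [max(range(len(row)), key=row.__getitem__) for row in m]
```

In a full model the scores would come from a network and the relaxation would let gradients flow through the assignment; the temporal-consistency correction described in the abstract would then operate on the recovered labels.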
A Simple Algorithm for Hard Exudate Detection in Diabetic Retinopathy Using Spectral-Domain Optical Coherence Tomography

Hard exudates are usually seen in the course of diabetic retinopathy, one of the most common reasons for blindness registration in the world. According to data presented by the World Health Organization (WHO), the number of people who lose their sight because of undetected diabetes will double by 2050. The purpose of this paper is to introduce an enhanced algorithm for hard exudate detection in Optical Coherence Tomography images, in which dangerous pathological changes can be observed in the form of yellow-red spots. In our experiments, more than 150 images were used to calculate the accuracy of the proposed approach. The algorithm was implemented in Java using the Maven framework. Classification was done on the basis of the authors' own algorithm and compared with an ophthalmologist's decisions. The experiments have shown that the proposed approach achieves 97% accuracy in hard exudate detection.

Maciej Szymkowski, Emil Saeed, Khalid Saeed, Zofia Mariak
Multi-level Motion-Informed Approach for Video Generation with Key Frames

Observing that a motion signal is decomposable into multiple levels, we propose a video generation model that realizes this hypothesis. The model decomposes motion into a two-level signal comprising a global path and a local pattern, modeled via a latent path in the form of a composite Bézier spline and a latent sine function, respectively. In the application context, the model fills a research gap in its ability to smoothly connect an arbitrary number of input key frames. Experimental results indicate that the model improves the smoothness of the generated video. In addition, the ability of the model to separate global and local signals has been validated.

Zackary P. T. Sin, Peter H. F. Ng, Simon C. K. Shiu, Fu-lai Chung, Hong Va Leong
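The global-path component described above rests on composite Bézier splines. As a generic illustration (not the authors' code), a cubic segment can be evaluated with de Casteljau's algorithm:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate one cubic Bezier segment at t in [0, 1] via de Casteljau's algorithm."""
    def lerp(a, b, u):
        return tuple(ai + (bi - ai) * u for ai, bi in zip(a, b))
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

def composite_bezier(segments, t):
    """Evaluate a composite spline: `segments` is a list of 4-tuples of control
    points, and t in [0, len(segments)] selects the segment and local parameter."""
    i = min(int(t), len(segments) - 1)
    return cubic_bezier(*segments[i], t - i)
```

A segment interpolates its first and last control points, so chaining segments whose end point equals the next segment's start point yields a continuous path through the key frames.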
Evaluating the Performance of Virtual Reality Navigation Techniques for Large Environments

We present results from two studies comparing the performance of four different navigation techniques (flight, teleportation, world-in-miniature, and 3D cone-drag) and their combinations in large virtual reality map environments. While prior work has individually examined each of these techniques in other settings, our study presents the first direct comparison between them in large open environments, as well as one of the first comparisons in the context of current-generation virtual reality hardware. Our first study compared common techniques (flight, teleportation, and world-in-miniature) for search and navigation tasks. A follow-up study compared these techniques against 3D cone drag, a direct-manipulation navigation technique used in contemporary tools like Google Earth VR. Our results show the strength of flight as a stand-alone navigation technique, but also highlight five specific ways in which viewers can combine teleportation, world-in-miniature, and 3D cone drag with flight, drawing on the relative strengths of each technique to compensate for the weaknesses of others.

Kurtis Danyluk, Wesley Willett
HMD-TMO: A Tone Mapping Operator for 360 HDR Images Visualization for Head Mounted Displays

We propose a Tone Mapping Operator, denoted HMD-TMO, dedicated to the visualization of 360° High Dynamic Range images on Head-Mounted Displays. The few existing studies on this topic have shown that existing Tone Mapping Operators for classic 2D images are not adapted to 360° High Dynamic Range images. Consequently, several dedicated operators have been proposed. Instead of operating on the entire 360° image, they only consider the part of the image currently viewed by the user. Tone mapping a part of the 360° image is less challenging, but it does not preserve the global luminance dynamic of the scene. To cope with this problem, we propose a novel tone mapping operator that takes advantage of both a view-dependent tone mapping that enhances contrast and a Tone Mapping Operator applied to the entire 360° image that preserves global coherency. Furthermore, we present a subjective study to model lightness perception in a Head-Mounted Display.

Ific Goudé, Rémi Cozot, Francesco Banterle
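HMD-TMO combines a view-dependent operator with a global one. For context, a minimal global tone mapping sketch in the style of Reinhard's global operator (not the paper's operator; the key value below is an arbitrary choice) looks like:

```python
import math

def global_tone_map(luminances, key=0.18):
    """Minimal Reinhard-style global operator: scale by the log-average scene
    luminance, then compress the result into [0, 1) with L / (1 + L)."""
    eps = 1e-6  # avoid log(0) on black pixels
    log_avg = math.exp(sum(math.log(eps + l) for l in luminances) / len(luminances))
    scaled = [key * l / log_avg for l in luminances]
    return [l / (1.0 + l) for l in scaled]
```

A view-dependent operator would instead compute the log-average only over the current viewport; the paper's contribution is to blend the two so that contrast is enhanced locally while global coherency is preserved.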
Integrating Peridynamics with Material Point Method for Elastoplastic Material Modeling

We present a novel integral-based Material Point Method (MPM) using a state-based peridynamics structure for modeling elastoplastic materials and fracture animation. Previous partial-derivative-based MPM studies face underlying instability issues with particle distribution and the complexity of modeling discontinuities. To alleviate these problems, we integrate the strain metric into the basic elastic constitutive model by using a material-point truss structure, which outperforms differential-based methods in both accuracy and stability. To model plasticity, we combine our constitutive model with deviatoric flow theory and a simple yield function. Handling cracking is straightforward in our hybrid framework: our method adopts two time-integration schemes to update the crack interface and the inner fracture parts, avoiding unnecessary grid duplication. Our work can create a wide range of material phenomena including elasticity, plasticity, and fracture, and our framework provides an attractive method for producing elastoplastic materials and fracture with visual realism and high stability.

Yao Lyu, Jinglu Zhang, Jian Chang, Shihui Guo, Jian Jun Zhang
“Am I Talking to a Human or a Robot?”: A Preliminary Study of Human’s Perception in Human-Humanoid Interaction and Its Effects in Cognitive and Emotional States

This preliminary study concerns the identification of the effects human-humanoid interaction can have on human emotional states and behaviors through physical interaction. We used three cases in which people face three different types of physical interaction: with a neutral person, with the social robot Nadine, and with the person on whom Nadine was modelled, Professor Nadia Thalmann. To support our research, we used EEG recordings to capture the physiological signals derived from the brain during each interaction, audio recordings to compare speech features, and a questionnaire to provide complementary psychometric data. Our results mainly showed the existence of frontal theta oscillations while interacting with the humanoid, which probably reflects the participants' higher cognitive effort, as well as differences in the occipital area of the brain and thus in the visual attention mechanisms. Participants' levels of concentration and motivation while interacting with the robot were higher, also indicating a greater amount of interest. The outcome of this experiment can broaden the field of human-robot interaction, leading to more efficient, meaningful, and natural interaction.

Evangelia Baka, Ajay Vishwanath, Nidhi Mishra, Georgios Vleioras, Nadia Magnenat Thalmann
Transferring Object Layouts from Virtual to Physical Rooms: Towards Adapting a Virtual Scene to a Physical Scene for VR

One crucial problem in VR is to establish a good correspondence between the virtual and physical scenes, while respecting the key structure and layout of the two spaces. A VR scene may well allow a user to move in a certain direction, but in reality a wall could block his/her path. The reason is that the virtual scene is often designed for a large user population, and thus cannot consider individual physical constraints. Recent works rely on redirected walking or additional VR hardware. We propose an alternative solution: adapting the virtual scene to the physical scene. The goal is to preserve the layout of the original design while allowing the virtual scene to be fully walkable. To move towards that goal, this paper proposes an object layout transfer mechanism that transfers an object layout from a virtual room to a physical room, so that a VR application can automatically adapt a designer's virtual scene to fit any household room. Our solution involves extracting key object-layout features and a multi-stage algorithm to shuffle the objects in the physical space. A user study demonstrates that the proposed model is able to synthesize transferred layouts that appear to be created by a human designer.

Zackary P. T. Sin, Peter H. F. Ng, Simon C. K. Shiu, Fu-lai Chung, Hong Va Leong
Fast Simulation of Crowd Collision Avoidance

Real-time large-scale crowd simulations with realistic behavior are important for many application areas. On CPUs, the ORCA pedestrian steering model is often used for agent-based pedestrian simulations. This paper introduces a technique for running the ORCA pedestrian steering model on the GPU. Performance improvements of up to 30 times over a multi-core CPU model are demonstrated. This improvement is achieved through a specialized linear program solver on the GPU and spatial partitioning of information sharing. This allows over 100,000 people to be simulated in real time (60 frames per second).

John Charlton, Luis Rene Montana Gonzalez, Steve Maddock, Paul Richmond
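The spatial-partitioning step mentioned above can be illustrated with a uniform grid for neighbor queries. This is a CPU sketch of the data structure only, not the GPU ORCA solver, and the cell size and radius are arbitrary:

```python
from collections import defaultdict
import math

def build_grid(positions, cell):
    """Bin agent indices into uniform grid cells keyed by integer coordinates."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(idx)
    return grid

def neighbors(grid, positions, pos, cell, radius):
    """Indices of agents within `radius` of `pos`: scan only nearby cells,
    then apply an exact distance test."""
    cx, cy = int(pos[0] // cell), int(pos[1] // cell)
    reach = int(radius // cell) + 1
    found = []
    for gx in range(cx - reach, cx + reach + 1):
        for gy in range(cy - reach, cy + reach + 1):
            for idx in grid.get((gx, gy), ()):
                if math.dist(positions[idx], pos) <= radius:
                    found.append(idx)
    return found
```

Each ORCA agent only needs its handful of nearest neighbors to build velocity constraints, so restricting the search this way is what makes real-time simulation of 100,000 agents plausible.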
Improved Automatic Speed Control for 3D Navigation

As technology progresses, it is possible to increase the size and complexity of 3D virtual environments, so today we need to deal with multiscale virtual environments. Ideally, the user should be able to navigate such environments efficiently and robustly, which requires control of the user's speed during navigation. Manual speed control across multiple orders of magnitude suffers from issues such as overshooting and introduces additional complexity. Most previously presented methods for automatically controlling navigation speed do not generalize well to environments with varying scales. We present an improved method to automatically control the user's speed during 3D virtual environment navigation. The main benefit of our approach is that it automatically adapts the navigation speed in a manner that enables efficient navigation with maximum freedom, while still avoiding collisions. The results of a usability test show a significant reduction in completion time for a multi-scale navigation task.

Domi Papoi, Wolfgang Stuerzlinger
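A common building block for such automatic speed control, shown here as a simplified sketch rather than the authors' full method (all constants are placeholders), is to scale travel speed with the distance to the nearest visible surface, clamped to sane limits:

```python
def navigation_speed(dist_to_nearest, gain=0.5, min_speed=0.05, max_speed=50.0):
    """Scale travel speed with distance to the nearest obstacle, so motion is
    slow near geometry (avoiding collisions and overshoot) and fast in open space."""
    return max(min_speed, min(max_speed, gain * dist_to_nearest))
```

Because speed shrinks with proximity, the same control works across scales: near a small object the user moves slowly, while far from any geometry the user covers ground quickly.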
Efficient Rendering of Rounded Corners and Edges for Convex Objects

Many manufactured objects and worn surfaces exhibit rounded corners and edges. These fine details are a source of sharp highlights and shading effects, important to our perception of joining surfaces. However, their representation is often neglected because they introduce complex geometric meshing in very small areas. This paper presents a new method for managing thin rounded corners and edges without explicitly modifying the underlying geometry, so as to produce their visual effects in sample-based rendering algorithms (e.g., ray tracing and path tracing). Our method relies on positioning virtual spheres and cylinders, associated with a detection and acceleration structure that makes the process more robust and efficient than existing bevel shaders. Moreover, using implicit surfaces rather than polygonal meshes allows our method to generate extreme close-up views of the surfaces with much better visual quality for little additional memory. We illustrate the achieved effects and present comparisons with existing industrial software shaders.

Simon Courtin, Sébastien Horna, Mickaël Ribadière, Pierre Poulin, Daniel Meneveaux
Convenient Tree Species Modeling for Virtual Cities

Generating large-scale 3D tree models for digital twin cities at a species level-of-detail poses challenges of automation and maintenance for such dynamically evolving models. This paper presents an inverse procedural modeling methodology to automate the generation of 3D tree species models based on growth spaces from point clouds and pre-formulated L-system growth rules. The rules capture the botanical tree architecture at a species level in terms of growth process, branching pattern, and responses to external stimuli. Users only need to fill in a species profile template and provide the growth space derivable from the point clouds. The parameters involved in the rules are automatically optimised within the growth space to produce species models that represent actual trees. This methodology enables users without 3D modeling skills to conveniently produce highly representative 3D models of any tree species at a large scale.

Like Gobeawan, Daniel Joseph Wise, Alex Thiam Koon Yee, Sum Thai Wong, Chi Wan Lim, Ervine Shengwei Lin, Yi Su
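The growth-rule formalism mentioned above is the classical L-system. A minimal rewriting engine, shown as a generic illustration with a textbook rule set rather than a species profile from the paper:

```python
def lsystem(axiom, rules, iterations):
    """Expand an L-system: rewrite every symbol in parallel on each pass,
    leaving symbols without a matching rule unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

In an inverse procedural modeling pipeline like the one described, the rules are fixed per species and only their numeric parameters are optimized, so that the expanded structure fills the growth space extracted from the point cloud.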
On Visualization of Movements for Monitoring Older Adults

It is a well-known statistic that the proportion of older adults in the population will globally surpass that of other age categories. It is also observed that a majority of older adults would prefer to stay in their own place of residence for as long as they can. In support of such an independent-living lifestyle, various ambient sensor technologies have been designed that can be deployed for long-term monitoring of movements and activities. Depending on the required level of detail of such a monitoring system, spatiotemporal data can be collected during various intervals of daily activities for detecting any onset of anomalies: the greater the granularity, the deeper the level of detail associated with the movement patterns at instances of time and over collective durations. This paper first presents an overview of various visualization techniques that can be employed for monitoring movement patterns. It further presents the results of movement visualization experiments using two sample datasets, which can be used as a basis for determining movement anomalies. The first dataset is associated with global movement patterns between various locations in an ambient assisted living (AAL) environment, and the other with movement tracking using wearable sensing technologies.

Shahram Payandeh, Eddie Chiu

CGI’19 Short Papers

Frontmatter
Create by Doing – Action Sequencing in VR

In every virtual reality application there are actions to perform, often in a logical order. This logical ordering can be a predefined sequence of actions, enriched with the representation of different possibilities, which we refer to as a scenario. Authoring such a scenario for virtual reality is still a difficult task, as it needs expertise from both the domain expert and the developer. We propose to let the domain expert create the scenario in virtual reality by herself, without coding, through the paradigm of creating by doing. The domain expert can run an application, record a sequence of actions as a scenario, and then reuse this scenario for other purposes, such as an automatic replay of the scenario by a virtual actor to check the obtained scenario, the injection of this scenario as a constraint or a guide for a trainee, or the monitoring of the scenario as it unfolds during a procedure.

Flavien Lécuyer, Valérie Gouranton, Adrien Reuzeau, Ronan Gaugne, Bruno Arnaldi
Deep Intrinsic Image Decomposition Using Joint Parallel Learning

Intrinsic image decomposition is a highly ill-posed problem in computer vision that refers to extracting albedo and shading from an image. In this paper, we regard it as an image-to-image translation issue and propose a novel approach that makes use of parallel convolutional neural networks (ParCNN) to learn albedo and shading with different spatial features and data distributions, respectively. At the same time, the energy is preserved as much as possible under the constraint of an image reconstruction loss shared by the two networks. Moreover, we add a gradient prior based on the traditional image formation process to the loss function, which leads to a performance improvement over our basic learning model by combining the advantages of the physically-based method and the data-driven method. We choose the MPI Sintel dataset for model training and testing. Quantitative and qualitative evaluation results show that our method outperforms the state-of-the-art methods.

Yuan Yuan, Bin Sheng, Ping Li, Lei Bi, Jinman Kim, Enhua Wu
Procedurally Generating Biologically Driven Feathers

Computer-generated feathers are of interest for both research and digital production, but previously proposed modeling methods have been user controlled rather than shaped by actual data. This work demonstrates a new, biologically informed model for individual feathers that uses techniques from computer vision and computer graphics to derive and control shapes. The method reduces the amount of user effort and influence and provides an accurate means of using natural variation of real feathers to synthesize photo-realistic feather assets.

Jessica Baron, Eric Patterson
A Unified Algorithm for BRDF Matching

This paper generalizes existing BRDF fitting algorithms presented in the literature that aim to find a mapping of the parameters of an arbitrary source material model to the parameters of a target material model. A material model in this context is a function that maps a list of parameters, such as roughness or specular color, to a BRDF. Our conversion function approximates the original model as closely as possible under a chosen similarity metric, either in physical reflectivities or perceptually, and calculates the error with respect to this conversion. Our conversion function imposes no constraints other than that the dimensionalities of the represented BRDFs match.

Ron Vanderfeesten, Jacco Bikker
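The idea of mapping one material model's parameters to another's under a similarity metric can be sketched with a brute-force search over a one-dimensional highlight lobe. The `phong` and `gauss` lobes and the L2 metric below are illustrative stand-ins, not the models or metric used in the paper:

```python
import numpy as np

def match_parameters(source_brdf, target_brdf, src_params, grid):
    """Find the target-model parameter that minimizes an L2 metric
    against the source model, sampled over the half-angle (brute force)."""
    samples = np.linspace(0.0, np.pi / 2, 64)      # half-angle samples
    ref = source_brdf(samples, src_params)
    best, best_err = None, np.inf
    for p in grid:
        err = np.mean((target_brdf(samples, p) - ref) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best, best_err

# Hypothetical 1D lobes: a Phong-style and a Gaussian-style highlight.
phong = lambda th, n: np.cos(th) ** n
gauss = lambda th, s: np.exp(-(th / s) ** 2)

# Convert a Phong exponent of 50 to the closest Gaussian width.
s, err = match_parameters(phong, gauss, 50.0, np.linspace(0.05, 1.0, 200))
```

A real conversion would replace the grid search with a continuous optimizer and the L2 metric with the chosen physical or perceptual metric.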
Multi-layer Perceptron Architecture for Kinect-Based Gait Recognition

Accurate gait recognition is of high significance for numerous industrial and consumer applications, including virtual reality, online games, medical rehabilitation, video surveillance, and others. This paper proposes a multi-layer perceptron (MLP) based neural network architecture for human gait recognition. Two unique geometric features, joint relative cosine dissimilarity (JRCD) and joint relative triangle area (JRTA), are introduced. These features are view and pose invariant, and thus enhance recognition performance. The MLP model is trained using dynamic JRTA and JRCD sequences. The performance of the proposed MLP architecture is evaluated on a publicly available 3D Kinect skeleton gait database and is shown to be superior to other state-of-the-art methods.

A. S. M. Hossain Bari, Marina L. Gavrilova
New Three-Chemical Polynomial Reaction-Diffusion Equations

Reaction-diffusion (RD) generates time-varying patterns or noises, which are used to create beautiful patterned or noisy variations in colors, bumps, flow details, or other parameters. RD can be solved relatively easily on various domains: image, curved surface, and volumetric domains, making its applications popular. Being widely available, most of the patterns from known RD equations have been well explored. In this paper, we advance this field by providing a large number of new reaction equations. Among the vast space of new equations, we focus on three-chemical polynomial reactions, as the three chemicals can be easily mapped to any colors. We propose a set of new equations that generate new time-varying patterns.

Do-yeon Han, Byungmoon Kim, Oh-young Song
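As context for readers unfamiliar with reaction-diffusion, a minimal explicit solver for the classic two-chemical Gray-Scott system (not the paper's new three-chemical polynomial reactions) fits in a few lines; the parameter values below are typical pattern-forming choices, not taken from the paper:

```python
import numpy as np

def laplacian(Z):
    # 5-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
          + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott_step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the two-chemical Gray-Scott reaction."""
    UVV = U * V * V
    U = U + dt * (Du * laplacian(U) - UVV + f * (1 - U))
    V = V + dt * (Dv * laplacian(V) + UVV - (f + k) * V)
    return U, V

# Seed a small perturbation in the center and iterate.
n = 64
U = np.ones((n, n)); V = np.zeros((n, n))
U[28:36, 28:36] = 0.50
V[28:36, 28:36] = 0.25
for _ in range(100):
    U, V = gray_scott_step(U, V)
```

The chemical concentrations `U` and `V` (or, in the three-chemical case, a triple of fields) can then be mapped directly to color channels.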
Capturing Piecewise SVBRDFs with Content Aware Lighting

We present a method for capturing piecewise SVBRDFs over flat surfaces that consist of piecewise homogeneous materials with arbitrary geometric details. To achieve fast and simple capture, our method first evaluates the piecewise material distribution over the surface from an image taken under uniform lighting and then finds a suitable 2D light pattern according to the material's spatial distribution, which combines both step-edge and gradient lighting patterns. After that, we capture another image of the surface lit by the optimized light pattern and reconstruct the SVBRDF and normal details from the two captured images. The capture takes only two photographs and the light pattern optimization is executed in real time, which enables us to design a simple device setup for on-site capturing. We validate our approach and demonstrate the efficiency of our method on a wide range of synthetic and real materials.

Xiao Li, Peiran Ren, Yue Dong, Gang Hua, Xin Tong, Baining Guo
VRSpineSim: Applying Educational Aids Within A Virtual Reality Spine Surgery Simulator

We contribute VRSpineSim, a stereoscopic virtual reality surgery simulator that allows novice surgeons to learn and experiment with the spinal pedicle screw insertion (PSI) procedure using simplified interaction capabilities and 3D haptic user interfaces. By collaborating with medical experts and following an iterative approach, we provide a characterization of the PSI task and derive requirements for the design of a 3D immersive interactive simulation system. We present how these requirements were realized in our prototype and outline its educational benefits for training the PSI procedure. We conclude with the results of a preliminary evaluation of VRSpineSim and reflect on our interface's benefits and limitations for future relevant research efforts.

Ahmed E. Mostafa, Won Hyung Ryu, Sonny Chan, Kazuki Takashima, Gail Kopp, Mario Costa Sousa, Ehud Sharlin
Visual Analysis of Bird Moving Patterns

In spite of recent advances in data analysis techniques, the exploration of complex, unstructured spatio-temporal data can still be difficult. An interactive approach, with a human in the analysis loop, represents a valuable addition to automatic analysis methods. We describe an interactive visual analysis method for the exploration of complex spatio-temporal data sets. The proposed approach is illustrated using a publicly available data set, a collection of bird locations recorded over an extended period of time. In order to explore and comprehend complex patterns in bird movements over time, we provide two new views, the centroids scatter plot view and the distance plot view. Successful analysis of the bird data indicates the usefulness of the newly proposed approach for other spatio-temporal data of a similar structure.

Krešimir Matković, Denis Gračanin, Michael Beham, Rainer Splechtna, Miriah Meyer, Elena Ginina
GAN with Pixel and Perceptual Regularizations for Photo-Realistic Joint Deblurring and Super-Resolution

In this paper, we propose a Generative Adversarial Network with Pixel and Perceptual regularizations, denoted as P2GAN, to jointly restore single motion-blurry and low-resolution images into clear and high-resolution images. It is an end-to-end neural network consisting of a deblurring module and a super-resolution module, which first repairs degraded pixels in the motion-blurred images and then outputs the deblurred images and deblurred features for further reconstruction. More specifically, the proposed P2GAN simultaneously integrates a pixel-wise loss at the pixel level with contextual and adversarial losses at the perceptual level, in order to guide the deblurring and super-resolution reconstruction of raw images that are blurry and in low resolution, which helps obtain realistic images. Extensive experiments conducted on a real-world dataset demonstrate the effectiveness of the proposed approach, which outperforms the state-of-the-art models.

Yong Li, Zhenguo Yang, Xudong Mao, Yong Wang, Qing Li, Wenyin Liu, Ying Wang
Realistic Pedestrian Simulation Based on the Environmental Attention Model

This paper addresses the problem of crowd simulation while maintaining attention behaviors. We set up an environmental attention model, which simulates the various attention targets of virtual characters influenced by the surrounding environment. According to an investigation and analysis of crowd behaviors, the virtual character attention attributes are divided into intrinsic attributes and attention tendency. Based on the various environmental attributes and character attributes, the attention target of a virtual character can be calculated while it walks. Different attention targets lead to different head-turning behaviors for characters with different attributes. In this way, we implement the attention individualization of virtual characters and, thereby, improve the diversity of crowd animation.

Di Jiao, Tianyu Huang, Gangyi Ding, Yiran Du
Fast Superpixel Segmentation with Deep Features

In this paper, we propose a superpixel segmentation method which utilizes extracted deep features along with the combination of color and position information of the pixels. It is observed that the results can be improved significantly using better initial seed points. Therefore, we incorporate a one-step k-means clustering to calculate the positions of the initial seed points and apply the active search method to ensure that each pixel belongs to the right seed. The proposed method was compared to other state-of-the-art methods quantitatively and qualitatively, and was found to produce promising results that adhere to object boundaries better than the others.

Mubinun Awaisu, Liang Li, Junjie Peng, Jiawan Zhang
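A minimal sketch of one-step k-means seed initialization, using only color and position features (the paper additionally uses deep features, and its exact procedure may differ):

```python
import numpy as np

def init_seeds(features, h, w, k):
    """Place seeds on a regular grid, then run a single k-means
    update so each seed moves to the centroid of its nearest pixels.
    `features` is (h*w, d): e.g. color channels plus (x, y) position.
    The grid may yield slightly more or fewer than k seeds."""
    step = int(np.sqrt(h * w / k))
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    seeds = features[(ys * w + xs).ravel()]          # initial grid seeds
    # One k-means assignment + update step:
    d2 = ((features[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(1)
    for i in range(len(seeds)):
        members = features[labels == i]
        if len(members):
            seeds[i] = members.mean(0)
    return seeds

# Toy usage on a random image with normalized (x, y) appended.
rng = np.random.default_rng(1)
h, w = 16, 16
img = rng.random((h, w, 3))
yy, xx = np.mgrid[0:h, 0:w]
feats = np.concatenate([img.reshape(-1, 3),
                        np.stack([xx.ravel() / w, yy.ravel() / h], 1)], 1)
seeds = init_seeds(feats, h, w, k=4)
```

The refined seeds would then initialize the full superpixel clustering over the deep-feature space.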
BricklAyeR: A Platform for Building Rules for AmI Environments in AR

With the proliferation of Intelligent Environments, the need to configure their behaviors to address their users' needs emerges. In combination with current advances in Augmented and Virtual Reality and Conversational Agents, new opportunities arise for systems which allow people to program their environment. Whereas today this requires programming skills, soon, when most spaces will include smart objects, tools which allow their collaborative management by non-technical users will become a necessity. To that end, we present BricklAyeR, a novel collaborative platform that allows non-programmers to define the behavior of Intelligent Environments in Augmented Reality, through an intuitive, 3D building-block user interface that follows the trigger-action programming principle, with the help of a Conversational Agent.

Evropi Stefanidi, Dimitrios Arampatzis, Asterios Leonidis, George Papagiannakis
Fine-Grained Color Sketch-Based Image Retrieval

We propose a novel fine-grained color sketch-based image retrieval (CSBIR) approach. The CSBIR problem is investigated for the first time using deep learning networks, in which deep features are used to represent color sketches and images. A novel ranking method considering both shape matching and color matching is also proposed. In addition, we build a CSBIR dataset with color sketches and images to train and test our method. The results show that our method has better retrieval performance.

Yu Xia, Shuangbu Wang, Yanran Li, Lihua You, Xiaosong Yang, Jian Jun Zhang
Development and Usability Analysis of a Mixed Reality GPS Navigation Application for the Microsoft HoloLens

Navigation in real environments is arguably one of the primary applications of the mixed reality (MR) interaction paradigm. We propose a wearable MR system based on off-the-shelf devices as an alternative to the widespread handheld-based GPS navigation paradigm. Our system uses virtual holograms placed on the terrain instead of the usual heads-up display approach, where the augmentations follow the line of sight. In a user experiment, we assessed performance and usability, monitoring user attention through EEG while participants performed a navigation task using either the MR or the handheld interface. Results show that users deemed our solution to offer higher visibility of both the oncoming traffic and the suggested route. EEG readings also exposed a significantly less demanding focus level for our prototype. The MR system was also indicated to be easy to learn and use.

Renan Luigi Martins Guarese, Anderson Maciel
A Synthesis-by-Analysis Network with Applications in Image Super-Resolution

Recent studies have demonstrated the successful application of convolutional neural networks to single image super-resolution. In this paper, we present a general synthesis-by-analysis network for super-resolving a low-resolution image. Unlike the Laplacian Pyramid Super-Resolution Network (LapSRN), which progressively reconstructs the sub-band residuals of high-resolution images, our proposed network breaks the sequential dependency by expanding the input and output into multiple disjoint bandpass signals. At each band, we perform the nonlinear mapping in a truncated frequency interval by applying a carefully designed sub-network. Specifically, we propose a validated network sub-structure that considers both efficiency and accuracy. We also perform exhaustive experiments on existing commonly used datasets. The recovered high-resolution images are competitive or even superior in quality compared to those produced by other methods.

Lechao Cheng, Zhangye Wang
Towards Moving Virtual Arms Using Brain-Computer Interface

Motor imagery Brain-Computer Interface (MI-BCI) is a paradigm widely used for controlling external devices by imagining bodily movements. This technology has inspired researchers to use it in several applications such as robotic prostheses, games, and virtual reality (VR) scenarios. We study the inclusion of an imaginary third arm as a part of the control commands for BCI. To this end, we analyze a set of open-close hand tasks (including a third arm that comes out from the chest) performed in two VR scenarios: the classical BCI Graz, with arrows as feedback; and a first-person view of a human-like avatar performing the corresponding tasks. The purpose of this study is to explore the influence of both the time window of the trials and the frequency bands on the accuracy of the classifiers. Accordingly, we used a Filter Bank Common Spatial Patterns (FBCSP) algorithm over several time windows (100, 200, 400, 600, 800, 1000 and 2000 ms) for extracting features and evaluating the classification accuracy. The offline classification results show that a third arm can be effectively used as a control command (accuracy > 62%). Likewise, the human-like avatar condition (67%) significantly outperforms the Graz condition (63%), suggesting that the realistic scenario can reduce the abstractness of the third arm. This study thus motivates the further inclusion of non-embodied motor imagery tasks in BCI systems.

Jaime Riascos, Steeven Villa, Anderson Maciel, Luciana Nedel, Dante Barone
ODE-Driven Sketch-Based Organic Modelling

How to efficiently create 3D models from 2D sketches is an important problem. In this paper we propose a sketch-based, ordinary differential equation (ODE) driven modelling technique to tackle this problem. We first generate the 2D silhouette contours of a 3D model. Then, we select proper primitives for each of the corresponding silhouette contours. After that, we develop an ODE-driven and sketch-guided deformation method. It uses ODE-based deformations to deform the primitives to exactly match the generated 2D silhouette contours in one view plane. Our experiments demonstrate that the proposed approach can create 3D models from 2D silhouette contours easily and efficiently.

Ouwen Li, Zhigang Deng, Shaojun Bian, Algirdas Noreika, Xiaogang Jin, Ismail Khalid Kazmi, Lihua You, Jian Jun Zhang
Physical Space Performance Mapping for Lenticular Lenses in Multi-user VR Displays

One of the common issues of virtual reality systems, regardless of whether they are head mounted or projection based, is that they cannot provide correct perspective views to more than one user. This limitation increases collaboration times and reduces the usability of such systems. Lenticular lenses have previously been used to separate users in multi-user VR systems. In this paper we present an assessment of the area of interaction of users in a multi-user VR system that uses lenticular lenses.

Juan Sebastian Munoz-Arango, Dirk Reiners, Carolina Cruz-Neira
Parallax Occlusion Mapping Using Distance Fields

Parallax occlusion mapping (POM) is a technique to introduce 3D definition using a depth map instead of adding new geometry. The technique relies on ray tracing, with higher sample counts resulting in a better approximation, especially at steep angles. The distance between samples is constant, so it is possible to skip over fine detail at lower sample counts. Our technique relies on a distance field (DF) instead of a depth map. This allows us to ray march through the field and lower the sample count considerably; we can get good results even with a single sample. Comparable results are obtained with less than half the samples of the industry-standard POM approach.

Saad Khattak, Andrew Hogue
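The core idea, stepping a ray by the locally stored distance instead of a constant increment, can be illustrated in 2D against a toy heightfield. The brute-force distance field, grid resolution, and hit threshold below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def march_distance_field(origin, direction, dist, grid, max_steps=64):
    """Sphere-trace through a distance field `dist`, sampled on a
    grid x grid lattice over [0,1]^2; returns the hit point or None."""
    p = np.array(origin, float)
    d = np.array(direction, float)
    d /= np.linalg.norm(d)
    for _ in range(max_steps):
        i = np.clip(np.rint(p * (grid - 1)).astype(int), 0, grid - 1)
        step = dist[i[1], i[0]]         # safe step: distance to surface
        if step < 2.0 / grid:           # within ~one cell of the surface
            return p
        p = p + d * step
        if (p < 0).any() or (p > 1).any():
            return None                 # left the volume without a hit
    return None

# Toy heightfield surface and its brute-force (unsigned) distance field.
xs = np.linspace(0.0, 1.0, 256)
height = 0.5 + 0.3 * np.sin(2 * np.pi * xs)
grid = 64
gx, gy = np.meshgrid(np.linspace(0, 1, grid), np.linspace(0, 1, grid))
surf = np.stack([xs, height], 1)
pts = np.stack([gx.ravel(), gy.ravel()], 1)
dist = np.linalg.norm(pts[:, None, :] - surf[None, :, :],
                      axis=2).min(1).reshape(grid, grid)

# March straight down from above the surface.
hit = march_distance_field([0.5, 1.0], [0.0, -1.0], dist, grid)
```

Because each step is the distance to the nearest surface point, the ray can take large steps in empty regions and cannot skip over detail, which is why few samples suffice compared to constant-step POM.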
Object Grasping of Humanoid Robot Based on YOLO

This paper presents a system that aims to achieve autonomous grasping for micro-controller based humanoid robots such as the Inmoov robot [1]. The system consists of a visual sensor, a central controller and a manipulator. We modify the open-source object detection software YOLO (You Only Look Once) v2 [2] and associate it with the visual sensor so that the sensor can detect not only the category of the target object but also its location, with the help of a depth camera. We also estimate the dimensions (i.e., the height and width) of the target based on the bounding box technique (Fig. 1). After that, we send the information to the central controller (a humanoid robot), which controls the manipulator (a customised robotic hand) to grasp the object using inverse kinematics theory. We conduct experiments to test our method with the Inmoov robot. The experiments show that our method is capable of detecting the object and driving the robotic hand to grasp the target object.
Fig. 1. Autonomous grasping with real-time object detection

Li Tian, Nadia Magnenat Thalmann, Daniel Thalmann, Zhiwen Fang, Jianmin Zheng
Bivariate BRDF Estimation Based on Compressed Sensing

We propose a method of estimating a bivariate BRDF from a small number of sampled data using compressed sensing. This method aims to estimate the reflectance of various materials by using a representation space that preserves local information when restored by compressed sensing. We conducted simulated measurements using randomly sampled data and data sampled according to the camera position and orientation, and confirmed that most of the BRDF was successfully restored from 40% sampled data in the case of simulated measurement using a camera and markers.

Haru Otani, Takashi Komuro, Shoji Yamamoto, Norimichi Tsumura
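The kind of sparse recovery that compressed sensing performs can be sketched with a generic iterative shrinkage-thresholding (ISTA) solver on a synthetic signal at a 40% sampling rate, as mentioned above. The random sensing matrix and canonical sparsity basis here are illustrative, not the paper's BRDF representation space:

```python
import numpy as np

def ista(A, y, lam=0.01, step=None, iters=1000):
    """ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = ||A^T A||_2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)         # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x

# 5-sparse signal of length 100, observed through 40 random projections.
rng = np.random.default_rng(0)
n, m = 100, 40                                   # 40% sampling
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y)
```

In the paper's setting the unknown vector would be the bivariate BRDF coefficients in the chosen representation space, and the measurement rows would correspond to the sampled camera/light configurations.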
Nadine Humanoid Social Robotics Platform

Developing a social robot architecture is very difficult as there are countless possibilities and scenarios. In this paper, we introduce the design of a generic social robotics architecture, deployed in the Nadine social robot, that can be customized to handle any scenario or application and allows her to express human-like emotions, personality, behaviors, and dialog. Our design comprises three layers, namely the perception, processing and interaction layers, and allows modularity (adding/removing sub-modules) as well as task- or environment-based customizations (for example, changes in the knowledge database, gestures, or emotions). We note that it is difficult to compile a precise state of the art for robots, as each of them might be developed for different tasks and different work environments; the robots may also have different hardware, which makes comparison challenging. In this paper, we compare the Nadine social robot with state-of-the-art robots on the basis of social robot characteristics such as speech recognition and synthesis, gaze, face and object recognition, the affective system, dialog interaction capabilities, and memory.

Manoj Ramanathan, Nidhi Mishra, Nadia Magnenat Thalmann

ENGAGE’19 Workshop Full Papers

Frontmatter
Gajit: Symbolic Optimisation and JIT Compilation of Geometric Algebra in Python with GAALOP and Numba

Modern Geometric Algebra software systems tend to fall into one of two categories: either fast, difficult to use, statically typed, and syntactically different from the mathematics, or slow, easy to use, dynamically typed, and syntactically close to the mathematical conventions. Gajit is a system that aims to get the best of both worlds. It allows us to prototype and debug algorithms with the Python library clifford [1], which is designed to be easy to read and write, and then to optimise our code both symbolically with GAALOP [2] and via the LLVM pipeline with Numba [3], resulting in highly performant code for very little additional effort.

Hugo Hadfield, Dietmar Hildenbrand, Alex Arsenovic
Geometric Algebra Levenberg-Marquardt

This paper introduces a novel and matrix-free implementation of the widely used Levenberg-Marquardt algorithm, in the language of Geometric Algebra. The resulting algorithm is shown to be compact, geometrically intuitive, numerically stable and well suited for efficient GPU implementation. An implementation of the algorithm and the examples in this paper are publicly available.

Steven De Keninck, Leo Dorst
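For reference, the classical matrix-based Levenberg-Marquardt iteration that the paper reformulates in Geometric Algebra looks roughly as follows (a generic sketch, not the paper's matrix-free GA algorithm):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, iters=100, lam=1e-3):
    """Classical Levenberg-Marquardt: damped Gauss-Newton steps with
    an accept/reject adaptation of the damping factor."""
    x = np.asarray(x0, float)
    cost = 0.5 * np.sum(residual(x) ** 2)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        A = J.T @ J + lam * np.eye(len(x))   # damped normal equations
        dx = np.linalg.solve(A, -J.T @ r)
        new_cost = 0.5 * np.sum(residual(x + dx) ** 2)
        if new_cost < cost:                  # accept step, trust model more
            x, cost, lam = x + dx, new_cost, lam * 0.5
        else:                                # reject step, damp harder
            lam *= 2.0
    return x

# Toy problem: fit y = a * exp(b * t) to noiseless synthetic data (a=2, b=-1).
t = np.linspace(0, 2, 20)
y = 2.0 * np.exp(-1.0 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], 1)
p = levenberg_marquardt(res, jac, [1.0, 0.0])
```

The explicit `J.T @ J` assembly and linear solve are exactly the matrix machinery that the GA formulation avoids.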
Transverse Approach to Geometric Algebra Models for Manipulating Quadratic Surfaces

Quadratic surfaces are gaining more and more attention in the geometric algebra community, and some frameworks to represent, transform, and intersect these quadratic surfaces have been proposed. To the best of our knowledge, however, no framework has yet been proposed that supports all the operations required to completely handle these surfaces. Some existing frameworks do not allow the construction of quadratic surfaces from control points, while others do not allow these quadratic surfaces to be transformed. Although no single framework covers all the required operations, the already proposed frameworks taken together do cover all the operations over quadratic surfaces. This paper presents an approach that transversely uses different frameworks to cover all the operations on quadratic surfaces. We employ one framework to represent any quadratic surface, either using control points or the coefficients of its implicit form, and then map the representation into another framework so that we can transform the surfaces and compute their intersections. Our approach also allows us to easily extract some geometric properties.

Stéphane Breuils, Vincent Nozick, Laurent Fuchs, Akihiro Sugimoto
Cubic Curves and Cubic Surfaces from Contact Points in Conformal Geometric Algebra

This work explains how to extend the standard conformal geometric algebra of the Euclidean plane in a novel way to describe cubic curves in the Euclidean plane from nine contact points or from the ten coefficients of their implicit equations. The Clifford algebra Cl(9, 7) over the real sixteen-dimensional vector space ℝ^{9,7} serves as the algebraic framework. These cubic curves can be intersected using the outer product based meet operation of geometric algebra. An analogous approach is explained for the description of and operations on cubic surfaces in three Euclidean dimensions, using Cl(19, 16) as the framework.

Eckhard Hitzer, Dietmar Hildenbrand

ENGAGE’19 Workshop Short Papers

Frontmatter
Non-parametric Realtime Rendering of Subspace Objects in Arbitrary Geometric Algebras

This paper introduces a novel visualization method for elements of arbitrary Geometric Algebras. The algorithm removes the need for a parametric representation, requires no precomputation, and produces high-quality images in real time. It visualizes the outer product null space (OPNS) of 2-dimensional manifolds directly and uses an isosurface approach to display 1- and 0-dimensional manifolds. A multi-platform, browser-based implementation is publicly available.

Steven De Keninck
Automatic Normal Orientation in Point Clouds of Building Interiors

Correct and consistent normal orientation is a fundamental problem in geometry processing. Applications such as feature detection and geometry reconstruction often rely on correctly oriented normals. Many existing approaches make severe assumptions on the input data or the topology of the underlying object that are not applicable to measurements of urban scenes. In contrast, our approach is specifically tailored to the challenging case of unstructured indoor point cloud scans of multi-story, multi-room buildings. We evaluate the correctness and speed of our approach on multiple real-world point cloud datasets.

Sebastian Ochmann, Reinhard Klein
Colour Image Segmentation by Region Growing Based on Conformal Geometric Algebra

We apply conformal geometric algebra (CGA) to classical algorithms for colour image segmentation. In particular, we modify the standard Prewitt quaternionic filter for edge detection and the region growing segmentation procedure, and use them simultaneously, which is made possible by the common CGA language, more precisely by the notion of a flat point in normalised and unnormalised form.

Jaroslav Hrdina, Radek Matoušek, Radek Tichý
GAC Application to Corner Detection Based on Eccentricity

The geometric algebra GAC offers the same efficiency for calculations with arbitrary conic sections that CGA provides for spheres. We exploit these properties in image processing, particularly in corner detection. Indeed, a point cluster is approximated by a conic section and, consequently, classified as points with an ellipse-like or hyperbola-like neighbourhood. Then, by means of eccentricity, we decide whether the points form a corner.

Jaroslav Hrdina, Aleš Návrat, Petr Vašík
Ray-Tracing Objects and Novel Surface Representations in CGA

Conformal Geometric Algebra (CGA) provides a unified representation of both geometric primitives and conformal transformations, and as such holds great promise in the field of computer graphics [1–3]. In this paper we implement a simple ray tracer in CGA with a Blinn-Phong lighting model and use it to examine ray intersections with surfaces generated by interpolating between objects [7]. An analytical method for finding the normal line to these interpolated surfaces is described. The expression is closely related to the concept of surface principal curvature from differential geometry and provides a novel way of describing the curvature of evolving surfaces.

Sushant Achawal, Joan Lasenby, Hugo Hadfield, Anthony Lasenby
Backmatter
Metadata
Title
Advances in Computer Graphics
Editors
Prof. Marina Gavrilova
Jian Chang
Prof. Nadia Magnenat Thalmann
Prof. Eckhard Hitzer
Prof. Hiroshi Ishikawa
Copyright Year
2019
Electronic ISBN
978-3-030-22514-8
Print ISBN
978-3-030-22513-1
DOI
https://doi.org/10.1007/978-3-030-22514-8
