
Computational Science – ICCS 2020

20th International Conference, Amsterdam, The Netherlands, June 3–5, 2020, Proceedings, Part VII

  • 2020
  • Book

About this Book

The seven-volume set LNCS 12137, 12138, 12139, 12140, 12141, 12142 and 12143 constitutes the proceedings of the 20th International Conference on Computational Science, ICCS 2020, held in Amsterdam, The Netherlands, in June 2020.* The total of 101 papers and 248 workshop papers presented in this book set were carefully reviewed and selected from 719 submissions (230 submissions to the main track and 489 submissions to the workshops). The papers were organized in topical sections named: Part I: ICCS Main Track; Part II: ICCS Main Track; Part III: Advances in High-Performance Computational Earth Sciences: Applications and Frameworks; Agent-Based Simulations, Adaptive Algorithms and Solvers; Applications of Computational Methods in Artificial Intelligence and Machine Learning; Biomedical and Bioinformatics Challenges for Computer Science; Part IV: Classifier Learning from Difficult Data; Complex Social Systems through the Lens of Computational Science; Computational Health; Computational Methods for Emerging Problems in (Dis-)Information Analysis; Part V: Computational Optimization, Modelling and Simulation; Computational Science in IoT and Smart Systems; Computer Graphics, Image Processing and

Table of Contents

Frontmatter

Simulations of Flow and Transport: Modeling, Algorithms and Computation

Frontmatter
Decoupled and Energy Stable Time-Marching Scheme for the Interfacial Flow with Soluble Surfactants

In this work, we develop an efficient energy stable scheme for the hydrodynamics-coupled phase-field surfactant model with variable densities. The thermodynamically consistent model consists of two Cahn–Hilliard-type equations and the incompressible Navier–Stokes equations. We use two scalar auxiliary variables to transform the nonlinear parts of the free energy functional into quadratic forms, which can then be treated efficiently and semi-implicitly. A splitting method based on pressure stabilization is used to solve the Navier–Stokes equations. Through subtle explicit-implicit treatments of the nonlinear convection and stress terms, we construct a first-order energy stable scheme for the two-phase system with soluble surfactants. The developed scheme is efficient and easy to implement. At each time step, the computations of the phase-field variables, the velocity, and the pressure are decoupled. We rigorously prove that the proposed scheme is unconditionally energy stable. Numerical results confirm that our scheme is accurate and energy stable.

Guangpu Zhu, Aifen Li
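
The scalar-auxiliary-variable (SAV) quadratization mentioned in the abstract above can be sketched as follows; the notation here is the generic textbook form of the technique, not necessarily the paper's exact formulation:

```latex
% SAV quadratization sketch: nonlinear free-energy part E_1(\phi) = \int_\Omega F(\phi)\,dx \ge -C_0
r(t) = \sqrt{E_1(\phi) + C_0}, \qquad
E(\phi, r) = \tfrac{1}{2}\,\|\nabla\phi\|^2 + r^2 - C_0 .
% Auxiliary ODE, exact at the continuous level:
\frac{dr}{dt} = \frac{1}{2\sqrt{E_1(\phi) + C_0}} \int_\Omega F'(\phi)\,\partial_t \phi \, dx .
% A first-order scheme evaluates F'(\phi^n) explicitly and (r^{n+1}, \phi^{n+1}) implicitly,
% so the nonlinearity enters only through quadratic terms and the discrete energy
% \tfrac{1}{2}\|\nabla\phi^{n+1}\|^2 + |r^{n+1}|^2 can be shown to be non-increasing.
```

Because the energy is quadratic in $(\phi, r)$, each time step reduces to linear solves, which is what makes such schemes efficient and unconditionally energy stable.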
A Numerical Algorithm to Solve the Two-Phase Flow in Porous Media Including Foam Displacement

This work is dedicated to simulating the Enhanced Oil Recovery (EOR) process of foam injection in a fully saturated reservoir. The presence of foam in the gas-water mixture acts to control the mobility of the gas phase, helping to reduce the effects of fingering and gravity override. A fractional flow formulation based on global pressure is used, resulting in a system of Partial Differential Equations (PDEs) that describes two coupled problems of distinct kinds: elliptic and hyperbolic. The numerical methodology is based on splitting the system of equations into two sub-systems that group equations of the same kind, and on applying a hybrid finite element method to solve the elliptic problem and a high-order finite volume method to solve the hyperbolic equations. Numerical results show good efficiency of the algorithm, as well as the remarkable ability of the foam to increase reservoir sweep efficiency by reducing gravity override and fingering effects.

Filipe Fernandes de Paula, Thiago Quinelato, Iury Igreja, Grigori Chapiro
A Three-Dimensional, One-Field, Fictitious Domain Method for Fluid-Structure Interactions

In this article we consider the three-dimensional numerical simulation of Fluid-Structure Interaction (FSI) problems involving large solid deformations. The one-field Fictitious Domain Method (FDM) is introduced in the framework of an operator splitting scheme. Three-dimensional numerical examples are presented in order to validate the proposed approach: demonstrating energy stability and mesh convergence, and extending two-dimensional benchmarks from the FSI literature. New three-dimensional benchmarks are also proposed.

Yongxing Wang, Peter K. Jimack, Mark A. Walkley
Multi Axes Sliding Mesh Approach for Compressible Viscous Flows

To compute flows around a body with a rotating or movable part, such as a tiltrotor aircraft, the multi-axes sliding mesh approach has been proposed. This approach is based on the unstructured moving-grid finite volume method, which adopts a space-time unified domain for the control volume and can therefore express such a moving mesh accurately. However, due to the difficulty of mesh control in viscous flows and the need to maintain computational stability, its use has so far been restricted to inviscid flows. In this paper, the multi-axes sliding mesh approach is extended to viscous flows in order to understand detailed flow phenomena around a complicated moving body. The strategies used to solve several issues not present in inviscid flow computations are described. To show the validity of the approach in viscous flows, it was applied to the flow field of a sphere in uniform flow. Multiple domains that slide individually were placed around the sphere, and it was confirmed that the sliding mesh did not affect the flow field. The approach is therefore expected to be applicable to practical viscous flow computations.

Masashi Yamakawa, Satoshi Chikaguchi, Shinichi Asao, Shotaro Hamato
Monolithic Arbitrary Lagrangian–Eulerian Finite Element Method for a Multi-domain Blood Flow–Aortic Wall Interaction Problem

In this paper, an arbitrary Lagrangian–Eulerian (ALE) finite element method in the monolithic approach is developed for a multi-domain blood flow–aortic wall interaction problem with multiple moving interfaces. An advanced fully discrete ALE-mixed finite element approximation is defined to solve the present fluid–structure interaction (FSI) problem in the cardiovascular environment, in which two fields of structures are involved with two fields of fluid flow, inducing three moving interfaces in between for the interactions. Numerical experiments are carried out for a realistic cardiovascular problem with the implantation of vascular stent graft to demonstrate the strength of our developed ALE-mixed finite element method.

Pengtao Sun, Chen-Song Zhang, Rihui Lan, Lin Li
Morphing Numerical Simulation of Incompressible Flows Using Seamless Immersed Boundary Method

In this paper, we propose a morphing simulation method on the Cartesian grid in order to realize flow simulations for shape optimization at lower cost and with greater versatility. In conventional morphing simulations, a simulation is performed while deforming the model shape and the computational grid using a boundary-fitted grid. However, the computational grid must be deformed each time, and it is difficult to apply this to models with complicated shapes. The present method requires neither grid regeneration nor deformation. In order to apply the present method to models with various shapes on the Cartesian grid, the seamless immersed boundary method (SIBM) is used. Normally, when the SIBM is applied to a deforming object, the velocity condition on the boundary is imposed by the moving velocity of the boundary. In the present method, the velocity condition is imposed as zero velocity even while the object is deformed, because the purpose of the present morphing simulation is to obtain results for a stationary object. In order to verify the present method, two-dimensional simulations of the flow around an object were performed. To obtain the drag coefficients of multiple models, the object was deformed in turn from the initial model to each model in the present morphing simulation. Using the present method, the drag coefficients of several models could be obtained in a single simulation. It is concluded that flow simulation for shape optimization can be performed very easily using the present morphing simulation method.

Kyohei Tajiri, Mitsuru Tanaka, Masashi Yamakawa, Hidetoshi Nishida
deal.II Implementation of a Two-Field Finite Element Solver for Poroelasticity

This paper presents a finite element solver for poroelasticity in the 2-field approach and its implementation on the deal.II platform. Numerical experiments on benchmarks are presented to demonstrate the accuracy and efficiency of this new solver.

Zhuoran Wang, Jiangguo Liu
Numerical Investigation of Solute Transport in Fractured Porous Media Using the Discrete Fracture Model

In this paper, we investigate flow with solute transport in fractured porous media. The system of governing equations consists of the continuity equation, Darcy's law, and the concentration equation. A discrete-fracture model (DFM) has been developed to describe the problem under consideration. A multiscale time-splitting method is used to handle different time-step sizes for the different physics, such as pressure and concentration. Some numerical examples are presented to show the efficiency of the multiscale time-splitting approach.

Mohamed F. El-Amin, Jisheng Kou, Shuyu Sun
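
The multiscale time-splitting idea in the abstract above, one coarse step for the slow physics wrapped around several finer sub-steps for the fast physics, can be sketched as follows. The step functions are placeholders, not the authors' actual discretization:

```python
def advance_coarse_step(p, c, dt_p, n_sub, pressure_step, transport_step):
    """Advance one coarse time step of size dt_p.

    The pressure (slow physics) is updated once; the concentration
    (fast physics) is sub-cycled with the finer step dt_p / n_sub,
    using the freshly updated pressure field.
    """
    p_new = pressure_step(p, c, dt_p)
    dt_c = dt_p / n_sub
    for _ in range(n_sub):
        c = transport_step(p_new, c, dt_c)
    return p_new, c
```

With, say, a steady pressure and an explicit decay update for the concentration, ten sub-steps of size 0.01 reproduce one coarse step of size 0.1 at the finer accuracy, which is the whole point of the splitting.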
Adaptive Multiscale Model Reduction for Nonlinear Parabolic Equations Using GMsFEM

In this paper, we propose a coupled Discrete Empirical Interpolation Method (DEIM) and Generalized Multiscale Finite Element Method (GMsFEM) to solve nonlinear parabolic equations, with application to the Allen–Cahn equation. The Allen–Cahn equation is a model for nonlinear reaction-diffusion processes and is often used to model interface motion in time, e.g. phase separation in alloys. The GMsFEM allows solving multiscale problems at a reduced computational cost by constructing a reduced-order representation of the solution on a coarse grid. In [14], it was shown that the GMsFEM provides a flexible tool to solve multiscale problems by constructing appropriate snapshot, offline and online spaces. In this paper, we solve a time-dependent problem, where online enrichment is used. The main contribution is a comparison of different online enrichment methods. More specifically, we compare uniform online enrichment with adaptive methods, and we also compare two kinds of adaptive methods. Furthermore, we use DEIM, a dimension reduction method, to reduce the complexity of evaluating the nonlinear terms. Our results show that DEIM can approximate the nonlinear term without significantly increasing the error. Finally, we apply our proposed method to the Allen–Cahn equation.

Yiran Wang, Eric Chung, Shubin Fu
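
The DEIM step referred to above, selecting a few sample points at which to evaluate the nonlinear term and interpolating everywhere else through a reduced basis, can be sketched in its standard greedy form. This is a generic textbook-style sketch, not the authors' implementation:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection from a reduced basis U (n x m)."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # Interpolate column j at the points chosen so far; the largest
        # residual entry determines the next interpolation point.
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def deim_reconstruct(U, idx, f_samples):
    """Recover the full nonlinear term from its values at the DEIM points."""
    return U @ np.linalg.solve(U[idx, :], f_samples)
```

Given a basis with m columns, only m entries of the nonlinear term need to be evaluated per step, which is where the complexity reduction comes from.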
Parallel Shared-Memory Isogeometric Residual Minimization (iGRM) for Three-Dimensional Advection-Diffusion Problems

In this paper, we present a residual minimization method for three-dimensional isogeometric analysis simulations of advection-diffusion equations. First, we apply an implicit time integration scheme for the three-dimensional advection-diffusion equation, namely the Douglas-Gunn time integration scheme. Second, in every time step, we apply the residual minimization method to stabilize the numerical solution. Third, we use isogeometric analysis with B-spline basis functions for the numerical discretization. We perform alternating-directions splitting of the resulting system of linear equations, so the computational cost of the sequential LU factorization is linear, $$\mathcal{O}(N)$$. We test our method on a three-dimensional simulation of the advection-diffusion problem. We parallelize the solver for shared-memory machines using the GALOIS framework.

Marcin Łoś, Judit Munoz-Matute, Krzysztof Podsiadło, Maciej Paszyński, Keshav Pingali
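
The linear cost claimed above rests on the fact that, after alternating-directions splitting, the system matrix factors into a Kronecker product of small one-dimensional matrices, so the global solve reduces to sweeps of 1D solves along each axis. A minimal dense-matrix sketch of that structure (the actual solver works with banded B-spline matrices and their LU factorizations):

```python
import numpy as np

def solve_along_axis(A, U, axis):
    """Apply A^{-1} along one axis of the 3D array U."""
    U = np.moveaxis(U, axis, 0)
    shape = U.shape
    U = np.linalg.solve(A, U.reshape(shape[0], -1)).reshape(shape)
    return np.moveaxis(U, 0, axis)

def kron3_solve(Ax, Ay, Az, F):
    """Solve (Ax ⊗ Ay ⊗ Az) vec(U) = vec(F) with three sweeps of 1D solves."""
    U = F
    for A, axis in ((Ax, 0), (Ay, 1), (Az, 2)):
        U = solve_along_axis(A, U, axis)
    return U
```

Each sweep touches every grid value once with a small 1D factorization, so the total cost grows linearly with the number of unknowns instead of the cubic cost of a general 3D direct solve.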
Numerical Simulation of Heat Transfer in an Enclosure with Time-Periodic Heat Generation Using Finite-Difference Method

This paper reports a numerical investigation of a highly coupled system of partial differential equations simulating the fluid flow and heat transfer in a large-scale enclosure with time-periodic heat generation. The bottom wall of the enclosure is insulated, and heat exchange with the environment is modeled at the other external boundaries. The heater with time-periodic heat generation is located at the bottom of the enclosure. The internal surfaces of both the heater and the walls are assumed to be gray. Air is the working fluid and the Rayleigh number is 10⁹. To solve the governing equations in dimensionless vorticity – stream function – temperature variables, the finite difference method has been used. The developed model has been validated through a comparison with data of other authors. The effect of surface emissivity and periodic heat generation on the Nusselt numbers and on both the stream function and temperature distributions has been investigated. The results show that the influence of thermal radiation on the total thermal transmission increases with the surface emissivity of the walls and heater. The present numerical method can be applied to several engineering problems, such as the design of passive cooling systems and the simulation of heat transfer in building constructions.

Igor Miroshnichenko, Mikhail Sheremet
Development of an Object-Oriented Programming Tool Based on FEM for Numerical Simulation of Mineral-Slurry Transport

The early stages of the development of a finite element method (FEM) based computational tool for numerically simulating mineral-slurry transport involving both Newtonian and non-Newtonian flows are described in this work. The rationale behind the conception, design and implementation of the referred object-oriented programming tool is thus initially highlighted. A particular emphasis is put on several architectural aspects accounted for, and on the object class hierarchies defined, during the development of the tool. Next, one of the main modules composing the tool under development is further described. Finally, as a means of illustration, the use of the FEM-based tool for simulating two-dimensional laminar flows is discussed. More specifically, canonical configurations widely studied in the past are firstly accounted for. A more practical application involving the simulation of a mineral-slurry handling device is then studied using the power-law rheological model. The results from the simulations carried out highlight the usefulness of the tool for realistically predicting the associated flow behavior. The FEM-based tool discussed in this work will be used in the future for carrying out high-fidelity numerical simulations of turbulent multiphase flows including fluid-particle interactions.

Sergio Peralta, Jhon Cordova, Cesar Celis, Danmer Maza
Descending Flight Simulation of Tiltrotor Aircraft at Different Descent Rates

Helicopters and tiltrotor aircraft are known to fall into an unstable state called the vortex ring state when they descend rapidly. This paper presents a six-degrees-of-freedom descending flight simulation of a tiltrotor aircraft, represented by the V-22 Osprey, considering the interaction between the fluid and a rigid body. That is, the aircraft affects the surrounding flow field by rotating its rotors and flies with the generated force as thrust; similarly, the orientation of the airframe is controlled by the aerodynamic force generated by manipulating its shape. This numerical analysis is a complicated moving-boundary problem involving the motion of the airframe and the rotation of the rotors. As a numerical approach, the Moving Computational Domain (MCD) method in combination with the multi-axis sliding mesh approach is adopted. In the MCD method, the whole computational domain moves with the objects in the domain; fluid flow around the objects is then generated by the movement of the boundaries. In addition, this method removes computational space restrictions, allowing the aircraft to move freely within the computational space regardless of the size of the computational grid. The multi-axis sliding mesh approach allows rotating bodies to be placed in a computational grid. Using the above approach, flight simulations at two different descent rates are performed to reveal the behavior of a tiltrotor aircraft and the state of the surrounding flow field.

Ayato Takii, Masashi Yamakawa, Shinichi Asao
The Quantization Algorithm Impact in Hydrological Applications: Preliminary Results

A computationally efficient surface-to-groundwater coupled hydrological model is being developed based on the Extended Cellular Automata (XCA) formalism. The three-dimensional unsaturated flow model was the first to be designed and implemented in OpenCAL. Here, the response of the model to small variations of the quantization threshold has been assessed; this threshold is the parameter of OpenCAL's quantization algorithm used to evaluate a cell's steady-state condition. An unsaturated flow test case was considered for which the elapsed times of both the non-quantized execution and the execution run with the quantization threshold set to zero (with respect to the moisture content variable) had already been evaluated. The model response has been assessed in terms of both accuracy and computational performance in the case of a hybrid MPI/OpenMP execution. Results point out that a very good tradeoff between accuracy and computational performance can be achieved, allowing for a considerable speed-up of the model against a very limited loss of precision.

Alessio De Rango, Luca Furnari, Donato D’Ambrosio, Alfonso Senatore, Salvatore Straface, Giuseppe Mendicino
An Expanded Mixed Finite Element Method for Space Fractional Darcy Flow in Porous Media

In this paper an expanded mixed formulation is introduced to solve the two-dimensional space fractional Darcy flow in porous media. By introducing an auxiliary vector, we derive a new mixed formulation whose well-posedness can be established. Then the locally mass-conservative expanded mixed finite element method is applied for the solution. Numerical results are shown to verify the efficiency of the proposed algorithm.

Huangxin Chen, Shuyu Sun
Prediction of the Free Jet Noise Using Quasi-gas Dynamic Equations and Acoustic Analogy

The paper is focused on the numerical simulation of the acoustic properties of free jets from a circular nozzle at low and moderate Reynolds numbers. The near field of the compressible jet flow is calculated using QGDFoam, a solver implementing the developed regularized (quasi-gas dynamic) algorithms. Acoustic noise is computed for jets with parameters M = 0.9, Re = 3600 and M = 2.1, Re = 70000. The acoustic pressure in the far field is predicted using the Ffowcs Williams and Hawkings analogy implemented in the libAcoustics library based on the OpenFOAM software package. The computed properties of the flow and acoustic fields are compared with experimental data. The flow structures are characterized by the development of Kelvin-Helmholtz instability waves, which lead to an energy outflux in the radial direction. Their further growth is accompanied by the formation of large- and small-scale eddies leading to the generation of acoustic noise. The results show that for the selected jets the highest levels of generated noise are obtained at angles around 30°, which agrees well with experimental data.

Andrey Epikhin, Matvey Kraposhin
Simulation Based Exploration of Bacterial Cross Talk Between Spatially Separated Colonies in a Multispecies Biofilm Community

We present a simple mesoscopic model for bacterial cross-talk between growing biofilm colonies. The simulation setup mimics a novel microfluidic biofilm growth reactor which allows a 2D description. The model is a stiff quasilinear system of diffusion-reaction equations with both a super-diffusion singularity and a degeneracy (as in the porous medium equation), which leads to the formation of sharp interfaces with finite speed of propagation and gradient blow-up. We use a finite volume method with arithmetic flux averaging and a time-adaptive stiff time integrator. We find that signal and nutrient transport between colonies can greatly control and limit biofilm response to induction signals, leading to spatially heterogeneous biofilm behavior.

Pavel Zarva, Hermann J. Eberl
Massively Parallel Stencil Strategies for Radiation Transport Moment Model Simulations

The radiation transport equation is a mesoscopic equation in a high-dimensional phase space. Moment methods approximate it via a system of partial differential equations in traditional space-time. One challenge is the high computational intensity due to large vector sizes (1,600 components for $$P_{39}$$) in each spatial grid point. In this work, we considerably extend the calculable domain size in 3D simulations by implementing the StaRMAP methodology within the massively parallel HPC framework NAStJA, which is designed to use current supercomputers efficiently. We apply several optimization techniques, including a new memory layout and explicit SIMD vectorization. We showcase a simulation with 200 billion degrees of freedom, and argue how the implementation can be extended and used in many scientific domains.

Marco Berghoff, Martin Frank, Benjamin Seibold
Hybrid Mixed Methods Applied to Miscible Displacements with Adverse Mobility Ratio

We propose stable and locally conservative hybrid mixed finite element methods to approximate the Darcy system and the convection-diffusion problem, presented in mixed form, in order to solve miscible displacements involving convective flows with adverse mobility ratio. The stability of the proposed formulations is achieved through the choice of non-conforming Raviart-Thomas spaces combined with an upwind scheme for the convection-dominated regimes, where the continuity conditions between elements are weakly enforced by the introduction of Lagrange multipliers. Thus, the primal variables of both systems can be condensed at the element level, leading to a positive-definite global problem involving only the degrees of freedom associated with the multipliers. Compared to the classical conforming Raviart-Thomas approach, this reduces the computational cost because, in both problems, the Lagrange multiplier is associated with a scalar field. In this context, a staggered algorithm is employed to decouple the Darcy problem from the convection-diffusion mixed system. However, both formulations are solved at the same time step, and the time discretization adopted for the convection-diffusion problem is the implicit backward Euler method. Numerical results show optimal convergence rates for all variables and the capacity to capture the formation and propagation of viscous fingering, as seen in comparisons of Hele-Shaw cell simulations with experimental results from the literature.

Iury Igreja, Gabriel de Miranda

Smart Systems: Bringing Together Computer Vision, Sensor Networks and Machine Learning

Frontmatter
Learn More from Context: Joint Modeling of Local and Global Attention for Aspect Sentiment Classification

Aspect sentiment classification identifies the sentiment polarity of a target that appears in a sentence. Its key point is to capture valuable information from the sentence. Existing methods have acknowledged the importance of the relationship between the target and the sentence. However, these approaches only focus on local information about the target, such as the positional relationship and the semantic similarity between the words in a sentence and the target. Moreover, global information about the interaction of words in a sentence and their influence on the final prediction of sentiment polarity is ignored in related works. To tackle this issue, the present paper proposes Joint Modeling of Local and Global Attention (LGAJM), with the following two aspects: (1) the study develops a position-based attention network concentrating on local information, namely the semantic similarity and position information of the target; (2) in order to fetch global information, such as context information and the interaction between words in sentences, a self-attention network is introduced. Besides, a BiGRU-based gating mechanism is proposed to weight the outputs of these two attention networks. The model is evaluated on two datasets, laptop and restaurant, from SemEval 2014. Experimental results demonstrate the high effectiveness of the proposed method in aspect sentiment classification.

Siyuan Wang, Peng Liu, Jinqiao Shi, Xuebin Wang, Can Zhao, Zelin Yin
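
The gating idea above, blending the outputs of the local (position-based) and global (self-) attention branches, can be illustrated with a simplified element-wise sigmoid gate. The paper uses a BiGRU-based gate; the names and shapes below are purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(h_local, h_global, W_g, b_g):
    """Blend two attention summaries with a learned element-wise gate.

    h_local, h_global : (d,) summary vectors from the two attention branches
    W_g : (2d, d) gate weights, b_g : (d,) gate bias  (illustrative shapes)
    """
    g = sigmoid(np.concatenate([h_local, h_global]) @ W_g + b_g)
    return g * h_local + (1.0 - g) * h_global
```

With untrained (zero) gate parameters the gate outputs 0.5 everywhere and the fusion degenerates to a plain average; training lets the gate decide, feature by feature, how much local versus global evidence to keep.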
ArtPDGAN: Creating Artistic Pencil Drawing with Key Map Using Generative Adversarial Networks

Much research focuses on image transfer using deep learning, especially with generative adversarial networks (GANs). However, no existing methods can produce high-quality artistic pencil drawings. First, artists do not convert all the details of a photo into a drawing; instead, they tend to magnify some special parts of the items and cut others down. Second, the elements in artistic drawings may not be located precisely, and the lines may not strictly follow the features of the items. To address the above challenges, we propose ArtPDGAN, a novel GAN-based framework that combines an image-to-image network to generate a key map. We then use the key map as an important part of the input to generate artistic pencil drawings. The key map can show the key parts of the items to guide the generator. We use a paired but unaligned artistic drawing dataset containing high-resolution photos of items and corresponding professional artistic pencil drawings to train ArtPDGAN. Results of our experiments show that the proposed framework performs excellently against existing methods in terms of similarity to artists' work and user evaluations.

SuChang Li, Kan Li, Ilyes Kacher, Yuichiro Taira, Bungo Yanatori, Imari Sato
Interactive Travel Aid for the Visually Impaired: from Depth Maps to Sonic Patterns and Verbal Messages

This paper presents user trials of a prototype micro-navigation aid for the visually impaired. The main advantage of the system is its small form factor. The device consists of a Structure Sensor depth camera, a smartphone, a remote controller and a pair of headphones. An original feature of the system is its interactivity: the user can activate different space-scanning modes and different sound presentation schemes for 3D scenes on demand. The results of the trials are documented by timeline logs recording the activation of the different interactive modes. The aim of the first trial was to test the system's capability for helping the visually impaired avoid obstacles; the second tested its efficiency at detecting open spaces. The two visually impaired testers performed the trials successfully, although the times required to complete the tasks seem rather long. Nevertheless, the trials show the potential usefulness of the system as a navigational aid and have enabled us to introduce numerous improvements to the tested prototype.

Piotr Skulimowski, Pawel Strumillo
Ontology-Driven Edge Computing

The paper is devoted to new aspects of an ontology-based approach to controlling the behavior of Edge Computing devices. Although ontology-driven solutions are widely used to develop mechanisms that adapt to the specifics of the Internet of Things (IoT) and ubiquitous computing ecosystems, the problem of creating full-fledged, easy-to-handle and efficient ontology-driven Edge Computing remains unsolved. We propose a new approach that utilizes the ontology reasoning mechanism directly on extremely resource-constrained Edge devices, not in the Fog or Cloud. Thanks to this, on-the-fly modification of device functions, as well as ad-hoc monitoring of intermediate data processed by the device and interoperability within the IoT, are enabled and become more intelligent. Moreover, smart on-demand automated transformation of Machine-to-Machine IoT to Human-Centric IoT becomes possible. We demonstrate the practical usefulness of our solution by implementing an ontology-driven Smart Home edge device that helps locate lost things.

Konstantin Ryabinin, Svetlana Chuprina
Combined Metrics for Quality Assessment of 3D Printed Surfaces for Aesthetic Purposes: Towards Higher Accordance with Subjective Evaluations

Objective quality assessment for 3D printing purposes may be considered one of the most useful applications of machine vision in smart monitoring related to the development of Industry 4.0 solutions. During recent years several approaches have been proposed that assume observation of the side surfaces, mainly based on the analysis of the regularity of the visible patterns which represent the consecutive printed layers. These methods, based on the use of general purpose image quality assessment (IQA) metrics, the Hough transform, entropy and texture analysis, make it possible to classify printed samples, independently of the filament's colour, into low- and high-quality classes, with the use of photos or 3D scans of the side surfaces. The next step of research, investigated in this paper, is the combination of the various proposed approaches into a combined metric that is, ideally, highly correlated with subjective opinions. Since the correlation of single metrics developed mainly for classification is relatively low, their combination makes it possible to achieve much better results, verified using an original, newly developed database containing 107 captured images and 3D scans of 3D printed surfaces with various colours and local distortions caused by external factors, together with Mean Opinion Scores (MOS) gathered from independent observers. The obtained results are promising and may be a starting point for further research towards the optimisation of the newly developed metrics for the automatic assessment of 3D printed surfaces, mainly for aesthetic purposes.

Jarosław Fastowicz, Piotr Lech, Krzysztof Okarma
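
A combined metric of the kind described above, a weighted combination of single IQA metric scores fitted against MOS values, can be sketched as a simple least-squares fit. This is illustrative only; the paper's actual combination and optimisation procedure may differ:

```python
import numpy as np

def fit_combined_metric(metric_scores, mos):
    """Fit weights w and bias b minimising ||metric_scores @ w + b - mos||^2.

    metric_scores : (n_samples, n_metrics) individual metric values
    mos           : (n_samples,) subjective Mean Opinion Scores
    """
    X = np.hstack([metric_scores, np.ones((metric_scores.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, mos, rcond=None)
    return coef[:-1], coef[-1]

def combined_metric(metric_scores, w, b):
    """Evaluate the fitted linear combination on new samples."""
    return metric_scores @ w + b
```

The fitted weights directly expose how much each single metric contributes to agreement with the subjective scores, and the resulting combination can then be validated by its correlation with held-out MOS values.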
Path Markup Language for Indoor Navigation

Indoor navigation is critical in many tasks, such as firefighting, emergency medical response, and SWAT response, where GPS signals are not available. Prevailing approaches such as beacons, radio signal triangulation, SLAM, and IMU methods are either expensive or impractical in extreme conditions, e.g. poor visibility and sensor drift. In this study, we develop a path markup language for pre-planning routes and interacting with the user on a mobile device for real-time indoor navigation. The interactive map is annotated with walkable paths and landmarks that can be used for inertial motion sensor-based navigation. The wall-following and landmark-checking algorithms help to cancel drift errors along the way. Our preliminary experiments show that the approach is affordable and efficient for generating annotated building floor path maps, and that it is feasible to use the map for real-time indoor navigation on a mobile device with motion sensors. The method can be applied to intelligent helmets and mobile phones, with potential applications for first responders, building tour guides, and assistance for visually impaired users.

Yang Cai, Florian Alber, Sean Hackett
Smart Fire Alarm System with Person Detection and Thermal Camera

Fire alarms are crucial for the safety of life and property in many settings. A good fire alarm system should be small, low-cost and effective at preventing fire accidents. In this paper we introduce a smart fire alarm system, using the kitchen as a representative scenario. The system captures both thermal and optical videos for temperature monitoring and person detection, which are further used to predict potential fire accidents and avoid false alarms. Thermal videos are used to record the temperature change in regions of interest, for example, cookware. The YOLOv3-tiny algorithm is modified for person detection and can be iteratively improved with the hard examples gathered by the system. To implement the system on an edge device instead of a server, we propose a high-efficiency neural network inference framework called TuringNN. Comprehensive rules enable the system to respond appropriately to different situations. The proposed system has proved effective both in experiments and in numerous complex practical applications.

Yibing Ma, Xuetao Feng, Jile Jiao, Zhongdong Peng, Shenger Qian, Hui Xue, Hua Li
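A minimal sketch of the kind of rule-based decision layer the abstract describes, combining a temperature signal with person detection; the function, thresholds, and rule names are illustrative assumptions, not taken from the paper:

```python
# Toy decision rules combining the two signals described in the abstract:
# temperature in a region of interest and person detection. All thresholds
# and rule names are illustrative assumptions, not taken from the paper.
def alarm_state(temps_c, person_present, limit=250.0, rise_per_min=40.0):
    overheating = temps_c[-1] > limit
    rising_fast = len(temps_c) >= 2 and temps_c[-1] - temps_c[-2] > rise_per_min
    if overheating and not person_present:
        return "alarm"      # unattended cookware that is already too hot
    if overheating or rising_fast:
        return "warning"    # attended but hot, or heating up quickly
    return "ok"

states = [alarm_state([100.0, 300.0], person_present=False),
          alarm_state([100.0, 120.0], person_present=True)]
```

Separating the rule logic from the sensing pipeline in this way makes it easy to adjust the response to different kitchen situations without retraining the detector.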
Data Mining for Big Dataset-Related Thermal Analysis of High Performance Computing (HPC) Data Center

The greening of data centers can be achieved through energy savings in two significant areas: compute systems and cooling systems. A reliable cooling system is necessary to produce a persistent flow of cold air to cool the servers under increasing computational load. The heat dissipated by the servers puts a strain on the cooling systems, so it is necessary to identify hotspots that frequently occur in the server zones. We do this by applying data mining techniques to a large dataset of thermal characteristics from the ENEA High-Performance Computing data center, namely Cresco 6. This work presents an algorithm that clusters hotspots, with the goal of reducing a data center's large thermal gradient caused by the uneven distribution of server-dissipated waste heat, thereby increasing cooling effectiveness.

Davide De Chiara, Marta Chinnici, Ah-Lian Kor
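As a hedged illustration of thermal clustering in the spirit of the abstract, the sketch below groups per-server temperature readings with a toy one-dimensional k-means; the values, the choice of k-means, and k=2 are illustrative assumptions, not details from the paper:

```python
import random

# Toy 1-D k-means over per-server temperatures (values illustrative);
# the paper clusters richer multi-dimensional thermal data.
def kmeans_1d(values, k=2, iters=20, rng=random.Random(1)):
    centers = rng.sample(values, k)          # initial centers from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each reading to its nearest center
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        # recompute centers as cluster means (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

temps = [21.5, 22.0, 21.8, 30.2, 31.0, 29.8, 22.1]   # hotspot near 30 degrees C
centers, clusters = kmeans_1d(temps, k=2)
```

The high-temperature cluster identifies the hotspot zone whose servers would be prioritised for cooling or load redistribution.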
A Comparison of Multiple Objective Algorithms in the Context of a Dial a Ride Problem

In their operations, private chauffeur companies have to solve variations of the multiple objective dial-a-ride problem. The number and type of restrictions make the problem extremely intricate and, when done manually, it requires specialized staff with deep knowledge of the company's modus operandi and of the environment in which the service takes place. Nevertheless, the scheduling can be automated by means of computational methods, making it possible to deliver solutions faster and, possibly, optimized. In this context, this paper compares six algorithms applied to a multiple objective dial-a-ride problem, using data from a company mainly operating in the Algarve, Portugal. The results show that $$\epsilon $$-MOEA outperforms the other algorithms tested, namely NSGA-II, NSGA-III, $$\epsilon $$-NSGA-II, SPEA2, and PESA2.

Pedro M. M. Guerreiro, Pedro J. S. Cardoso, Hortênsio C. L. Fernandes

Software Engineering for Computational Science

Frontmatter
Lessons Learned in a Decade of Research Software Engineering GPU Applications

After years of using Graphics Processing Units (GPUs) to accelerate scientific applications in fields as varied as tomography, computer vision, climate modeling, digital forensics, geospatial databases, particle physics, radio astronomy, and localization microscopy, we noticed a number of technical, socio-technical, and non-technical challenges that Research Software Engineers (RSEs) may run into. While some of these challenges, such as managing different programming languages within a project, or having to deal with different memory spaces, are common to all software projects involving GPUs, others are more typical of scientific software projects. Among these challenges we include changing resolutions or scales, maintaining an application over time and making it sustainable, and evaluating both the obtained results and the achieved performance.

Ben van Werkhoven, Willem Jan Palenstijn, Alessio Sclocco
Unit Tests of Scientific Software: A Study on SWMM

Testing helps assure software quality by executing the program and uncovering bugs. Scientific software developers often find it challenging to carry out systematic and automated testing due to factors such as inherent model uncertainties and complex floating-point computations. In this paper we report a manual analysis of the unit tests written by the developers of the Storm Water Management Model (SWMM). The results show that the 1,458 SWMM tests achieve 54.0% code coverage and 82.4% user manual coverage. We also observe a "getter-setter-getter" testing pattern in the SWMM unit tests. Based on these results, we offer insights to improve test development and coverage.

Zedong Peng, Xuanyi Lin, Nan Niu
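The "getter-setter-getter" pattern the authors observe can be sketched as follows; the Model class and parameter names are hypothetical stand-ins, not the real SWMM API:

```python
import unittest

# Minimal stand-in for a simulation engine's parameter interface; the
# class and parameter names are hypothetical, not the real SWMM API.
class Model:
    def __init__(self):
        self._params = {"flow_units": "CFS"}

    def get_param(self, key):
        return self._params[key]

    def set_param(self, key, value):
        self._params[key] = value

class GetterSetterGetterTest(unittest.TestCase):
    def test_flow_units(self):
        m = Model()
        # getter: confirm the documented default
        self.assertEqual(m.get_param("flow_units"), "CFS")
        # setter: change the value
        m.set_param("flow_units", "LPS")
        # getter again: confirm the change took effect
        self.assertEqual(m.get_param("flow_units"), "LPS")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(GetterSetterGetterTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The pattern tests the round trip through a setter rather than either accessor in isolation, which is why it recurs in test suites for stateful simulation engines.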
NUMA-Awareness as a Plug-In for an Eventify-Based Fast Multipole Method

Following the trend towards Exascale, today's supercomputers consist of increasingly complex and heterogeneous compute nodes. To exploit the performance of these systems, research software in HPC needs to keep up with the rapid development of hardware architectures. Since manually tuning software to each and every architecture is neither sustainable nor viable, we aim to tackle this challenge through appropriate software design. In this article, we aim to improve the performance and sustainability of FMSolvr, a parallel Fast Multipole Method for Molecular Dynamics, by adapting it to Non-Uniform Memory Access (NUMA) architectures in a portable and maintainable way. The parallelization of FMSolvr is based on Eventify, an event-based tasking framework we co-developed with FMSolvr. We describe a layered software architecture that separates the Fast Multipole Method from its parallelization. The focus of this article is on the development and analysis of a reusable NUMA module that improves performance while keeping both layers separated to preserve maintainability and extensibility. By means of the NUMA module we introduce diverse NUMA-aware data distribution, thread pinning and work stealing policies for FMSolvr. During the performance analysis, the modular design of the NUMA module proved advantageous, since it facilitated the combination, interchange and redesign of the developed policies. The performance analysis reveals that these policies reduce the runtime of FMSolvr by $$21\%$$, from 1.48 ms to 1.16 ms.

Laura Morgenstern, David Haensel, Andreas Beckmann, Ivo Kabadshow
Boosting Group-Level Synergies by Using a Shared Modeling Framework

Modern software engineering has established sophisticated tools and workflows that enable distributed development of high-quality software. Here, we present our experiences in adopting these workflows to collectively develop, maintain, and use research software, specifically: a modeling framework for complex and evolving systems. We exemplify how sharing this modeling framework within our research group helped convey software engineering best practices, fostered cooperation, and boosted synergies. Together, these experiences illustrate that adopting modern software engineering workflows is feasible in a dynamically changing academic context, and show how these practices facilitate reliability, reproducibility, reusability, and sustainability of research software, ultimately improving the quality of the resulting scientific output.

Yunus Sevinchan, Benjamin Herdeanu, Harald Mack, Lukas Riedel, Kurt Roth
Testing Research Software: A Case Study

Background: The increasing importance of software for the conduct of various types of research raises the necessity of proper testing to ensure correctness. The unique characteristics of research software produce challenges in the testing process that require attention. Aims: The goal of this paper is therefore to share the experience of implementing a testing framework using a statistical approach for a specific type of research software, i.e. non-deterministic software. Method: Using the ParSplice research software project as a case, we implemented a testing framework based on a statistical testing approach called the Multinomial Test. Results: Using the new framework, we were able to test the ParSplice project and demonstrate correctness in a situation where traditional methodical testing approaches were not feasible. Conclusions: This study opens up the possibility of using statistical testing approaches for research software, which can overcome some of the inherent challenges involved in testing non-deterministic research software.

Nasir U. Eisty, Danny Perez, Jeffrey C. Carver, J. David Moulton, Hai Ah Nam

APE: A Command-Line Tool and API for Automated Workflow Composition

Automated workflow composition is bound to take the work with scientific workflows to the next level. On top of today’s comprehensive eScience infrastructure, it enables the automated generation of possible workflows for a given specification. However, functionality for automated workflow composition tends to be integrated with one of the many available workflow management systems, and is thus difficult or impossible to apply in other environments. Therefore we have developed APE (the Automated Pipeline Explorer) as a command-line tool and API for automated composition of scientific workflows. APE is easily configured to a new application domain by providing it with a domain ontology and semantically annotated tools. It can then be used to synthesize purpose-specific workflows based on a specification of the available workflow inputs, desired outputs and possibly additional constraints. The workflows can further be transformed into executable implementations and/or exported into standard workflow formats. In this paper we describe APE v1.0 and discuss lessons learned from applications in bioinformatics and geosciences.

Vedran Kasalica, Anna-Lena Lamprecht

Solving Problems with Uncertainties

Frontmatter
An Ontological Approach to Knowledge Building by Data Integration

This paper discusses the uncertainty in the automation of knowledge building from heterogeneous raw datasets. Ontologies play a critical role in this process by providing well-consolidated support for linking and semantically integrating datasets via interoperability, as well as semantic enrichment and annotation. By adopting Semantic Web technology, the resulting ecosystem is fully machine-consumable. However, while the manual alignment of concepts from different vocabularies is reasonable at a small scale, fully automatic mechanisms are required once the target system scales up, introducing significant uncertainty.

Salvatore Flavio Pileggi, Hayden Crain, Sadok Ben Yahia
A Simple Stochastic Process Model for River Environmental Assessment Under Uncertainty

We consider a new, simple stochastic single-species population dynamics model for understanding flow-regulated benthic algae blooms in an uncertain river environment: an engineering problem. The population dynamics are subject to regime-switching flow conditions, such that the population is effectively removed in a high-flow regime while it is not removed at all in a low-flow regime. The focus of this paper is a robust and mathematically rigorous statistical evaluation of the disutility caused by algae blooms under model uncertainty. We show that the evaluation is achieved if the optimality equation derived from a dynamic programming principle is solved; this equation is a coupled system of non-linear and non-local degenerate elliptic equations with a possibly discontinuous coefficient. We show that the system is solvable in continuous viscosity and asymptotic senses, and that its solutions can be approximated numerically by a convergent finite difference scheme, which we demonstrate with an example.

Hidekazu Yoshioka, Motoh Tsujimura, Kunihiko Hamagami, Yumi Yoshioka
A Posteriori Error Estimation via Differences of Numerical Solutions

In this work we address the problem of estimating the approximation errors that arise in the discretization of partial differential equations. For this, we take advantage of an ensemble of numerical solutions obtained by independent numerical algorithms. To obtain the approximation error, the differences between numerical solutions are treated in the framework of an inverse problem, posed in a variational statement with zero-order regularization. We analyse the ensemble of numerical results obtained by five OpenFOAM solvers for the inviscid compressible flow around a cone at zero angle of attack, and compare the approximation errors obtained by the inverse problem with the exact error, computed as the difference between the numerical solutions and a high-precision solution.

Aleksey K. Alekseev, Alexander E. Bondarev, Artem E. Kuvshinnikov
Global Sensitivity Analysis of Various Numerical Schemes for the Heston Model

The pricing of financial options is usually based on statistical sampling of the evolution of the underlying under a chosen model, using a suitable numerical scheme. It is widely accepted that using low-discrepancy sequences instead of pseudorandom numbers in most cases increases the accuracy, and it is important to understand and quantify the reasons for this effect. In this work, we use Global Sensitivity Analysis (GSA) to study one widely used model for the pricing of options, namely the Heston model. The Heston model is an important member of the family of stochastic volatility models, which have been found to better describe the observed behaviour of option prices in financial markets. By using a suitable numerical scheme, like those of Euler, Milstein, Kahl-Jäckel or Andersen, one has the flexibility needed to compute European, Asian or exotic options. In any case the problem of evaluating an option price can be considered a numerical integration problem. For the purposes of modelling and complexity reduction, one should distinguish between the model's nominal dimension and its effective dimension; another notion of "average dimension" has been found to be more practical from the computational point of view. The definitions and methods of evaluation of effective dimensions are based on computing Sobol' sensitivity indices, and a classification of functions based on their effective dimensions is also known. In the context of quantitative finance, GSA can be used to assess the efficiency of a particular numerical scheme. In this work we apply GSA based on Sobol' sensitivity indices to assess the interactions of the various dimensions in the above-mentioned schemes. We observe that GSA offers useful insight on how to maximize the advantages of using quasi-Monte Carlo (QMC) methods in these schemes.

Emanouil Atanassov, Sergei Kucherenko, Aneta Karaivanova
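One of the schemes named above, the Euler discretization of the Heston dynamics, can be sketched as follows with a full-truncation variance fix; every parameter value and the plain Monte Carlo setup are illustrative assumptions, not taken from the paper:

```python
import math
import random

# One full-truncation Euler path of the Heston model; all parameter
# values below are illustrative assumptions, not taken from the paper.
def heston_euler_path(s0=100.0, v0=0.04, r=0.01, kappa=1.5, theta=0.04,
                      xi=0.3, rho=-0.7, T=1.0, n_steps=252,
                      rng=random.Random(0)):
    dt = T / n_steps
    s, v = s0, v0
    for _ in range(n_steps):
        z1 = rng.gauss(0.0, 1.0)
        # correlate the two Brownian increments with coefficient rho
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        v_pos = max(v, 0.0)  # full truncation keeps the variance usable
        s *= math.exp((r - 0.5 * v_pos) * dt + math.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * math.sqrt(v_pos * dt) * z2
    return s

# Plain Monte Carlo price of a European call with strike 100
payoffs = [max(heston_euler_path() - 100.0, 0.0) for _ in range(2000)]
price = math.exp(-0.01 * 1.0) * sum(payoffs) / len(payoffs)
```

Replacing the pseudorandom draws with a low-discrepancy sequence is exactly the QMC substitution whose effectiveness the GSA of the paper seeks to explain.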
Robust Single Machine Scheduling with Random Blocks in an Uncertain Environment

While scheduling problems in deterministic models are quite well investigated, the same problems in an uncertain environment often require further exploration and examination. In this paper we consider a tabu search method with a block approach for single machine scheduling in an uncertain environment modeled by random variables with the normal distribution. We propose a modification of the tabu search method which improves the robustness of the obtained solutions. The conducted computational experiments show that the proposed improvement yields much more robust solutions than those obtained with the classic block approach.

Wojciech Bożejko, Paweł Rajba, Mieczysław Wodecki
Empirical Analysis of Stochastic Methods of Linear Algebra

In this paper we present the results of an empirical study of stochastic projection and stochastic gradient descent methods as means of obtaining approximate inverses and preconditioners for iterative methods. Results of numerical experiments are used to analyse scalability and the overall suitability of the selected methods as practical tools for the treatment of large linear systems of equations. The results are preliminary, as the code is not yet fully optimized.

Mustafa Emre Şahin, Anton Lebedev, Vassil Alexandrov
Wind Field Parallelization Based on Python Multiprocessing to Reduce Forest Fire Propagation Prediction Uncertainty

Forest fires cause significant losses from an ecological, social and economic point of view, and the climate emergency will only increase the occurrence of such disasters. In this context, forest fire propagation prediction is a key tool for fighting these natural hazards efficiently and mitigating the damage. However, forest fire spread simulators require a set of input parameters that, in many cases, cannot be measured and must be estimated indirectly, introducing uncertainty into the predictions. One such parameter is the wind. Wind can be measured at meteorological stations and predicted with meteorological models such as WRF, but the wind components are highly affected by the terrain topography, introducing a large degree of uncertainty into forest fire spread predictions. It is therefore necessary to use wind field models that estimate wind speed and direction at very high resolution to reduce this uncertainty. Such models are time-consuming and are usually executed under strict time constraints, so it is critical to minimize the execution time, taking into account that in many cases the model cannot be run on a supercomputer but must be executed on commodity hardware available in the field or at control centers. This work introduces a new parallelization approach for wind field calculation based on Python multiprocessing. The results show that the new approach reduces execution time on a single personal computer.

Gemma Sanjuan, Tomas Margalef, Ana Cortés
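The parallel structure described above, independent per-cell wind computations distributed over worker processes with Python's multiprocessing, can be sketched like this; the terrain correction formula is a toy stand-in for the real wind field model:

```python
from multiprocessing import Pool

# Toy per-cell computation standing in for the wind field model; the real
# model produces a high-resolution, topography-corrected field, while here
# we only apply a made-up elevation factor so the parallel shape is visible.
def wind_at_cell(cell):
    x, y, elevation = cell
    base_speed = 5.0  # m/s free-stream wind (illustrative value)
    return (x, y, base_speed * (1.0 + 0.001 * elevation))

def compute_wind_field(cells, workers=4):
    # Every grid cell is independent, so a process pool map parallelizes
    # the evaluation with no inter-process communication.
    with Pool(processes=workers) as pool:
        return pool.map(wind_at_cell, cells)

if __name__ == "__main__":
    grid = [(x, y, 100.0 * (x + y)) for x in range(10) for y in range(10)]
    field = compute_wind_field(grid)
```

Using processes rather than threads sidesteps the Python global interpreter lock, which is why multiprocessing is the natural choice for CPU-bound per-cell work on a single personal computer.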
Risk Profiles of Financial Service Portfolio for Women Segment Using Machine Learning Algorithms

Typically, women are scored with a lower financial risk than men. However, the variables and indicators that lead to such results are not fully understood. Furthermore, the stochastic nature of the data makes it difficult to generate a suitable profile for offering an adequate financial portfolio to the women segment. As the amount, variety, and speed of data increase, so too does the uncertainty inherent within it, leading to a lack of confidence in the results. In this research, machine learning techniques are used for data analysis. In this way, faster and more accurate results are obtained than with traditional models (such as statistical models or linear programming), in addition to better scalability.

Jessica Ivonne Lozano-Medina, Laura Hervert-Escobar, Neil Hernandez-Gress
Multidimensional BSDEs with Mixed Reflections and Balance Sheet Optimal Switching Problem

In this paper, we study a system of multidimensional coupled reflected backward stochastic differential equations (RBSDEs) with interconnected generators and barriers and mixed reflections, i.e. oblique and normal reflections. This system of equations arises in the context of the optimal switching problem when both sides of the balance sheet are considered. The problem incorporates both the action of switching between investment modes and the action of abandoning the investment project before its maturity once it becomes unprofitable. Pricing such real options (switch option and abandon option) is equivalent to solving the system of coupled RBSDEs considered in the paper, for which we show the existence of a continuous adapted minimal solution via a Picard iteration method.

Rachid Belfadli, M’hamed Eddahbi, Imade Fakhouri, Youssef Ouknine

Teaching Computational Science

Frontmatter
Modeling and Automatic Code Generation Tool for Teaching Concurrent and Parallel Programming by Finite State Processes

Understanding concurrent and parallel programming can be very hard for students on first contact. This paper describes the development and experimental results of the FSP2JAVA tool. The proposed method starts from modeling concurrent systems with Finite State Processes (FSP), and then automatically generates code from the model. This is achieved by a domain-specific language compiler which translates the FSP model into Java code. The FSP2JAVA tool is available for free download on GitHub. We argue that this tool helps in teaching concurrent systems, since it abstracts away complex language concerns and encourages the student to focus on the fundamental concepts of modeling and analysis.

Edwin Monteiro, Kelvinn Pereira, Raimundo Barreto
Automatic Feedback Provision in Teaching Computational Science

We describe a method of automatic feedback provision for students learning computational science and data science methods in Python. We have implemented, used and refined this system since 2009 for growing student numbers, and summarise its design and our experience of using it. The core idea is to use a unit testing framework: the teacher creates a set of unit tests, and the student code is tested by running these tests. With our implementation, students typically submit work for assessment and receive feedback by email within a few minutes of submission. The choice of tests and the reporting back to the student are chosen to optimise the educational value for the students. The system very significantly reduces the staff time required to establish whether a student's solution is correct, and shifts the emphasis of computing laboratory contact time from assessing correctness to providing guidance. The self-paced nature of the automatic feedback supports a student-centred learning approach: students can re-submit their work repeatedly to iteratively improve their solution, and they enjoy using the system. We include an evaluation of the system based on its use in a class of 425 students.

Hans Fangohr, Neil O’Brien, Ondrej Hovorka, Thomas Kluyver, Nick Hale, Anil Prabhakar, Arti Kashyap
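The core idea, teacher-written unit tests executed against a student submission, can be sketched as follows; the assignment, the mean function, and the tests are invented for illustration, not taken from the paper's actual course material:

```python
import unittest

# Hypothetical student submission: the assignment asks for a function
# mean(xs) returning the arithmetic mean of a non-empty list.
def mean(xs):
    return sum(xs) / len(xs)

# Teacher-authored tests; in the kind of system described, these run
# automatically on submission and a feedback report is emailed back.
class MeanTests(unittest.TestCase):
    def test_integers(self):
        self.assertEqual(mean([1, 2, 3]), 2)

    def test_floats(self):
        self.assertAlmostEqual(mean([0.1, 0.3]), 0.2)

# Run the suite and collect a result object for the feedback report
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MeanTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The `result` object records which tests passed or failed, which is the raw material a feedback system formats into the report sent back to the student.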
Computational Science vs. Zombies

Computational Science attempts to solve scientific problems through the design and application of mathematical models. Researchers and research teams need domain knowledge, along with skills in computing and, increasingly, data science expertise. We have been working to draw students into STEM, Computational Science and Data Science through our Team Zombie outreach program. The program leads the students through a scenario of a disease outbreak in their local area, which turns out to be a potential zombie apocalypse. They become part of Team Zombie, a multi-disciplinary response team that investigates the outbreak; models the spread and potential interventions; works towards cures or vaccines; and provides options for detection and monitoring. Throughout the activities we refer to real-world situations where the techniques are applied. Our program has engaged students from primary school through to university level, raising awareness of the range of approaches to problem solving through models and simulations. We hope this will inspire students to choose courses and careers in computational and data science.

Valerie Maxville, Brodie Sandford
Supporting Education in Algorithms of Computational Mathematics by Dynamic Visualizations Using Computer Algebra System

Computer algebra systems (CAS) are widely used in universities to support calculation and visualization in the teaching of mathematical subjects. In this paper we present some examples of dynamic visualizations prepared for students of the Warsaw University of Life Sciences using Mathematica. Visualizations of the simplex algorithm and the Karush-Kuhn-Tucker algorithm are presented. We also describe a didactic experiment with students of the Production Engineering Faculty of the Warsaw University of Life Sciences using a dynamic visualization of the network flow problem.

Włodzimierz Wojas, Jan Krupa
Teaching About the Social Construction of Reality Using a Model of Information Processing

This paper proposes leveraging the recent fusing of Big Data, computer modeling, and complexity science for teaching students about complexity. The emergence and evolution of information orders, including social structural orders, is a product of social interactions. A simple computational model of how social interaction leads to emergent cultures is proposed for teaching students the mechanisms of complexity’s development. A large-scale poker game is also described where students learn the theory and principles behind the model in the course of play. Using the model and game can help instructors achieve learning objectives that include understanding the importance of diversity, how multiple realities can exist simultaneously, and why majority group members are usually unaware of their privileges.

Loren Demerath, James Reid, E. Dante Suarez
Bringing Harmony to Computational Science Pedagogy

Inherent in a musical composition are properties that are analogous to properties and laws that exist in mathematics, physics, and psychology. It follows that computation using musical models can provide results that are analogous to results provided by mathematical, physical, and psychological models. The audible output of a carefully constructed musical model can demonstrate properties and relationships in a way that is immediately perceptible. For this reason, the study of musical computation is a worthwhile pursuit as a pedagogical tool for computational science. Proposed in this paper is a curriculum for implementing the study of musical computation within a larger computer science or computational science program at a college or university. A benefit of such a curriculum is that it provides a way to integrate artistic endeavors into a STREAM program, while maintaining the mathematical foundations of STEM. Furthermore, the study of musical computation aligns well with the arts-related components of Human-Centered Computing. The curriculum is built on two hypotheses. The first is that the cognitive, creative, and structural processes involved in both musical composition and computer programming are similar enough that skilled computer scientists, with or without musical backgrounds, can learn to use programming languages to compose interesting, expressive, and sophisticated musical works. The second is that the links between music, mathematics, and several branches of science are strong enough that skilled computational scientists can create musical models designed using the vocabularies of mathematics and science. While the curriculum defined in this paper focuses on musical computation, the design principles behind it may be applied to other disciplines.

Richard Roth, William Pierce

UNcErtainty QUantIficatiOn for ComputationAl ModeLs

Frontmatter
Intrusive Polynomial Chaos for CFD Using OpenFOAM

We present the formulation and implementation of a stochastic Computational Fluid Dynamics (CFD) solver based on the widely used finite volume library OpenFOAM. The solver employs a Generalized Polynomial Chaos (gPC) expansion to (a) quantify the uncertainties associated with fluid flow simulations, and (b) study the non-linear propagation of these uncertainties. The aim is to accurately estimate the uncertainty in the result of a CFD simulation at a lower computational cost than the standard Monte Carlo (MC) method. The gPC approach is based on the spectral decomposition of the random variables in terms of basis polynomials containing the randomness and unknown deterministic expansion coefficients. As opposed to the more commonly used non-intrusive approach, in this work we use the intrusive variant of the gPC method, in the sense that the deterministic equations are modified to directly solve for the (coupled) expansion coefficients. We have tested the intrusive gPC implementation on both laminar and turbulent flow problems in CFD, and the results are in accordance with the analytical and non-intrusive approaches. The stochastic solver thus developed can serve as an alternative for uncertainty quantification, especially when non-intrusive methods are significantly expensive, which is true for many stochastic CFD problems.

Jigar Parekh, Roel Verstappen
Distributions of a General Reduced-Order Dependence Measure and Conditional Independence Testing

We study distributions of a general reduced-order dependence measure and apply the results to conditional independence testing and feature selection. Experiments with Bayesian Networks indicate that using the introduced test in the Grow and Shrink algorithm instead of Conditional Mutual Information yields promising results for Markov Blanket discovery in terms of F measure.

Mariusz Kubkowski, Małgorzata Łazȩcka, Jan Mielniczuk
MCMC for Bayesian Uncertainty Quantification from Time-Series Data

In computational neuroscience, Neural Population Models (NPMs) are mechanistic models that describe brain physiology in a range of different states. Within computational neuroscience there is growing interest in the inverse problem of inferring NPM parameters from recordings such as the EEG (electroencephalogram). Uncertainty quantification is essential in this application area in order to infer the mechanistic effect of interventions such as anaesthesia. This paper presents software for Bayesian uncertainty quantification in the parameters of NPMs from approximately stationary data using Markov Chain Monte Carlo (MCMC). Modern MCMC methods require first order (and in some cases higher order) derivatives of the posterior density. The software presented offers two distinct methods of evaluating derivatives: finite differences and exact derivatives obtained through Algorithmic Differentiation (AD). For AD, two different implementations are used: the open source Stan Math Library and the commercially licensed tool distributed by NAG (Numerical Algorithms Group). The use of derivative information in MCMC sampling is demonstrated through a simple example, the noise-driven harmonic oscillator, and the different methods for computing derivatives are compared. The software is written in a modular, object-oriented way such that it can be extended to derivative-based MCMC for other scientific domains.

Philip Maybank, Patrick Peltzer, Uwe Naumann, Ingo Bojak
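The two derivative routes compared above, finite differences versus exact (AD-style) derivatives, can be illustrated on a toy log-posterior; the quadratic density is an illustrative stand-in for an NPM posterior, not a model from the paper:

```python
# Toy log-posterior (standard normal) standing in for an NPM posterior;
# gradient-based MCMC samplers need its first derivative.
def log_post(theta):
    return -0.5 * theta * theta

def grad_exact(theta):
    # What algorithmic differentiation would deliver for this model
    return -theta

def grad_fd(f, theta, h=1e-5):
    # Central finite difference: O(h^2) truncation error, two function
    # evaluations per parameter, and sensitivity to the step size h.
    return (f(theta + h) - f(theta - h)) / (2.0 * h)

err = abs(grad_fd(log_post, 1.3) - grad_exact(1.3))
```

For this quadratic the central difference is nearly exact, but for realistic posteriors the cost of two evaluations per parameter and the step-size sensitivity are what motivate AD.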
Uncertainty Quantification for Multiscale Fusion Plasma Simulations with VECMA Toolkit

Within the VECMAtk platform, we perform uncertainty quantification (UQ) for multiscale fusion plasma simulations. The goal of VECMAtk is to provide modular and automated tools for a wide range of applications to achieve robust and actionable results. Our aim in the current paper is to incorporate suitable features to build a UQ workflow over the existing fusion codes and to tackle simulations on high performance parallel computers.

Jalal Lakhlili, Olivier Hoenen, Onnie O. Luk, David P. Coster
Sensitivity Analysis of Soil Parameters in Crop Model Supported with High-Throughput Computing

Uncertainty of input parameters in crop models and high costs of their experimental evaluation provide an exciting opportunity for sensitivity analysis, which allows identifying the most significant parameters for different crops. In this research, we perform a sensitivity analysis of soil parameters which play an essential role in plant growth for the MONICA agro-ecosystem model. We utilize Sobol’ sensitivity indices to estimate the importance of main soil parameters for several crop cultures (soybeans, sugar beet and spring barley). High-throughput computing allows us to speed up the computations by more than thirty times and increase the number of sampling points significantly. We identify soil indicators that play an essential role in crop yield productivity and show that their influence is the highest in the topsoil layer.

Mikhail Gasanov, Anna Petrovskaia, Artyom Nikitin, Sergey Matveev, Polina Tregubova, Maria Pukalchik, Ivan Oseledets
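The Sobol' indices mentioned above can be estimated by plain Monte Carlo with a pick-and-freeze (Saltelli-style) estimator; the test function, sample size, and seed below are illustrative assumptions with known exact indices, not data from the paper:

```python
import random

rng = random.Random(42)

# Test function with known first-order Sobol' indices:
# Var = 1/12 + 4/12, so S1 = 0.2 and S2 = 0.8.
def f(x):
    return x[0] + 2.0 * x[1]

N, d = 50000, 2
# Two independent sample matrices over the unit square
A = [[rng.random() for _ in range(d)] for _ in range(N)]
B = [[rng.random() for _ in range(d)] for _ in range(N)]

fA = [f(a) for a in A]
fB = [f(b) for b in B]
mean_fA = sum(fA) / N
var_f = sum((y - mean_fA) ** 2 for y in fA) / N

def first_order_index(i):
    # Pick-and-freeze: column i of A is replaced by column i of B
    f_ABi = (f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B))
    return sum(yb * (yab - ya)
               for yb, yab, ya in zip(fB, f_ABi, fA)) / N / var_f

S1, S2 = first_order_index(0), first_order_index(1)
```

Each index needs one extra batch of model runs per input, which is exactly the embarrassingly parallel workload that high-throughput computing accelerates in the crop model study.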
A Bluff-and-Fix Algorithm for Polynomial Chaos Methods

Stochastic Galerkin methods can be used to approximate the solution to a differential equation in the presence of uncertainties represented as stochastic inputs or parameters. The strategy is to express the resulting stochastic solution using $$M + 1$$ terms of a polynomial chaos expansion and then derive and solve a deterministic, coupled system of PDEs with standard numerical techniques. One of the critical advantages of this approach is its provable convergence as M increases. The challenge is that the solution to the M system cannot easily reuse an already-existing computer solution to the $$M-1$$ system. We present a promising iterative strategy to address this issue. Numerical estimates of the accuracy and efficiency of the proposed algorithm (bluff-and-fix) demonstrate that it can be more effective than using monolithic methods to solve the whole M + 1 system directly.

Laura Lyman, Gianluca Iaccarino
Markov Chain Monte Carlo Methods for Fluid Flow Forecasting in the Subsurface

Accurate predictions in subsurface flows require forecasting quantities of interest by applying models of subsurface fluid flow with very little available data. In general, a Bayesian statistical approach combined with a Markov chain Monte Carlo (MCMC) algorithm can be used to quantify the uncertainties associated with subsurface parameters. However, the complex nature of flow simulators presents considerable challenges to accessing the inherent uncertainty in all simulator parameters of interest. In this work we focus on the transport of contaminants in the heterogeneous permeability field of an aquifer. In our problem the limited data come in the form of contaminant fractional flow curves at the aquifer's monitoring wells. We employ a Karhunen-Loève expansion to truncate the stochastic dimension of the permeability field, which helps to reduce the computational burden. To reduce it further, we implement our numerical simulator with parallel programming on Graphics Processing Units (GPUs). In this paper we mainly present a comparative study of two well-known MCMC methods, two-stage MCMC and DiffeRential Evolution Adaptive Metropolis (DREAM), for the characterization of the two-dimensional aquifer. A thorough statistical analysis of ensembles of the contaminant fractional flow curves from both MCMC methods is presented. The analysis indicates that although the average fractional flow curves are quite similar, both the time-dependent ensemble variances and the posterior analysis differ considerably between the two methods.
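The Bayesian machinery underlying both methods in the abstract can be sketched with a basic random-walk Metropolis sampler. This is a minimal one-parameter illustration, not the paper's two-stage or DREAM sampler, and the linear forward map, noise level, and step size below are assumptions chosen for demonstration.

```python
import numpy as np

def random_walk_metropolis(log_post, theta0, n_steps=20000, step=0.1, seed=0):
    """Minimal random-walk Metropolis sampler: propose a Gaussian
    perturbation and accept with probability min(1, pi(prop) / pi(cur))."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_steps)
    theta, lp = theta0, log_post(theta0)
    for k in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[k] = theta
    return chain

# Hypothetical toy inverse problem: infer a scalar log-permeability theta
# from noisy observations d = G(theta) + noise with an identity forward map.
rng = np.random.default_rng(1)
true_theta, sigma = 2.0, 0.3
data = true_theta + sigma * rng.standard_normal(50)

def log_post(theta):
    # Gaussian likelihood plus a standard-normal prior (unnormalised)
    return -0.5 * np.sum((data - theta) ** 2) / sigma**2 - 0.5 * theta**2

chain = random_walk_metropolis(log_post, theta0=0.0)
posterior_mean = chain[5000:].mean()   # discard burn-in
```

In the paper's setting the single scalar is replaced by the truncated Karhunen-Loève coefficients of the permeability field, and each `log_post` evaluation requires a full GPU flow simulation, which is what motivates cheaper screening stages such as two-stage MCMC.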

Alsadig Ali, Abdullah Al-Mamun, Felipe Pereira, Arunasalam Rahunanthan
Backmatter
Title
Computational Science – ICCS 2020
Edited by
Dr. Valeria V. Krzhizhanovskaya
Dr. Gábor Závodszky
Michael H. Lees
Prof. Jack J. Dongarra
Prof. Dr. Peter M. A. Sloot
Sérgio Brissos
João Teixeira
Copyright Year
2020
Electronic ISBN
978-3-030-50436-6
Print ISBN
978-3-030-50435-9
DOI
https://doi.org/10.1007/978-3-030-50436-6
