Computational Science – ICCS 2020

20th International Conference, Amsterdam, The Netherlands, June 3–5, 2020, Proceedings, Part VI

  • 2020
  • Book

About this book

The seven-volume set LNCS 12137, 12138, 12139, 12140, 12141, 12142 and 12143 constitutes the proceedings of the 20th International Conference on Computational Science, ICCS 2020, held in Amsterdam, The Netherlands, in June 2020. The total of 101 papers and 248 workshop papers presented in this book set were carefully reviewed and selected from 719 submissions (230 submissions to the main track and 489 submissions to the workshops). The papers were organized in topical sections as follows: Part I: ICCS Main Track; Part II: ICCS Main Track; Part III: Advances in High-Performance Computational Earth Sciences: Applications and Frameworks; Agent-Based Simulations, Adaptive Algorithms and Solvers; Applications of Computational Methods in Artificial Intelligence and Machine Learning; Biomedical and Bioinformatics Challenges for Computer Science; Part IV: Classifier Learning from Difficult Data; Complex Social Systems through the Lens of Computational Science; Computational Health; Computational Methods for Emerging Problems in (Dis-)Information Analysis; Part V: Computational Optimization, Modelling and Simulation; Computational Science in IoT and Smart Systems; Computer Graphics, Image Processing and Artificial Intelligence; Part VI (this volume): Data Driven Computational Sciences; Machine Learning and Data Assimilation for Dynamical Systems; Meshfree Methods in Computational Sciences; Multiscale Modelling and Simulation; Quantum Computing Workshop.

Table of Contents

Frontmatter

Data Driven Computational Sciences

Frontmatter
A High-Performance Implementation of Bayesian Matrix Factorization with Limited Communication

Matrix factorization is a very common machine learning technique in recommender systems. Bayesian Matrix Factorization (BMF) algorithms are attractive because of their ability to quantify uncertainty in their predictions and avoid over-fitting, combined with high prediction accuracy. However, they have not been widely used on large-scale data because of their prohibitive computational cost. In recent work, efforts have been made to reduce the cost, both by improving the scalability of the BMF algorithm and of its implementation, but so far mainly separately. In this paper we show that the state of the art of both approaches to scalability can be combined. We combine the recent highly scalable Posterior Propagation algorithm for BMF, which parallelizes computation of blocks of the matrix, with a distributed BMF implementation that uses asynchronous communication within each block. We show that the combination of the two methods gives substantial improvements in the scalability of BMF on web-scale datasets, when the goal is to reduce the wall-clock time.

Tom Vander Aa, Xiangju Qin, Paul Blomstedt, Roel Wuyts, Wilfried Verachtert, Samuel Kaski
Early Adaptive Evaluation Scheme for Data-Driven Calibration in Forest Fire Spread Prediction

Forest fires severely affect many ecosystems every year, leading to large environmental damage, casualties and economic losses. Established and emerging technologies are used to help wildfire analysts determine fire behavior and spread, aiming at more accurate prediction results and an efficient use of resources in fire fighting. Natural hazards simulations need to deal with input data uncertainty and its impact on prediction results, usually resorting to compute-intensive calibration techniques. In this paper, we propose a new evaluation technique capable of reducing the overall calibration time by 60% when compared to current data-driven approaches. This is achieved by means of an adaptive evaluation technique based on periodic monitoring of the fire spread prediction error ε, estimated by the normalized symmetric difference, for each simulation run. Our new strategy avoids wasting too much computing time running unfit individuals, thanks to an early adaptive evaluation.

Edigley Fraga, Ana Cortés, Andrés Cencerrado, Porfidio Hernández, Tomàs Margalef
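
Since the calibration loop above hinges on this error measure, here is a minimal sketch of how a normalized symmetric difference between predicted and observed burned-area maps might be computed; the boolean-grid encoding and the normalization by the observed area are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def normalized_symmetric_difference(pred, real):
    """Error between predicted and observed burned-area maps.

    pred, real: boolean arrays marking burned cells. Cells burned in
    exactly one of the two maps form the symmetric difference; here it
    is normalized by the observed burned area (an assumed convention).
    """
    return np.logical_xor(pred, real).sum() / real.sum()

# Toy example: 2 mismatched cells over 3 observed burned cells -> ~0.67
pred = np.array([1, 1, 1, 0, 0], dtype=bool)
real = np.array([0, 1, 1, 1, 0], dtype=bool)
print(normalized_symmetric_difference(pred, real))
```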
Strategic Use of Data Assimilation for Dynamic Data-Driven Simulation

Dynamic data-driven simulation (DDDS) incorporates real-time measurement data to improve simulation models during model run-time. Data assimilation (DA) methods aim to best approximate model states with imperfect measurements, where Particle Filters (PFs) are commonly used with discrete-event simulations. In this paper, we study three critical conditions of DA using PFs: (1) the time interval of iterations, (2) the number of particles and (3) the level of actual and perceived measurement errors (or noises), and provide recommendations on how to strategically use data assimilation for DDDS considering these conditions. The results show that the estimation accuracy in DA is more constrained by the choice of time intervals than by the number of particles. Good accuracy can be achieved without many particles if the time interval is sufficiently short. An overestimation of the level of measurement errors has advantages over an underestimation. Moreover, a slight overestimation gives better estimation accuracy and is more responsive to system changes than an accurately perceived level of measurement errors.

Yubin Cho, Yilin Huang, Alexander Verbraeck
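
To make the three conditions studied above concrete, the sketch below implements a minimal bootstrap particle filter; the scalar autoregressive model, Gaussian likelihood, and all parameter names are illustrative assumptions rather than the discrete-event setup of the paper. Here obs_std plays the role of the perceived measurement error and dt_steps that of the assimilation time interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y_obs, n_particles, step, obs_std, dt_steps):
    """Minimal bootstrap particle filter (sketch).

    y_obs:     sequence of scalar measurements
    step:      state transition, x_next = step(x) + model noise
    obs_std:   assumed ("perceived") measurement error level
    dt_steps:  model steps between two assimilation instants
    """
    x = rng.normal(0.0, 1.0, n_particles)            # initial ensemble
    estimates = []
    for y in y_obs:
        for _ in range(dt_steps):                    # propagate particles
            x = step(x) + rng.normal(0.0, 0.1, n_particles)
        w = np.exp(-0.5 * ((y - x) / obs_std) ** 2)  # likelihood weights
        w /= w.sum()
        estimates.append(np.sum(w * x))              # weighted estimate
        x = x[rng.choice(n_particles, n_particles, p=w)]  # resample
    return np.array(estimates)

# Toy usage: noisy observations of a decaying signal
truth = 0.9 ** np.arange(10)
obs = truth + rng.normal(0.0, 0.3, 10)
print(bootstrap_pf(obs, 500, lambda x: 0.9 * x, obs_std=0.3, dt_steps=1))
```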
PDPNN: Modeling User Personal Dynamic Preference for Next Point-of-Interest Recommendation

Next Point of Interest (POI) recommendation is an important aspect of information feeds for Location-Based Social Networks (LBSNs). The boom in LBSN platforms such as Foursquare, Twitter, and Yelp has motivated a considerable amount of research focused on POI recommendation within the last decade. Inspired by the success of deep neural networks in many fields, researchers are increasingly interested in using neural networks such as the Recurrent Neural Network (RNN) to make POI recommendations. Compared to traditional methods like Factorizing Personalized Markov Chain (FPMC) and Tensor Factorization (TF), neural network methods show great improvement in general sequence prediction. However, the user's personal preference, which is crucial for personalized POI recommendation, is not addressed well in existing works. Moreover, the user's personal preference is dynamic rather than static, which can guide predictions in different temporal and spatial contexts. To this end, we propose a new deep neural network model called the Personal Dynamic Preference Neural Network (PDPNN). The core of the PDPNN model includes two parts: one part learns the user's personal long-term preferences from the historical trajectories, and the other part learns the user's short-term preferences from the current trajectory. By introducing a similarity function that evaluates the similarity between the spatiotemporal contexts of the user's current trajectory and historical trajectories, PDPNN learns the user's personal dynamic preference from the user's long-term and short-term preferences. We conducted experiments on three real-world datasets, and the results show that our model outperforms current well-known methods.

Jinwen Zhong, Can Ma, Jiang Zhou, Weiping Wang
Cyber Attribution from Topological Patterns

We developed a crawler to collect live malware distribution network data from publicly available sources including Google Safe Browsing and VirusTotal. We then generated a dynamic graph with our visualization tool and performed malware attribution analysis. We found: 1) malware distribution networks form clusters rather than a single network; 2) those cluster sizes follow a power law; 3) there is a correlation between cluster size and the number of malware species in the cluster; 4) there is a correlation between the number of malware species and cyber events; and finally, 5) infrastructure components such as bridges, hubs, and persistent links play significant roles in malware distribution dynamics.

Yang Cai, Jose Andre Morales, Guoming Sun
Applications of Data Assimilation Methods on a Coupled Dual Porosity Stokes Model

Porous media and conduit coupled systems are heavily used in a variety of areas such as groundwater systems, petroleum extraction, and biochemical transport. A coupled dual-porosity Stokes model has been proposed to simulate the fluid flow in a dual-porosity media and conduits coupled system. Data assimilation is the discipline that studies the combination of mathematical models and observations. It can improve the accuracy of mathematical models by incorporating data, but it also brings challenges by increasing complexity and computational cost. In this paper, we study the application of data assimilation methods to the coupled dual-porosity Stokes model. We give a brief introduction to the coupled model and examine the performance of different data assimilation methods on a finite element implementation of the coupled dual-porosity Stokes system. We also study how observations on different variables of the system affect the data assimilation process.

Xiukun Hu, Craig C. Douglas
Spatiotemporal Filtering Pipeline for Efficient Social Networks Data Processing Algorithms

The investigation of location-based social networks (LBSNs) is an area gathering momentum, because understanding citizens' behavior on various scales can help improve quality of living, enhance urban management, and advance the development of smart cities. It is widely known, however, that the performance of algorithms for data mining and analysis heavily relies on the quality of input data. The main aim of this paper is to help LBSN researchers perform the preliminary step of data preprocessing and thus increase the efficiency of their algorithms. To that end, we propose a spatiotemporal data processing pipeline that is general enough to fit most problems related to working with LBSNs. The proposed pipeline includes four main stages: identification of suspicious profiles, background extraction, spatial context extraction, and fake transitions detection. The efficiency of the pipeline is demonstrated on three practical applications using different LBSNs: touristic itinerary generation using Facebook locations, sentiment analysis of an area with the help of Twitter and VK.com, and multiscale event detection from Instagram posts.

Ksenia Mukhina, Alexander Visheratin, Denis Nasonov
Normal Grouping Density Separation (NGDS): A Novel Object-Driven Indoor Point Cloud Partition Method

Precise segmentation/partition is an essential part of many point cloud processing strategies. In state-of-the-art methods, either the number of clusters or the expected supervoxel resolution needs to be carefully selected before segmentation, which makes these processes semi-supervised. The proposed Normal Grouping-Density Separation (NGDS) strategy, relying on both grouping normal vectors into cardinal directions and density-based separation, produces clusters of better quality (according to the quality measures used) than current state-of-the-art methods on a widely applied object-annotated indoor benchmark dataset. The method reaches, on average, a lower under-segmentation error than VCCS (by 45.9pp), Lin et al. (by 14.8pp), and SSP (by 26.2pp). Another metric, achievable segmentation accuracy, yields 92.1% across the tested dataset, which is higher than VCCS (by 14pp), Lin et al. (by 3.8pp), and SSP (by 10.3pp). The experiments carried out indicate the superiority of the proposed method as a partition/segmentation algorithm, a process that is usually a preprocessing stage of many object detection workflows.

Jakub Walczak, Grzegorz Andrzejczak, Rafał Scherer, Adam Wojciechowski

Machine Learning and Data Assimilation for Dynamical Systems

Frontmatter
Learning Hidden States in a Chaotic System: A Physics-Informed Echo State Network Approach

We extend the Physics-Informed Echo State Network (PI-ESN) framework to reconstruct the evolution of an unmeasured state (hidden state) in a chaotic system. The PI-ESN is trained by using (i) data, which contains no information on the unmeasured state, and (ii) the physical equations of a prototypical chaotic dynamical system. Non-noisy and noisy datasets are considered. First, it is shown that the PI-ESN can accurately reconstruct the unmeasured state. Second, the reconstruction is shown to be robust with respect to noisy data, which means that the PI-ESN acts as a denoiser. This paper opens up new possibilities for leveraging the synergy between physical knowledge and machine learning to enhance the reconstruction and prediction of unmeasured states in chaotic dynamical systems.

Nguyen Anh Khoa Doan, Wolfgang Polifke, Luca Magri
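
As background for the echo state network papers in this section, the sketch below shows a purely data-driven ESN with a ridge-regression readout; the PI-ESN of the paper additionally penalizes violations of the governing physical equations during training, which is omitted here, and the reservoir size and toy signals are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_esn(n_in, n_res, rho=0.9):
    """Random reservoir scaled to spectral radius rho (echo state property)."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, U):
    """Collect reservoir states x_{t+1} = tanh(W_in u_t + W x_t)."""
    x = np.zeros(W.shape[0])
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: map sin(t) to cos(t); train the readout by ridge regression.
t = np.linspace(0, 20, 400)
U, Y = np.sin(t)[:, None], np.cos(t)[:, None]
W_in, W = make_esn(1, 100)
X = run_reservoir(W_in, W, U)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(100), X.T @ Y)
print(np.mean((X @ W_out - Y) ** 2))   # small training error
```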
Learning Ergodic Averages in Chaotic Systems

We propose a physics-informed machine learning method to predict the time average of a chaotic attractor. The method is based on the hybrid echo state network (hESN). We assume that the system is ergodic, so the time average is equal to the ergodic average. Compared to conventional echo state networks (ESN) (purely data-driven), the hESN uses additional information from an incomplete, or imperfect, physical model. We evaluate the performance of the hESN and compare it to that of an ESN. This approach is demonstrated on a chaotic time-delayed thermoacoustic system, where the inclusion of a physical model significantly improves the accuracy of the prediction, reducing the relative error from 48% to 1%. This improvement is obtained at the low extra cost of solving a small number of ordinary differential equations that contain physical information. This framework shows the potential of using machine learning techniques combined with prior physical knowledge to improve the prediction of time-averaged quantities in chaotic systems.

Francisco Huhn, Luca Magri
Supermodeling: The Next Level of Abstraction in the Use of Data Assimilation

Data assimilation (DA) is a key procedure that synchronizes a computer model with real observations. However, when modeling overparametrized complex systems, the cost of parameter estimation through data assimilation can grow exponentially, leading to unacceptable computational overhead, substantial inaccuracies in parameter matching, and wrong predictions. Here we define a Supermodel as a kind of ensembling scheme, which consists of a few sub-models representing various instances of the baseline model. The sub-models differ in parameter sets and are synchronized through couplings between the most sensitive dynamical variables. We demonstrate that after a short pretraining of the fully parametrized small sub-model ensemble, and then training a few latent parameters of the low-parameterized Supermodel, we can outperform in efficiency and accuracy the baseline model matched to data by a classical DA procedure.

Marcin Sendera, Gregory S. Duane, Witold Dzwinel
A New Multilayer Network Construction via Tensor Learning

Multilayer networks have proved suitable for extracting and providing dependency information about different complex systems. The construction of these networks is difficult and is mostly done with a static approach, neglecting time-delayed interdependencies. Tensors are objects that naturally represent multilayer networks, and in this paper we propose a new methodology based on Tucker tensor autoregression to build a multilayer network directly from data. This methodology captures within- and between-layer connections and makes use of a filtering procedure to extract relevant information and improve visualization. We show the application of this methodology to different stationary fractionally differenced financial data. We argue that our result is useful for understanding the dependencies across three different aspects of financial risk, namely market risk, liquidity risk, and volatility risk. Indeed, we show how the resulting visualization is a useful tool for risk managers, depicting dependency asymmetries between different risk factors and accounting for delayed cross dependencies. The constructed multilayer network shows a strong interconnection between the volumes and prices layers across all the stocks considered, while a lower number of interconnections between the uncertainty measures is identified.

Giuseppe Brandi, Tiziana Di Matteo
Neural Assimilation

We introduce a new neural network for Data Assimilation (DA). DA is the approximation of the true state of some physical system at a given time, obtained by combining time-distributed observations with a dynamic model in an optimal way. The typical assimilation scheme is made up of two major steps: a prediction and a correction of the prediction by including information provided by observed data. This is the so-called prediction-correction cycle. Classical methods for DA include the Kalman filter (KF). The KF can provide a rich information structure about the solution, but it is often complex and time-consuming. In operational forecasting there is insufficient time to restart a run from the beginning with new data. Therefore, data assimilation should enable real-time utilization of data to improve predictions, which mandates the choice of an efficient data assimilation algorithm. Due to this necessity, we introduce, in this paper, Neural Assimilation (NA), a coupled neural network made of two Recurrent Neural Networks trained on forecast data and observed data respectively. We prove that the solution of NA is the same as that of the KF. As NA is trained on both forecast and observed data, after the training phase NA is used for prediction without the need for a correction given by the observations. This avoids the prediction-correction cycle, making the whole process very fast. Experimental results are provided, and NA is tested to improve the prediction of oxygen diffusion across the Blood-Brain Barrier (BBB).

Rossella Arcucci, Lamya Moutiq, Yi-Ke Guo
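
For reference, the prediction-correction cycle that NA is designed to avoid is shown below for the textbook linear Kalman filter; all operators are assumed linear and known, which is a simplification and not the paper's neural formulation.

```python
import numpy as np

def kalman_step(x, P, y, M, H, Q, R):
    """One prediction-correction cycle of the linear Kalman filter.

    x, P : current state estimate and its error covariance
    y    : new observation
    M, H : model and observation operators (matrices)
    Q, R : model and observation error covariances
    """
    x_f = M @ x                        # prediction
    P_f = M @ P @ M.T + Q
    S = H @ P_f @ H.T + R              # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)      # correction by the observation
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# 1D toy: a slowly decaying state observed directly
x, P = np.array([1.0]), np.eye(1)
x, P = kalman_step(x, P, y=np.array([0.8]), M=0.9 * np.eye(1),
                   H=np.eye(1), Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
print(x, P)
```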
A Machine-Learning-Based Importance Sampling Method to Compute Rare Event Probabilities

We develop a novel computational method for evaluating the extreme excursion probabilities arising from random initialization of nonlinear dynamical systems. The method uses excursion probability theory to formulate a sequence of Bayesian inverse problems that, when solved, yields the biasing distribution. Solving multiple Bayesian inverse problems can be expensive; more so in higher dimensions. To alleviate the computational cost, we build machine-learning-based surrogates to solve the Bayesian inverse problems that give rise to the biasing distribution. This biasing distribution can then be used in an importance sampling procedure to estimate the extreme excursion probabilities.

Vishwas Rao, Romit Maulik, Emil Constantinescu, Mihai Anitescu
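
The core mechanism, sampling from a biasing distribution and reweighting by the likelihood ratio, can be illustrated independently of the paper's excursion-theory machinery; below, a Gaussian tail probability stands in for the rare event, and the shifted-Gaussian biasing distribution is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rare event: X > 4 for X ~ N(0, 1). Plain Monte Carlo almost never sees
# it; sampling from a biasing density q = N(4, 1) and reweighting by the
# likelihood ratio p/q gives an unbiased, low-variance estimate.
n = 100_000
z = rng.normal(4.0, 1.0, n)                              # samples from q
lr = np.exp(-0.5 * z**2) / np.exp(-0.5 * (z - 4.0)**2)   # p(z) / q(z)
print(np.mean((z > 4.0) * lr))   # close to 1 - Phi(4) ~ 3.17e-5
```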
Node Classification in Complex Social Graphs via Knowledge-Graph Embeddings and Convolutional Neural Network

The interactions between humans and their environment, comprising living and non-living entities, can be studied via Social Network Analysis (SNA). Node classification, as well as community detection tasks, are still open research problems in SNA, which has therefore become an interesting and appealing domain in Artificial Intelligence (AI) research. Immanent facts about social network structures can be effectively harnessed for training AI models in a bid to solve node classification and community detection problems in SNA. Hence, crucial aspects such as the individual attributes of spatial social actors and the underlying patterns of relationships binding these social actors must be taken into consideration when analyzing the social network; these factors determine the nature and dynamics of a given social network. In this paper, we propose a unique framework, Representation Learning via Knowledge-Graph Embeddings and ConvNet (RLVECN), for studying and extracting meaningful facts from social network structures to aid node classification as well as community detection tasks. Our proposition utilizes an edge-sampling approach for exploiting features of the social graph, via learning the context of each actor with respect to neighboring actors/nodes, with the goal of generating a vector-space embedding per actor. Subsequently, these relatively low-dimensional vector embeddings are fed as input features to a downstream classifier for classification tasks on the social graph/network. RLVECN has been trained, tested, and evaluated on real-world social networks.

Bonaventure C. Molokwu, Shaon Bhatta Shuvo, Narayan C. Kar, Ziad Kobti
Accelerated Gaussian Convolution in a Data Assimilation Scenario

Machine Learning algorithms try to provide an adequate forecast for predicting and understanding a multitude of phenomena. However, due to the chaotic nature of real systems, it is very difficult to predict data: a small perturbation of the initial state can generate serious errors. Data Assimilation is used to estimate the best initial state of a system in order to predict future states accurately. Therefore, accurate and fast Data Assimilation can be considered a fundamental step for the entire Machine Learning process. Here, we deal with the Gaussian convolution operation, which is a central step of the Data Assimilation approach and, in general, of several data analysis procedures. In particular, we propose a parallel algorithm, based on the use of Recursive Filters, to approximate the Gaussian convolution in a very fast way. Tests and experiments confirm the efficiency of the proposed implementation.

Pasquale De Luca, Ardelio Galletti, Giulio Giunta, Livia Marcellino
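
A serial sketch of the underlying idea, approximating Gaussian convolution by iterated first-order recursive filters, is given below; the coefficient follows one classical variance-matching derivation, boundary effects are ignored, and nothing of the paper's parallelization is represented.

```python
import numpy as np

def recursive_gaussian(s, sigma, K=4):
    """Approximate 1D Gaussian convolution of s with K forward/backward
    sweeps of a first-order recursive filter (sketch).

    alpha is chosen so the K sweeps reproduce variance sigma^2 under the
    classical geometric-impulse-response analysis (assumed derivation).
    """
    E = sigma**2 / (2 * K)                    # variance budget per sweep pair
    alpha = (2 * E + 1 - np.sqrt(4 * E + 1)) / (2 * E)
    out = np.asarray(s, dtype=float).copy()
    for _ in range(K):
        for i in range(1, len(out)):          # forward sweep
            out[i] = alpha * out[i - 1] + (1 - alpha) * out[i]
        for i in range(len(out) - 2, -1, -1): # backward sweep
            out[i] = alpha * out[i + 1] + (1 - alpha) * out[i]
    return out

impulse = np.zeros(101)
impulse[50] = 1.0
g = recursive_gaussian(impulse, sigma=5.0)    # bell-shaped response
print(g.sum())                                # unit DC gain preserves mass
```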
On the Complementary Role of Data Assimilation and Machine Learning: An Example Derived from Air Quality Analysis

We present a new formulation of the error covariances that derives from ensembles of model simulations, which captures terrain-dependent error correlations without the prohibitive cost of ensemble Kalman filtering. Error variances are obtained from innovation variances empirically related to concentrations using a large data set. We use a k-fold cross-validation approach to estimate the remaining parameters; by minimizing the cross-validation cost function, we obtain the optimal parameters for an optimal Kalman gain. Combined with an innovation variance consistent with the sum of observation and background error variances in observation space, this yields a scheme that estimates the true error statistics, thus minimizing the true analysis error. Overall, the new error statistics formulation and estimation outperforms the older optimum interpolation scheme that used isotropic covariances with optimized covariance parameters. Yet the analysis scheme is computationally comparable to optimum interpolation and can be used in real-time operational applications. The new error statistics come as data-driven models, where we use validation techniques that are common in machine learning. We argue that the error statistics could benefit from a machine learning approach, while the air quality model and analysis scheme derive from physics and statistics.

Richard Ménard, Jean-François Cossette, Martin Deshaies-Jacques
Recursive Updates of Wildfire Perimeters Using Barrier Points and Ensemble Kalman Filtering

This paper shows how the wildfire simulation tool FARSITE is augmented with data assimilation capabilities that exploit the notion of barrier points and constraint-point ensemble Kalman filtering to update wildfire perimeter predictions. Based on observations of the actual fire perimeter, stationary points on the fire perimeter are identified as barrier points and combined with a recursive update of the initial fire perimeter. It is shown that the combination of barrier point identification and the use of barrier points as constraints in the ensemble Kalman filter gives a significant improvement in the forward prediction of the fire perimeter. The results are illustrated on the use case of the 2016 Sand Fire that burned in the Angeles National Forest, east of the Santa Clarita Valley in Los Angeles County, California.

Abhishek Subramanian, Li Tan, Raymond A. de Callafon, Daniel Crawl, Ilkay Altintas
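
For readers unfamiliar with the filter itself, the unconstrained stochastic EnKF analysis step underlying the paper looks roughly as follows; the barrier-point identification and the constraint handling are omitted, and the linear observation operator is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(E, y, H, obs_std):
    """Stochastic ensemble Kalman filter analysis step (sketch).

    E: (n_state, n_members) forecast ensemble
    y: observation vector, H: linear observation operator
    """
    n = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)             # ensemble anomalies
    HA = H @ A
    R = obs_std**2 * np.eye(len(y))
    K = A @ HA.T @ np.linalg.inv(HA @ HA.T + (n - 1) * R)  # Kalman gain
    Y = y[:, None] + rng.normal(0.0, obs_std, (len(y), n)) # perturbed obs
    return E + K @ (Y - H @ E)

# Toy: 3-variable state, 40 members, first component observed
E = rng.normal(0.0, 1.0, (3, 40))
H = np.array([[1.0, 0.0, 0.0]])
E = enkf_update(E, y=np.array([0.5]), H=H, obs_std=0.2)
print(E.mean(axis=1))   # analysis mean pulled towards the observation
```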

Meshfree Methods in Computational Sciences

Frontmatter
Finding Points of Importance for Radial Basis Function Approximation of Large Scattered Data

Interpolation and approximation methods are used in engineering and many other disciplines for various scientific discoveries. If the data domain is formed by scattered data, approximation methods may become very complicated as well as time-consuming. Usually, the given data is tessellated by some method, not necessarily the Delaunay triangulation, to produce triangular or tetrahedral meshes, after which approximation methods can be used to produce the surface. However, it is difficult to ensure the continuity and smoothness of the final interpolant along all adjacent triangles. In this contribution, a meshless approach based on radial basis functions (RBFs) is proposed. It is applicable to explicit functions of two variables and is suitable for all types of scattered data in general. The key point of RBF approximation is finding the important points that give a good approximation with high precision to the scattered data. Since compactly supported RBFs (CSRBFs) have limited influence in numerical computation, large data sets can be processed efficiently and very fast via an appropriate algorithm. The main advantage of the RBF approach is that it leads to a solution of a system of linear equations (SLE) Ax = b, so any efficient method for solving systems of linear equations can be used. In this study we propose a new method of determining the important points in the scattered data that produces a reconstructed surface of higher accuracy while maintaining the smoothness of the surface.

Vaclav Skala, Samsul Ariffin Abdul Karim, Martin Cervenka
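
A minimal sketch of CSRBF approximation with the Wendland C2 kernel follows; taking every fifth point as an "important" point is a placeholder for the selection strategy the paper proposes, and the dense least-squares solve stands in for whichever efficient SLE solver is preferred.

```python
import numpy as np

def wendland_c2(r, support):
    """Compactly supported Wendland C2 kernel: (1-r)^4 (4r+1) for r < 1."""
    r = r / support
    return np.where(r < 1.0, (1 - r)**4 * (4 * r + 1), 0.0)

def rbf_fit(centers, points, values, support):
    """Solve the linear system A w = b with A_ij = phi(||p_i - c_j||)."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    A = wendland_c2(d, support)
    w, *_ = np.linalg.lstsq(A, values, rcond=None)  # least squares: more
    return w                                        # points than centers

def rbf_eval(centers, w, query, support):
    d = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=2)
    return wendland_c2(d, support) @ w

# Toy: approximate f(x, y) = sin(x) + y on scattered 2D points
rng = np.random.default_rng(4)
pts = rng.uniform(0, 3, (200, 2))
vals = np.sin(pts[:, 0]) + pts[:, 1]
ctrs = pts[::5]                       # stand-in for "important" points
w = rbf_fit(ctrs, pts, vals, support=1.5)
print(np.max(np.abs(rbf_eval(ctrs, w, pts, 1.5) - vals)))  # max error
```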
High Accuracy Terrain Reconstruction from Point Clouds Using Implicit Deformable Model

Few previous works have studied the modeling of forest ground surfaces from LiDAR point clouds using implicit functions. [10] is a pioneer in this area. However, by design this approach produces over-smoothed surfaces, in particular in highly occluded areas, limiting its ability to reconstruct fine-grained terrain surfaces. This paper presents a method designed to finely approximate ground surfaces by relying on deep learning to separate vegetation from potential ground points, filling holes by blending multiple local approximations through the partition of unity principle, and then improving the accuracy of the reconstructed surfaces by pushing the surface towards the data points through an iterative convection model.

Jules Morel, Alexandra Bac, Takashi Kanai
Solving Non-linear Elasticity Problems by a WLS High Order Continuation

In this paper, a high order mesh-free continuation for nonlinear elasticity problems is presented. The proposal consists in introducing Weighted Least Squares (WLS) into a High Order Continuation (HOC). The WLS is employed to create shape functions using a local support domain. The HOC transforms the nonlinear problem into a succession of linear problems sharing the same tangent matrix. A strong formulation of the problem is adopted to avoid numerical integration and mesh generation. In this work, a numerical study has been conducted on nonlinear elasticity problems in order to study the behaviour and stability of the proposed approach. Several examples are investigated numerically in order to demonstrate its robustness and efficiency. The proposed approach has shown its efficiency in handling complex geometries and irregular nodal distributions compared with other approaches.

Oussama Elmhaia, Youssef Belaasilia, Bouazza Braikat, Noureddine Damil
Advanced Radial Basis Functions Mesh Morphing for High Fidelity Fluid-Structure Interaction with Known Movement of the Walls: Simulation of an Aortic Valve

High fidelity Fluid-Structure Interaction (FSI) can be tackled by means of non-linear Finite Element Models (FEM), suitable for capturing large deflections of structural parts interacting with fluids, and by means of detailed Computational Fluid Dynamics (CFD). High fidelity is gained thanks to the spatial resolution of the computational grids, and a key enabler for a proper exchange of information between the structural solver and the fluid one is the management of the interfaces. A class of applications consists of problems where the complex movement of the walls is known in advance or can be computed by FEM and has to be transferred to the CFD solver. This approach, known also as one-way FSI, requires effective methods for the time-marching adaption of the computational grid of the CFD model. A versatile and well established approach consists in a continuous update of the mesh, which is regenerated so as to fit the evolution of the moving walls. In this study, an innovative method based on Radial Basis Functions (RBF) mesh morphing is proposed, which allows keeping the same mesh topology, suitable for a continuous update of the shape. A set of key configurations is exactly guaranteed, whilst time interpolation is adopted between frames. The new framework is detailed and then demonstrated, adopting as a reference the established approach based on remeshing, for the study of a Polymeric-Prosthetic Heart Valve (P-PHV).

Leonardo Geronzi, Emanuele Gasparotti, Katia Capellini, Ubaldo Cella, Corrado Groth, Stefano Porziani, Andrea Chiappa, Simona Celi, Marco Evangelos Biancolini
Radial Basis Functions Mesh Morphing: A Comparison Between the Bi-harmonic Spline and the Wendland C2 Radial Function

Radial basis functions (RBFs) based mesh morphing makes it possible to adapt the shape of a computational grid onto a new one by updating the position of all its nodes. Usually nodes on surfaces are used as sources to define the interpolation field that is propagated into the volume mesh by the RBF. The method comes with two distinctive advantages that make it very flexible: it is mesh independent and it allows node-wise precision. There are, however, two major drawbacks: large data set management and excessive distortion of the morphed mesh that may occur. Two radial kernels are widely adopted to overcome such issues: the bi-harmonic spline (BHS) and the Wendland C2 (WC2). The BHS minimizes the mesh distortion but is computationally intensive, as a dense linear system has to be solved, whilst the WC2 leads to a sparse system that is easier to solve but which can lack smoothness. In this paper we compare these two radial kernels with a specific focus on mesh distortion. A detailed insight into the RBF fields resulting from the BHS and the WC2 is first provided by inspecting the intensity and the distribution of the strain for a very simple shape: a square plate with a central circular hole. An aeronautical example, ice formation onto the leading edge of a wing, is then presented, adopting an industrial software implementation based on the state of the art of RBF solvers.

Marco Evangelos Biancolini, Andrea Chiappa, Ubaldo Cella, Emiliano Costa, Corrado Groth, Stefano Porziani
Radial Basis Function Approximation Optimal Shape Parameters Estimation

Radial basis functions (RBFs) are widely used in many areas, especially for interpolation and approximation of scattered data, solution of ordinary and partial differential equations, etc. RBF methods belong to the meshless methods, which, in general, do not require tessellation of the data domain, e.g. by Delaunay triangulation. RBF meshless methods are independent of the dimensionality of the problem solved and mostly lead to the solution of a linear system of equations. Generally, the approximation is formed as a sum of weighted RBFs. RBFs fall into two classes, global and local, and mostly have a shape parameter determining the RBF behavior. In this contribution, we present preliminary results on the estimation of a vector of "optimal" shape parameters, which are different for each RBF used in the final formula for the RBF approximation. The preliminary experimental results show that there are many local optima and that, if an iterative process is used, there is no guarantee of obtaining the global optimum. Therefore, an iterative process, e.g. as used in partial differential equation solutions, might find a local optimum that is far from the global one.

Vaclav Skala, Samsul Ariffin Abdul Karim, Marek Zabran

Multiscale Modelling and Simulation

Frontmatter
Projective Integration for Moment Models of the BGK Equation

In this paper, we apply the projective integration method to moment models of the Boltzmann-BGK equation and investigate the numerical properties of the resulting scheme. Projective integration is an explicit, asymptotic-preserving scheme that is tailored to problems with a large spectral gap between fast and slow eigenvalues of the model. A spectral analysis of the moment model shows a clear spectral gap and reveals the multi-scale nature of the model. The new scheme overcomes the severe time step constraint of standard explicit schemes like the forward Euler scheme by performing a number of inner iterations and then extrapolating the solution forward in time. The projective integration scheme is non-intrusive and yields fast and accurate solutions, as demonstrated using a 1D shock tube test case. These observations open up many possibilities for further use of the scheme for high-resolution discretizations and different collision models.

Julian Koellermeier, Giovanni Samaey
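
The inner-step-plus-extrapolation structure described above fits in a few lines; the sketch below applies projective forward Euler to a stiff linear toy system rather than to the moment models of the paper, and all step sizes are illustrative.

```python
import numpy as np

def projective_euler(f, x, dt_inner, n_inner, dt_outer):
    """One projective forward-Euler step (sketch).

    Take n_inner small inner steps of size dt_inner to damp the fast
    modes, then extrapolate the remaining slow dynamics in one jump.
    """
    for _ in range(n_inner):
        x_prev = x
        x = x + dt_inner * f(x)
    slope = (x - x_prev) / dt_inner          # estimated slow derivative
    return x + (dt_outer - n_inner * dt_inner) * slope

# Stiff toy system: fast mode (rate -100) and slow mode (rate -1)
f = lambda x: np.array([-100.0, -1.0]) * x
x = np.array([1.0, 1.0])
for _ in range(20):                          # 20 outer steps of size 0.1
    x = projective_euler(f, x, dt_inner=0.005, n_inner=5, dt_outer=0.1)
print(x)    # fast mode damped away, slow mode ~ exp(-2)
```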
Open Boundary Modeling in Molecular Dynamics with Machine Learning

Molecular-continuum flow simulations combine molecular dynamics (MD) and computational fluid dynamics for multiscale considerations. A specific challenge in these simulations arises due to the “open MD boundaries” at the molecular-continuum interface: particles close to these boundaries do not feel any forces from outside which results in unphysical behavior and incorrect thermodynamic pressures. In this contribution, we apply neural networks to generate approximate boundary forces that reduce these artefacts. We train our neural network with force-distance pair values from periodic MD simulations and use this network to later predict boundary force contributions in non-periodic MD systems. We study different training strategies in terms of MD sampling and training for various thermodynamic state points and report on accuracy of the arising MD system. We further discuss computational efficiency of our approach in comparison to existing boundary force models.

Philipp Neumann, Niklas Wittmer
Microtubule Biomechanics and the Effect of Degradation of Elastic Moduli

The present study aims at quantifying the effect of mechanical degradation of microtubules on their electro-elastic response. A three-dimensional continuum-based hollow cylindrical domain of a microtubule has been considered in this work. A fully coupled electro-mechanical model has been developed for conducting the comparative analysis considering three different cases, viz., no degradation, 50% degradation and 90% degradation of the elastic modulus of the microtubule. The microtubule has been subjected to dynamic forces adopted from the loading-unloading conditions commonly used in nanoindentation experiments. The results show that the degradation of microtubules significantly influences their electro-elastic response when subjected to externally applied forces. The transient response of the model in terms of induced displacement, electric potential and volumetric strain has also been analyzed for different magnitudes of mechanical degradation. The modelling study presented here represents a more accurate electro-mechanical model, compared to the classical mechanical model, for quantifying the effects of mechanical transduction on microtubule biomechanics.

Sundeep Singh, Roderick Melnik
Formation of Morphogenetic Patterns in Cellular Automata

One of the most important problems in contemporary science, and especially in biology, is to reveal the mechanisms of pattern formation. On the level of biological tissues, patterns form due to interactions between cells. These interactions can be long-range if mediated by diffusive molecules, or short-range when associated with cell-to-cell contact sites. Mathematical studies of long-range interactions involve models based on differential equations, while short-range interactions are modelled using discrete-type models. In this paper, we use the cellular automata (CA) technique to study the formation of patterns due to short-range interactions. Namely, we use von Neumann cellular automata represented by a finite set of lattice cells whose states evolve according to transition rules. The cells can be considered as representing biological cells (which, in the simplest case, can only be in one of two different states), while the transition rules define changes in their states due to cell-to-cell contact interactions. In this model, we identify rules resulting in the formation of stationary periodic patterns. In our analysis, we distinguish rules which do not destroy preset patterns from those which cause pattern formation from random initial conditions. Also, we check whether the forming patterns are resistant to noise and analyse the time frame for their formation. Transition rules which allow the formation of stationary periodic patterns are then discussed in terms of pattern formation in biology.

Manan’Iarivo Rasolonjanahary, Bakhtier Vasiev
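
A minimal two-state von Neumann CA of the kind described, updated synchronously from a transition table over the cell's own state and its number of live neighbors, might look as follows; the particular rule and the periodic boundary are assumptions chosen for illustration, not rules identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def ca_step(grid, rule):
    """One synchronous update of a two-state CA on the von Neumann
    neighborhood; rule[s, n] is the next state of a cell in state s
    with n live neighbors (periodic boundary via np.roll)."""
    nbrs = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0)
            + np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    return rule[grid, nbrs]

# Illustrative transition table (an assumption, not from the paper):
# a cell is alive in the next step iff it has exactly 1 or 2 live neighbors.
rule = np.zeros((2, 5), dtype=int)
rule[:, [1, 2]] = 1

grid = (rng.random((40, 40)) < 0.5).astype(int)   # random initial state
for _ in range(100):
    grid = ca_step(grid, rule)
print(grid[:8, :8])    # inspect part of the evolved grid
```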
Multilevel Monte Carlo with Improved Correlation for Kinetic Equations in the Diffusive Scaling

In many applications, it is necessary to compute the time-dependent distribution of an ensemble of particles subject to transport and collision phenomena. Kinetic equations are PDEs that model such particles in a position-velocity phase space. In the low collisional regime, explicit particle-based Monte Carlo methods simulate these high dimensional equations efficiently, but, as the collision rate increases, these methods suffer from severe time-step constraints. In the high collision regime, asymptotic-preserving particle schemes are able to produce stable results. However, this stability comes at the cost of a bias in the computed results. In earlier work, the multilevel Monte Carlo method was used to reduce this bias by combining simulations with large and small time steps. This multilevel scheme, however, still has large variances when correlating fine and coarse simulations, which leads to sub-optimal multilevel performance. In this work, we present an improved correlation approach that decreases the variance when bridging the gap from large time steps to time steps of the order of magnitude of the collision rate. We further demonstrate that this reduced variance results in a sharply reduced simulation cost at the expense of a small bias.

Emil Løvbak, Bert Mortier, Giovanni Samaey, Stefan Vandewalle
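
The telescoping-sum structure that the multilevel approach exploits is easy to state in code; the sketch below substitutes an artificial level-dependent bias for the kinetic particle scheme, so only the estimator structure, not the physics or the improved correlation of the paper, is represented.

```python
import numpy as np

rng = np.random.default_rng(6)

def mlmc(sampler, levels, n_samples):
    """Multilevel Monte Carlo telescoping estimator (sketch).

    sampler(l, n) must return n correlated pairs (fine_l, coarse_{l-1})
    driven by shared randomness; at l = 0 the coarse part is zero.
    Then E[fine_L] = sum over l of E[fine_l - coarse_{l-1}].
    """
    return sum(np.mean(np.subtract(*sampler(l, n)))
               for l, n in zip(levels, n_samples))

# Toy: estimate E[X^2] for X ~ N(0,1); "level" mimics step refinement.
def sampler(l, n):
    x = rng.normal(0.0, 1.0, n)            # shared randomness per pair
    fine = x**2 * (1 - 2.0 ** -(l + 1))    # biased estimate at level l
    coarse = x**2 * (1 - 2.0 ** -l) if l > 0 else np.zeros(n)
    return fine, coarse

print(mlmc(sampler, range(5), [4000, 2000, 1000, 500, 250]))  # ~ 0.97
```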
Development and Application of the Statistically Similar Representative Volume Element for Numerical Modelling of Multiphase Materials

Modern aerospace, automotive and construction industries rely on materials with non-homogeneous properties, like composites or multiphase structures. Such materials offer many advantages, but they also require the application of advanced numerical models of exploitation conditions, which are of high importance for designers, architects and engineers. However, the computational cost of this approach is one of its most important problems, being very high and sometimes unacceptable. In this paper we propose an approach based on the Statistically Similar Representative Volume Element (SSRVE), which is generated by a combination of isogeometric analysis and optimization methods. The proposed solution significantly decreases the computational cost of complex multiscale simulations while maintaining high reliability of the solvers. The motivation of the work is described first in the introduction, followed by the general idea of the SSRVE as a modelling technique. Afterwards, examples of SSRVEs generated for two different cases are given and passed on to numerical simulations of exploitation conditions. The results obtained from these calculations are used in a model predicting gradients of material properties, which are crucial for the discussion of the uniqueness of the proposed solution. Additionally, some aspects of computational cost reduction are discussed.

Łukasz Rauch, Krzysztof Bzowski, Danuta Szeliga, Maciej Pietrzyk
A Heterogeneous Multi-scale Model for Blood Flow

This research focuses on developing a heterogeneous multi-scale model (HMM) for blood flow. Two separate scales are considered in this study: a Macro-scale, which models whole blood as a continuous fluid and tracks the transport of hematocrit profiles through an advection-diffusion solver, and a Micro-scale, which directly computes local diffusion coefficients and viscosities using cell-resolved simulations. The coupling between these two scales also includes a surrogate model, which stores the local viscosities and diffusion coefficients obtained from previously simulated local hematocrit and shear-rate combinations. As the HMM simulation progresses, fewer micro models need to be spawned, because the surrogate interpolates from previously computed viscosities and diffusion coefficients. The benefit of using the HMM method for blood flow is that, along with resolving the rheology of whole blood, it can be extended with other types of computational models to capture physiological processes like thrombus formation.

Benjamin Czaja, Gábor Závodszky, Alfons Hoekstra
Towards Accurate Simulation of Global Challenges on Data Centers Infrastructures via Coupling of Models and Data Sources

Accurate digital twinning of global challenges (GC) leads to computationally expensive coupled simulations. These simulations bring together not only different models, but also various sources of massive static and streaming data sets. In this paper, we explore ways to bridge the gap between traditional high performance computing (HPC) and data-centric computation in order to provide efficient technological solutions for accurate policy-making in the domain of GC. GC simulations in HPC environments give rise to a number of technical challenges related to coupling. Being intended to reflect the current and upcoming situation for policy-making, GC simulations extensively use recent streaming data coming from external data sources, which requires changing how traditional HPC systems operate. Another common challenge stems from the necessity to couple simulations and exchange data across data centers in GC scenarios. By introducing a generalized GC simulation workflow, this paper shows the commonality of the technical challenges for various GC and reflects on the approaches to tackle them in the HiDALGO project.

Sergiy Gogolenko, Derek Groen, Diana Suleimenova, Imran Mahmood, Marcin Lawenda, F. Javier Nieto de Santos, John Hanley, Milana Vučković, Mark Kröll, Bernhard Geiger, Robert Elsässer, Dennis Hoppe
Easing Multiscale Model Design and Coupling with MUSCLE 3

Multiscale modelling and simulation typically entails coupling multiple simulation codes into a single program. Doing this in an ad-hoc fashion tends to result in a tightly coupled, difficult-to-change computer program. This makes it difficult to experiment with different submodels, or to implement advanced techniques such as surrogate modelling. Furthermore, building the coupling itself is time-consuming. The MUltiScale Coupling Library and Environment version 3 (MUSCLE 3) aims to alleviate these problems. It allows the coupling to be specified in a simple configuration file, which specifies the components of the simulation and how they should be connected together. At runtime a simulation manager takes care of coordination of submodels, while data is exchanged over the network in a peer-to-peer fashion via the MUSCLE library. Submodels need to be linked to this library, but this is minimally invasive and restructuring simulation codes is usually not needed. Once operational, the model may be rewired or augmented by changing the configuration, without further changes to the submodels. MUSCLE 3 is developed openly on GitHub, and is available as Open Source software under the Apache 2.0 license.

Lourens E. Veen, Alfons G. Hoekstra

Quantum Computing Workshop

Frontmatter
Simulations of Quantum Finite Automata

This paper presents a Python library to simulate different kinds of quantum finite automata on a classical computer. The library also provides tools for language generation and visual representation of simulation results. We have conducted experiments to measure the time complexity of the simulation as a function of the automaton size, alphabet size and word length. Examples of library usage are also provided.

Gustaw Lippa, Krzysztof Makieła, Marcin Kuta
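
To give a flavor of what such a simulator does, here is a measure-once one-way QFA simulated with plain state vectors; the two-state rotation automaton and the acceptance rule are textbook illustrations, not components of the library described in the paper.

```python
import numpy as np

def mo_1qfa(word, unitaries, accepting, n_states=2):
    """Measure-once 1-way quantum finite automaton (sketch).

    unitaries: dict mapping each input symbol to a unitary matrix
    accepting: indices of accepting basis states
    Returns the probability of acceptance after reading the word.
    """
    psi = np.zeros(n_states, dtype=complex)
    psi[0] = 1.0                           # start in basis state |0>
    for symbol in word:
        psi = unitaries[symbol] @ psi      # one unitary per symbol
    return float(sum(abs(psi[q])**2 for q in accepting))

# One-letter automaton rotating by pi/8 per 'a'; after eight symbols the
# state is rotated by pi, so |0> is recovered up to phase: accept prob 1.
theta = np.pi / 8
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(mo_1qfa("a" * 8, {"a": U}, accepting=[0]))
```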
LibKet: A Cross-Platform Programming Framework for Quantum-Accelerated Scientific Computing

This paper introduces a new cross-platform programming framework for developing quantum-accelerated scientific computing applications and executing them on most of today’s cloud-based quantum computers and simulators. It makes use of C++ template meta-programming techniques to implement quantum algorithms as generic, platform-independent expressions, which get automatically synthesized into device-specific compute kernels upon execution. Our software framework supports concurrent and asynchronous execution of multiple quantum kernels via a CUDA-inspired stream concept.

Matthias Möller, Merel Schalkers
Generalized Quantum Deutsch-Jozsa Algorithm

Quantum computing aims to provide algorithms and hardware that allows for solving computational problems asymptotically faster than on classical computers. Yet, design of new, fast quantum algorithms is not straightforward, and the field faces high barriers of entry for traditional computer scientists. One of the main didactic examples used to introduce speedup resulting from quantum computing is the Deutsch-Jozsa algorithm for discriminating between constant and balanced functions. Here, we show a generalization of the Deutsch-Jozsa algorithm beyond balanced functions that can be used to further illustrate the design choices underpinning quantum algorithms.

Tomasz Arodz
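
For readers new to the algorithm, a state-vector simulation of the standard (non-generalized) Deutsch-Jozsa algorithm in its phase-oracle form is sketched below; measuring the all-zero outcome with probability one identifies a constant function, and probability zero a balanced one.

```python
import numpy as np

def deutsch_jozsa(f, n):
    """Deutsch-Jozsa on n qubits, simulated with dense state vectors.

    Phase-oracle form: |x> -> (-1)^f(x) |x>. The circuit is H^n, oracle,
    H^n; the returned value is the probability of measuring the all-zero
    bit string, which is 1 for constant f and 0 for balanced f.
    """
    N = 2**n
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    Hn = H1
    for _ in range(n - 1):
        Hn = np.kron(Hn, H1)                          # H on every qubit
    psi = np.zeros(N)
    psi[0] = 1.0
    psi = Hn @ psi                                    # uniform superposition
    psi = psi * np.array([(-1.0)**f(x) for x in range(N)])  # phase oracle
    psi = Hn @ psi
    return psi[0]**2

print(deutsch_jozsa(lambda x: 0, 3))        # constant  -> 1.0
print(deutsch_jozsa(lambda x: x & 1, 3))    # balanced  -> 0.0
```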
Revisiting Old Combinatorial Beasts in the Quantum Age: Quantum Annealing Versus Maximal Matching

This paper experimentally investigates the behavior of analog quantum computers, such as those commercialized by D-Wave, when confronted with instances of the maximum cardinality matching problem specifically designed to be hard to solve by means of simulated annealing. We benchmark a D-Wave "Washington" (2X) with 1098 operational qubits on various sizes of such instances and observe that for all but the most trivially small of these it fails to obtain an optimal solution. Thus, our results suggest that quantum annealing, at least as implemented in a D-Wave device, falls into the same pitfalls as simulated annealing, and therefore provide additional evidence suggesting that there exist polynomial-time problems that such a machine cannot solve efficiently to optimality.

Daniel Vert, Renaud Sirdey, Stéphane Louise
A Quantum Annealing Algorithm for Finding Pure Nash Equilibria in Graphical Games

We introduce Q-Nash, a quantum annealing algorithm for the NP-complete problem of finding pure Nash equilibria in graphical games. The algorithm consists of two phases. The first phase determines all combinations of best response strategies for each player using classical computation. The second phase finds pure Nash equilibria using a quantum annealing device by mapping the computed combinations to a quadratic unconstrained binary optimization formulation based on the Set Cover problem. We empirically evaluate Q-Nash on D-Wave’s Quantum Annealer 2000Q using different graphical game topologies. The results with respect to solution quality and computing time are compared to a Brute Force algorithm and the Iterated Best Response heuristic.

Christoph Roch, Thomy Phan, Sebastian Feld, Robert Müller, Thomas Gabor, Carsten Hahn, Claudia Linnhoff-Popien
Hybrid Quantum Annealing Heuristic Method for Solving Job Shop Scheduling Problem

Scheduling problems have attracted the attention of researchers and practitioners for several decades. The quality of the different methods developed to solve these problems on classical computers has been collected and compared in various benchmark repositories. Recently, quantum annealing has appeared as a promising approach to solve some scheduling problems. The goal of this paper is to check experimentally if and how this approach can be applied to solving a well-known benchmark of the classical Job Shop Scheduling Problem. We present the existing capabilities provided by the D-Wave 2000Q quantum annealing system in the light of this benchmark. We have tested the quantum annealing system's features experimentally and propose a new heuristic method as a proof of concept. In our approach we decompose the considered scheduling problem into a set of smaller optimization problems which fit better into the limited quantum hardware capacity. We have experimentally tuned various parameters of the limited fully-connected graphs of qubits available in the quantum annealing system for the heuristic. We also indicate how new improvements in the upcoming D-Wave quantum processor might potentially impact the performance of our approach.

Krzysztof Kurowski, Jan Wȩglarz, Marek Subocz, Rafał Różycki, Grzegorz Waligóra
Foundations for Workflow Application Scheduling on D-Wave System

Many scientific processes and applications can be represented in the standardized form of workflows. One of the key challenges related to managing and executing workflows is scheduling, which, as an NP-hard problem of exponential complexity, imposes limitations on the size of practically solvable problems. In this paper, we present a solution to the challenge of scheduling workflow applications with the help of the D-Wave quantum annealer. To the best of our knowledge, there is no other work directly addressing workflow scheduling using quantum computing. Our solution includes a transformation into a Quadratic Unconstrained Binary Optimization (QUBO) problem and a discussion of experimental results, as well as possible applications of the solution. For our experiments we choose four problem instances small enough to fit into the annealer's architecture. For two of our instances the quantum annealer finds the global optimum for scheduling. We thus show that it is possible to solve such problems with the help of the D-Wave machine and discuss the limitations of this approach.

Dawid Tomasiewicz, Maciej Pawlik, Maciej Malawski, Katarzyna Rycerz
A Hybrid Solution Method for the Multi-Service Location Set Covering Problem

The Multi-Service Location Set Covering Problem is an extension of the well-known Set Covering Problem. It arises in practical applications where a set of physical locations needs to be equipped with services to satisfy demand within a certain area, while minimizing costs. In this paper we formulate the problem as a Quadratic Unconstrained Binary Optimization (QUBO) problem, apply the hybrid framework of the D-Wave quantum annealer to solve it, and investigate the feasibility of this approach. To improve the often suboptimal initial solutions found on the D-Wave system, we develop a hybrid quantum/classical optimization algorithm that starts from the seed solution and iteratively creates small subproblems, which are solved more efficiently on the D-Wave and often still converge to feasible and improved solutions of the original problem. Finally, we suggest some opportunities for increasing the accuracy and performance of our algorithm.

Irina Chiscop, Jelle Nauta, Bert Veerman, Frank Phillipson
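
To illustrate the flavor of QUBO penalty formulations used throughout these annealing papers, the sketch below encodes a tiny exact-cover instance (a simplified relative of set covering, without the costs and multi-service structure of this paper) and minimizes it by brute force instead of on a D-Wave.

```python
import numpy as np
from itertools import product

# Tiny exact-cover instance: elements {0,1,2,3}, candidate subsets below.
subsets = [{0, 1}, {2, 3}, {0, 2}, {1, 3}, {3}]
n, n_elems = len(subsets), 4

# Build Q from the penalty sum over elements e of (1 - sum_j x_j)^2,
# where j ranges over subsets containing e; expanding with x^2 = x gives
# -1 on the diagonal per covered element and +2 couplings between
# subsets sharing an element (the constant offset is dropped).
Q = np.zeros((n, n))
for e in range(n_elems):
    J = [j for j in range(n) if e in subsets[j]]
    for a in J:
        Q[a, a] -= 1
        for b in J:
            if b > a:
                Q[a, b] += 2

best = min(product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
# (0, 0, 1, 1, 0): subsets {0,2} and {1,3} cover every element exactly
# once (one of two optimal covers of this instance).
print(best)
```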
New Hybrid Quantum Annealing Algorithms for Solving Vehicle Routing Problem

We introduce new hybrid algorithms, DBSCAN Solver and Solution Partitioning Solver, which use quantum annealing for solving the Vehicle Routing Problem (VRP) and its practical variant, the Capacitated Vehicle Routing Problem (CVRP). Both algorithms contain important classical components, but we also present two other algorithms, Full QUBO Solver and Average Partitioning Solver, which can be run only on a quantum processing unit (without a CPU) and served as prototypes that helped us develop better hybrid approaches. To validate our methods, we ran comprehensive tests using D-Wave's Leap framework on well-established benchmark test cases as well as on our own test scenarios built on realistic road networks. We also compared our new quantum and hybrid methods with classical algorithms, namely well-known metaheuristics for solving VRP and CVRP. The experiments indicate that our hybrid methods give promising results and are able to find solutions of similar or even better quality than the tested classical algorithms.

Michał Borowski, Paweł Gora, Katarzyna Karnas, Mateusz Błajda, Krystian Król, Artur Matyjasek, Damian Burczyk, Miron Szewczyk, Michał Kutwin
Multi-agent Reinforcement Learning Using Simulated Quantum Annealing

With quantum computers still under heavy development, numerous quantum machine learning algorithms have already been proposed for both gate-based quantum computers and quantum annealers. Recently, a quantum annealing version of a reinforcement learning algorithm for grid traversal using one agent was published. We extend this work, which is based on quantum Boltzmann machines, by allowing for any number of agents. We show that the use of quantum annealing can improve the learning compared to classical methods. We do this both by means of actual quantum hardware and by simulated quantum annealing.

Niels M. P. Neumann, Paolo B. U. L. de Heer, Irina Chiscop, Frank Phillipson
Quantum Hopfield Neural Networks: A New Approach and Its Storage Capacity

At the interface between quantum computing and machine learning, the field of quantum machine learning aims to improve classical machine learning algorithms with the help of quantum computers. Examples are Hopfield neural networks, which can store patterns and thereby are used as associative memory. However, the storage capacity of such classical networks is limited. In this work, we present a new approach to quantum Hopfield neural networks with classical inputs and outputs. The approach is easily extendable to quantum inputs or outputs. Performance is evaluated by three measures of error rates, introduced in this paper. We simulate our approach and find increased storage capacity compared to classical networks for small systems. We furthermore present classical results that indicate an increased storage capacity for quantum Hopfield neural networks in large systems as well.

Nicholas Meinhardt, Niels M. P. Neumann, Frank Phillipson
A Variational Algorithm for Quantum Neural Networks

Quantum Computing leverages the laws of quantum mechanics to build computers endowed with tremendous computing power. The field is attracting ever-increasing attention from both the academic and private sectors, as testified by the recent demonstration of quantum supremacy in practice. However, the intrinsic restriction to linear operations significantly limits the range of relevant use cases for the application of Quantum Computing. In this work, we introduce a novel variational algorithm for a quantum Single Layer Perceptron. Thanks to the universal approximation theorem, and given that the number of hidden neurons scales exponentially with the number of qubits, our framework opens up the possibility of approximating any function on quantum computers. Thus, the proposed approach produces a model with substantial descriptive power and widens the horizon of potential applications already in the NISQ era, especially those related to Quantum Artificial Intelligence. In particular, we design a quantum circuit to perform linear combinations in superposition and discuss adaptations to classification and regression tasks. After this theoretical investigation, we provide practical implementations using various simulation environments. Finally, we test the proposed algorithm on synthetic data exploiting both simulators and real quantum devices.

Antonio Macaluso, Luca Clissa, Stefano Lodi, Claudio Sartori
Imperfect Distributed Quantum Phase Estimation

In the near term, the number of qubits in quantum computers will be limited to a few hundred. Therefore, problems are often too large and complex to be run on quantum devices. By distributing quantum algorithms over different devices, larger problem instances can be run. This distribution, however, often requires operations between two qubits of different devices. Using shared entangled states and classical communication, these operations between different devices can still be performed. In the ideal case of perfect fidelity, distributed quantum computing is a solution for achieving scalable quantum computers with a larger number of qubits. In this work we consider the effects on the output fidelity of a quantum algorithm when using noisy shared entangled states. We consider the quantum phase estimation algorithm and present two distribution schemes for it. We give the resource requirements for both and show that using less noisy shared entangled states results in a higher overall fidelity.

Niels M. P. Neumann, Roy van Houte, Thomas Attema
Optimal Representation of Quantum Channels

This work shows an approach to reducing the dimensionality of matrix representations of quantum channels. It is achieved by finding a basis of the cone of positive semidefinite matrices which represent quantum channels. This is then implemented in the Julia programming language as part of the QuantumInformation.jl package.

Paulina Lewandowska, Ryszard Kukulski, Łukasz Pawela
Perturbation of the Numerical Range of Unitary Matrices

In this work we show how to approach the problem of manipulating the numerical range of a unitary matrix, a task with far-reaching impact on the study of discrimination of quantum measurements. We achieve this manipulation by introducing a method which allows us to find a unitary matrix whose numerical range contains the origin while, at the same time, the distance between the unitary matrix and its perturbation is relatively small in a given metric.

Ryszard Kukulski, Paulina Lewandowska, Łukasz Pawela
Design of Short Codes for Quantum Channels with Asymmetric Pauli Errors

One of the main problems in quantum information systems is the presence of errors due to noise. Many quantum error correcting codes have been designed to deal with generic errors. In this paper we construct new stabilizer codes able to correct a given number e_g of generic Pauli X, Y and Z errors, plus a number e_Z of Pauli errors of a specified type (e.g., Z errors). These codes can be of interest when the quantum channel is asymmetric, i.e., when some types of error occur more frequently than others. For example, we design a [[9, 1]] quantum error correcting code able to correct up to one generic qubit error plus one Z error in arbitrary positions. According to a generalized version of the quantum Hamming bound, it is the shortest code with this error correction capability.

Marco Chiani, Lorenzo Valentini
Simulation Methodology for Electron Transfer in CMOS Quantum Dots

The construction of quantum computer simulators requires advanced software which can capture the most significant characteristics of the quantum behavior and quantum states of qubits in such systems. Additionally, one needs to provide valid models for the description of the interface between classical circuitry and quantum core hardware. In this study, we model electron transport in semiconductor qubits based on an advanced CMOS technology. Starting from 3D simulations, we demonstrate an order reduction and the steps necessary to obtain ordinary differential equations on probability amplitudes in a multi-particle system. We compare numerical and semi-analytical techniques concluding this paper by examining two case studies: the electron transfer through multiple quantum dots and the construction of a Hadamard gate simulated using a numerical method to solve the time-dependent Schrödinger equation and the tight-binding formalism for a time-dependent Hamiltonian.

Andrii Sokolov, Dmytro Mishagli, Panagiotis Giounanlis, Imran Bashir, Dirk Leipold, Eugene Koskin, Robert Bogdan Staszewski, Elena Blokhina
Backmatter
Title
Computational Science – ICCS 2020
Edited by
Dr. Valeria V. Krzhizhanovskaya
Dr. Gábor Závodszky
Michael H. Lees
Prof. Jack J. Dongarra
Prof. Dr. Peter M. A. Sloot
Sérgio Brissos
João Teixeira
Copyright Year
2020
Electronic ISBN
978-3-030-50433-5
Print ISBN
978-3-030-50432-8
DOI
https://doi.org/10.1007/978-3-030-50433-5
