
2022 | Book

Large-Scale Scientific Computing

13th International Conference, LSSC 2021, Sozopol, Bulgaria, June 7–11, 2021, Revised Selected Papers


About this Book

This book constitutes revised selected papers from the 13th International Conference on Large-Scale Scientific Computing, LSSC 2021, which was held in Sozopol, Bulgaria, during June 7–11, 2021.
The 60 papers included in this book were carefully reviewed and selected from a total of 73 submissions. The volume also includes two invited talks in full paper length. The papers were organized in topical sections as follows: Fractional diffusion problems: numerical methods, algorithms and applications; large-scale models: numerical methods, parallel computations and applications; application of metaheuristics to large-scale problems; advanced discretizations and solvers for coupled systems of partial differential equations; optimal control of ODEs, PDEs and applications; tensor and matrix factorization for big-data analysis; machine learning and model order reduction for large scale predictive simulations; HPC and big data: algorithms and applications; and contributed papers.

Table of Contents

Frontmatter

Invited Papers

Frontmatter
Random-Walk Based Approximate k-Nearest Neighbors Algorithm for Diffusion State Distance

Diffusion State Distance (DSD) is a data-dependent metric that compares data points using a data-driven diffusion process and provides a powerful tool for learning the underlying structure of high-dimensional data. While finding the exact nearest neighbors in the DSD metric is computationally expensive, in this paper, we propose a new random-walk based algorithm that empirically finds approximate k-nearest neighbors accurately in an efficient manner. Numerical results for real-world protein-protein interaction networks are presented to illustrate the efficiency and robustness of the proposed algorithm. The set of approximate k-nearest neighbors performs well when used to predict proteins’ functional labels.

Lenore J. Cowen, Xiaozhe Hu, Junyuan Lin, Yue Shen, Kaiyi Wu
Model Reduction for Large Scale Systems

Projection-based model order reduction has become a mature technique for simulation of large classes of parameterized systems. However, several challenges remain for problems where the solution manifold of the parameterized system cannot be well approximated by linear subspaces. While the online efficiency of these model reduction methods is very convincing for problems with a rapid decay of the Kolmogorov n-width, there are still major drawbacks and limitations. Most importantly, the construction of the reduced system in the offline phase is extremely CPU-time and memory consuming for large-scale and multiscale systems. For practical applications, it is thus necessary to derive model reduction techniques that do not rely on a classical offline/online splitting but allow for more flexibility in the usage of computational resources. A promising approach in this respect is model reduction with adaptive enrichment. In this contribution we investigate Petrov-Galerkin based model reduction with adaptive basis enrichment within a trust-region approach for the solution of multiscale and large-scale PDE-constrained parameter optimization.

Tim Keil, Mario Ohlberger

Fractional Diffusion Problems: Numerical Methods, Algorithms and Applications

Frontmatter
Constructions of Second Order Approximations of the Caputo Fractional Derivative

In the present paper we study the properties of the weights of approximations of the second derivative and the Caputo fractional derivative. The approximations of the Caputo derivative are obtained by approximating the second derivative in the expansion formula of the L1 approximation. We show that the properties of their weights are similar to those of the weights of the L1 approximation of the Caputo derivative when a suitable choice of the parameters of the approximations is used. Experimental results from applying the approximations to the numerical solution of the two-term ordinary fractional differential equation are given in the paper.

Stoyan Apostolov, Yuri Dimitrov, Venelin Todorov
Parameter Identification Approach for a Fractional Dynamics Model of Honeybee Population

In recent years, honeybee losses have been reported in many countries, such as the USA, China, Israel, Turkey, and in Europe, especially Bulgaria [1]. In order to investigate the colony collapse, many differential equation models have been proposed. Fractional derivatives incorporate the history of the honeybee population dynamics. We study numerically the inverse problem of parameter identification in a model with a Caputo differential operator. We use a gradient method of minimizing a quadratic cost functional. Numerical tests with realistic data are performed and discussed.

Slavi G. Georgiev, Lubin G. Vulkov
A Newton’s Method for Best Uniform Polynomial Approximation

We present a novel algorithm, inspired by the recent BRASIL algorithm [10] for rational approximation, for best uniform polynomial approximation based on a formulation of the problem as a nonlinear system of equations and barycentric interpolation. We use results on derivatives of interpolating polynomials with respect to interpolation nodes to compute the Jacobian matrix. The resulting method is fast and stable, can deal with singularities and exhibits superlinear convergence in a neighborhood of the solution.

Irina Georgieva, Clemens Hofreither
Reduced Sum Implementation of the BURA Method for Spectral Fractional Diffusion Problems

The numerical solution of spectral fractional diffusion problems in the form $\mathcal{A}^\alpha u = f$ is studied, where $\mathcal{A}$ is a selfadjoint elliptic operator in a bounded domain $\varOmega \subset \mathbb{R}^d$, and $\alpha \in (0,1]$. The finite difference approximation of the problem leads to the system $\mathbb{A}^\alpha \mathbf{u} = \mathbf{f}$, where $\mathbb{A}$ is a sparse, symmetric and positive definite (SPD) matrix, and $\mathbb{A}^\alpha$ is defined by its spectral decomposition. In the case of finite element approximation, $\mathbb{A}$ is SPD with respect to the dot product associated with the mass matrix. The BURA method is introduced by the best uniform rational approximation of degree $k$ of $t^\alpha$ in $[0,1]$, denoted by $r_{\alpha,k}$. Then the approximation $\mathbf{u}_k \approx \mathbf{u}$ has the form $\mathbf{u}_k = c_0 \mathbf{f} + \sum_{i=1}^k c_i(\mathbb{A} - \widetilde{d}_i \mathbb{I})^{-1}\mathbf{f}$, with $\widetilde{d}_i < 0$, thus requiring the solution of $k$ auxiliary linear systems with sparse SPD matrices. The BURA method has almost optimal computational complexity, assuming that an optimal PCG iterative solution method is applied to the auxiliary linear systems. The presented analysis shows that the absolute values of the first poles $\{\widetilde{d}_i\}_{i=1}^{k'}$ can be extremely large. In such a case the condition number of $\mathbb{A} - \widetilde{d}_i \mathbb{I}$ is practically equal to one. Obviously, such systems do not need preconditioning. The next question is whether we can replace their solution by directly multiplying $\mathbf{f}$ by $-c_i/\widetilde{d}_i$.
Comparative analysis of numerical results is presented as a proof-of-concept for the proposed RS-BURA method.

Stanislav Harizanov, Nikola Kosturski, Ivan Lirkov, Svetozar Margenov, Yavor Vutov
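The reduced-sum idea admits a small numerical sketch. The coefficients below are made up for illustration (the real $c_i$ and $\widetilde{d}_i$ come from the best uniform rational approximation $r_{\alpha,k}$); the point is only that a pole with very large $|\widetilde{d}_i|$ makes $\mathbb{A} - \widetilde{d}_i\mathbb{I}$ essentially a scaled identity, so the solve can be replaced by the scalar multiplication $-c_i/\widetilde{d}_i$:

```python
import numpy as np

# Hypothetical BURA-style coefficients, for illustration only
c = np.array([0.5, 0.3, 0.2, 0.1])    # c_0 .. c_3 (made up)
d = np.array([-2.0, -50.0, -1e8])     # poles d_1 .. d_3, d_i < 0 (made up)

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian stencil
A *= (n + 1) ** 2
f = np.ones(n)

# Full BURA sum: u_k = c_0 f + sum_i c_i (A - d_i I)^{-1} f
u_full = c[0] * f + sum(
    ci * np.linalg.solve(A - di * np.eye(n), f) for ci, di in zip(c[1:], d)
)

# Reduced sum: when |d_i| is huge, A - d_i I is close to -d_i I,
# so the solve is replaced by the scalar multiplication (-c_i / d_i) f.
threshold = 1e6
u_rs = c[0] * f
for ci, di in zip(c[1:], d):
    if abs(di) > threshold:
        u_rs += (-ci / di) * f
    else:
        u_rs += ci * np.linalg.solve(A - di * np.eye(n), f)

rel_err = np.linalg.norm(u_full - u_rs) / np.linalg.norm(u_full)
print(rel_err)  # tiny: the dropped solve was essentially a scaling
```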
First-Order Reaction-Diffusion System with Space-Fractional Diffusion in an Unbounded Medium

Diffusion in porous media, such as biological tissues, is characterized by deviations from the usual Fick’s diffusion laws, which can lead to space-fractional diffusion. The paper considers a simple case of a reaction-diffusion system with two spatial compartments – a proximal one of finite width having a source; and a distal one, which is extended to infinity and where the source is not present but there is a first order decay of the diffusing species. The diffusion term is taken to be proportional to the Riesz fractional Laplacian operator. It is demonstrated that the steady state of the system can be solved in terms of the Hankel transform involving Bessel functions. Methods for numerical evaluation of the resulting integrals are implemented. It is demonstrated that the convergence of the Bessel integrals could be accelerated using standard techniques for sequence acceleration.

Dimiter Prodanov
Performance Study of Hierarchical Semi-separable Compression Solver for Parabolic Problems with Space-Fractional Diffusion

Equations involving fractional diffusion operators are used to model anomalous processes in which the Brownian motion hypotheses are violated. In this work we utilize the fractional Laplacian operator, defined through the Riesz potential and homogeneous Dirichlet boundary conditions. We explore a parabolic problem in a model square domain, using a backward Euler scheme for the discretization in time. The resulting series of systems of linear algebraic equations are dense and computationally expensive to solve. When utilizing traditional Gaussian elimination, the computational complexity is $O(n^3)$ for the LU factorization and $O(n^2)$ for each time step, where $n$ is the number of unknowns. This can be improved by using Hierarchical Semi-Separable (HSS) compression. With a solver from STRUMPACK, the computational complexity is reduced to $O(n^2 r)$ for the factorization and $O(nr)$ for each time step, where $r$ is the maximum off-diagonal rank of the matrix. The presented numerical experiments show the advantages of the HSS method for the examined problem.

Dimitar Slavchev, Svetozar Margenov
Numerical Solution of Non-stationary Problems with a Rational Approximation for Fractional Powers of the Operator

The numerical solutions of the Cauchy problems for first- and second-order differential-operator equations are discussed. The equation of the problem includes a fractional power of a self-adjoint positive operator. In computational practice, rational approximations of the fractional power operator are widely used. In this work, we construct special time approximations for problems with fractional power operators: the transition to a new time level requires the solution of a set of standard problems for the operator and not for its fractional power. Our approach utilizes stable splitting schemes with weight parameters for the additive representation of the rational approximation of the fractional power operator.

Petr N. Vabishchevich

Large-Scale Models: Numerical Methods, Parallel Computations and Applications

Frontmatter
An Exact Schur Complement Method for Time-Harmonic Optimal Control Problems

By use of Fourier time series expansions in an angular frequency variable, time-harmonic optimal control problems constrained by a linear differential equation decouple for the different frequencies. Hence, for the analysis of a solution method one can consider the frequency as a parameter. There are three variables to be determined: the state solution, the control variable, and the adjoint variable. The first-order optimality conditions lead to a three-by-three block matrix system from which the adjoint variable can be eliminated. For the resulting two-by-two block system, in this paper we study a factorization method involving an exact Schur complement and illustrate the performance of an inexact version of it.

Owe Axelsson, Dalibor Lukáš, Maya Neytcheva
On the Consistency Order of Runge–Kutta Methods Combined with Active Richardson Extrapolation

Passive and active Richardson extrapolations are robust devices to increase the rate of convergence of time integration methods. While the order of convergence is shown to increase by one under rather natural smoothness conditions if the passive Richardson extrapolation is used, for the active Richardson extrapolation the increase of the order has not been generally proven. It is known that the Lipschitz property of the right-hand side function of the differential equation to be solved yields convergence of order p if the method is consistent in order p. In this paper it is shown that the active Richardson extrapolation increases the order of consistency by one when the underlying method is any Runge–Kutta method of order $p = 1, 2$, or 3.

Teshome Bayleyegn, István Faragó, Ágnes Havasi
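As an illustration of the general principle (not of the authors' analysis), active Richardson extrapolation with the explicit Euler method ($p = 1$) on the test problem $y' = -y$ can be sketched; halving the stepsize reduces the global error by roughly a factor of four, i.e. the extrapolated scheme behaves as a second-order method:

```python
import math

def euler_step(f, t, y, h):
    # One step of the explicit Euler method (order p = 1)
    return y + h * f(t, y)

def active_richardson_step(f, t, y, h, p=1):
    # Active Richardson extrapolation: the combined value is propagated.
    w = euler_step(f, t, y, h)                         # one step of size h
    z = euler_step(f, t + h / 2,
                   euler_step(f, t, y, h / 2), h / 2)  # two steps of size h/2
    return (2 ** p * z - w) / (2 ** p - 1)

def solve(h):
    f = lambda t, y: -y          # test problem y' = -y, y(0) = 1
    t, y = 0.0, 1.0
    while t < 1.0 - 1e-12:
        y = active_richardson_step(f, t, y, h)
        t += h
    return abs(y - math.exp(-1.0))  # global error at t = 1

e1, e2 = solve(0.01), solve(0.005)
print(e1 / e2)  # close to 4: the order has increased from 1 to 2
```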
Study the Recurrence of the Dominant Pollutants in the Formation of AQI Status over the City of Sofia for the Period 2013–2020

Recently it became possible to acquire and adapt the most up-to-date models of local atmospheric dynamics (WRF), of transport and transformation of air pollutants (CMAQ), and the emission model SMOKE. This gave the opportunity to conduct extensive studies of the atmospheric composition climate of the country at a fully competitive modern level. Using computer simulations, an ensemble was created that is sufficiently exhaustive and representative to allow reliable conclusions about atmospheric composition, covering typical and extreme situations with their specific spatial and temporal variability. On this basis, a statistically significant ensemble of corresponding Air Quality Indices (AQI) was calculated, and their climate, i.e. typical recurrence and spatial and temporal variability over the territory of the country, was constructed. The Air Quality (AQ) impact on human health and quality of life is evaluated in terms of the AQI, which gives an integrated assessment of the impact of pollutants and directly measures the effects of AQ on human health. All AQI evaluations are based on air pollutant concentrations obtained from numerical modelling, which makes it possible to reveal the spatial and temporal distribution and behavior of the AQI status. The presented results allow us to follow the highest recurrence of the indices for the whole period and seasonally, and to analyze the possible reasons for high values in the Moderate, High and Very High bands.

Ivelina Georgieva, Georgi Gadzhev, Kostadin Ganev
One Solution of Task with Internal Flow in Non-uniform Fluid Using CABARET Method

In this novel study a new problem of flow in a density-stratified saline liquid is formulated. In addition, the CABARET method is applied to an incompressible liquid with an equation of state that depends on salinity and temperature. Earlier, the authors solved the problem of the collapse of a homogeneously mixed spot in a vertically stratified water column. Tests were verified on 3D problems with the following initial conditions: at time zero, a bounded homogeneous water column is inserted into the density-stratified liquid with nonzero salinity; after some time, the boundaries of the water column are removed. The corresponding mathematical model consists of a free-surface equation, the Navier-Stokes equations for an incompressible liquid, and salinity transport and thermal conductivity equations. To verify this new model, a test with flow around an inverted-parabola obstacle is considered in several modes: subcritical, transcritical and supercritical. The obtained simulation results are compared with the analytical solution in the shallow-water approximation. A distinctive feature of such tests is the reproducibility of stable modes and the availability of a wide range of examples. To check the correctness of the simulation, an additional quasi-two-dimensional test problem of the collapse of a homogeneously mixed spot in a water column with vertical stratification is considered.

Valentin A. Gushchin, Vasilii G. Kondakov
Behavior and Scalability of the Regional Climate Model RegCM4 on High Performance Computing Platforms

The RegCM is a regional climate model used in many studies. There are simulation runs in different domains, time periods, and regions of the world on all continents. The research work in our group is related to the historical and future climate and its influence on the human sensation over Southeast Europe. We used the model versions 4.4 and 4.7. The main model components are the initial and boundary condition module, the physics processes parametrization module, and the dynamical core. For the latter, we used the default hydrostatic option corresponding to the MM5 model dynamical core. We ran simulations with different combinations of parametrization schemes on the Bulgarian supercomputer Avitohol. The newer versions of the model have an additional option for using a non-hydrostatic dynamical core. Running model simulations with different input configurations depends highly on the available computing resources. Several main factors influence the simulation times and storage requirements. These can vary considerably depending on the particular set of input parameters, domain area, land cover, processing core characteristics, and the number of cores in parallel processing simulations. The objective of this study is to analyse the performance of the RegCM model with the hydrostatic and non-hydrostatic cores on the High-Performance Computing platform Avitohol.

Vladimir Ivanov, Georgi Gadzhev
Quantum Effects on 1/2[111] Edge Dislocation Motion in Hydrogen-Charged Fe from Ring-Polymer Molecular Dynamics

Hydrogen-influenced change of dislocation mobility is a possible cause of hydrogen embrittlement (HE) in metals and alloys. A comprehensive understanding of HE requires a more detailed description of dislocation motion in combination with the diffusion and trapping of H atoms. A serious obstacle to the atomistic modelling of a H interstitial in Fe is associated with the role nuclear quantum effects (NQEs) might play even at room temperature, due to the small mass of the proton. Standard molecular dynamics (MD) implementations offer a rather poor approximation for such investigations, as the nuclei are treated as classical particles. Instead, we reach for ring-polymer MD (RPMD), the current state-of-the-art method to include NQEs in the calculations, which generates a quantum-mechanical ensemble of interacting particles by using MD in an extended phase space. Here we report RPMD simulations of quantum effects on 1/2[111] edge dislocation motion in H-charged Fe. The simulation results indicate that H atoms are more strongly confined to the dislocation core and that a longer relaxation time is necessary for the edge dislocation to break away from the H atmosphere. The stronger interaction between the dislocation and H atoms trapped in the core, resulting from NQEs, leads to the formation of jogs along the dislocation line which reduce edge dislocation mobility in H-charged Fe.

Ivaylo Katzarov, Nevena Ilieva, Ludmil Drenchev
Degeneracy of Tetrahedral Partitions Produced by Randomly Generated Red Refinements

In this paper, we survey some of our regularity results on a red refinement technique for unstructured face-to-face tetrahedral partitions. This technique subdivides each tetrahedron into 8 subtetrahedra with equal volumes. However, in contrast to triangular partitions, the red refinement strategy is not uniquely determined for the case of tetrahedral partitions which leads to the fact that randomly performed red refinements may produce degenerating tetrahedra which violate the maximum angle condition. Such tetrahedra are often undesirable in practical calculations and analysis. Thus, a special attention is needed when applying red refinements in a straightforward manner.

Sergey Korotov, Michal Křížek
Effluent Recirculation for Contaminant Removal in Constructed Wetlands Under Uncertainty: A Stochastic Numerical Approach Based on Monte Carlo Methodology

The problem of the alternative operational technique concerning effluent re-circulation in Horizontal Subsurface Flow Constructed Wetlands (HSF CW), and the possibility of this technique to remove pollutants efficiently under uncertainty, is investigated numerically in a stochastic way. Uncertain-but-bounded input parameters are considered as interval parameters with known upper and lower bounds. This uncertainty is treated by using the Monte Carlo method. A typical pilot case of an HSF CW concerning Biochemical Oxygen Demand (BOD) removal is presented and numerically investigated. The purpose of the present study is to compare the relevant numerical results obtained under parameter uncertainty and concerning HSF CW operation with and without the effluent recirculation technique.

Konstantinos Liolios, Georgios Skodras, Krassimir Georgiev, Ivan Georgiev
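The interval-parameter Monte Carlo treatment can be illustrated with a toy first-order BOD decay model (the bounds and the model below are illustrative assumptions, not the pilot HSF CW data of the paper): each uncertain-but-bounded parameter is sampled uniformly from its interval and the output range is estimated empirically:

```python
import math
import random

random.seed(0)

# Uncertain-but-bounded inputs as intervals [lo, hi]; illustrative values
k_bounds = (0.3, 0.7)   # first-order BOD removal rate, 1/day
t_bounds = (2.0, 4.0)   # hydraulic residence time, days
c_in = 100.0            # inlet BOD concentration, mg/L

def outlet_bod(k, t):
    # Toy first-order decay model: C_out = C_in * exp(-k t)
    return c_in * math.exp(-k * t)

# Monte Carlo over the interval parameters: sample uniformly within bounds
samples = [
    outlet_bod(random.uniform(*k_bounds), random.uniform(*t_bounds))
    for _ in range(10_000)
]
print(min(samples), max(samples))  # empirical output range under uncertainty
```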
Sensitivity Study of Large-Scale Air Pollution Model Based on Modifications of the Latin Hypercube Sampling Method

In this paper, various modifications of the Latin Hypercube Sampling algorithm have been used in order to evaluate the sensitivity of the output results of an environmental model for some dangerous air pollutants with respect to the emission levels and some chemical reaction rates. The importance of environmental security is growing rapidly, and it is at present a significant topic of interest all over the world. Accordingly, environmental modeling has a very high priority in various scientific fields. By identifying the major chemical reactions that affect the behavior of the system, specialists in various fields of application will be able to obtain valuable information about improving the model, which in turn will increase the reliability and sustainability of forecasts.

Tzvetan Ostromsky, Venelin Todorov, Ivan Dimov, Rayna Georgieva, Zahari Zlatev, Stoyan Poryazov
Sensitivity Operator-Based Approach to the Interpretation of Heterogeneous Air Quality Monitoring Data

The joint use of atmospheric chemistry transport and transformation models and observational data makes it possible to solve a wide range of environment protection tasks, including pollution sources identification and reconstruction of the pollution fields in unobserved areas. Seamless usage of different measurement data types can improve the accuracy of air quality forecasting systems. The approach considered is based on sensitivity operators and adjoint equations solutions ensembles. The ensemble construction allows for the natural combination of various measurement data types in one operator equation. In the paper, we consider combining image-type, integral-type, pointwise, and time series-type measurement data for the air pollution source identification. The synergy effect is numerically illustrated in the inverse modeling scenario for the Baikal region.

Alexey Penenko, Vladimir Penenko, Elena Tsvetova, Alexander Gochakov, Elza Pyanova, Viktoriia Konopleva
Using the Cauchy Criterion and the Standard Deviation to Evaluate the Sustainability of Climate Simulations

In simulations of climate change with climate models, the question arises as to whether the accepted 30-year period is sufficient for the model to produce sustainable results. We look for an answer to this question using the Cauchy criterion and the idea of saturation suggested by Lorenz, using as a sufficient condition for saturation the requirement that the standard deviation should not increase from year to year. The proposed method is illustrated by analysis of time series of observations of the temperature at 2 m and of three global climate models for the same parameter. The measured data are for the Cherni Vrah peak in Bulgaria, and the modelled data, which concern the projected future climate up to the year 2100, are taken from the Climate Data Store of the Copernicus project. The example of the observations shows the instability of meteorological processes since the year 2000. All three global climate models, forced with the CMIP5 RCP4.5 scenario, show a lack of sustainability mainly in the polar regions for the period 2071–2100.

Valery Spiridonov, Hristo Chervenkov
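The saturation condition (the standard deviation of the accumulating sample should not increase from year to year) can be sketched as follows, on synthetic annual series rather than the Cherni Vrah or CMIP5 data; the series, the window start and the tolerance are all illustrative assumptions:

```python
import math
import statistics

# Synthetic annual 2 m temperatures: a stationary series around 8 deg C
# and a strongly drifting one (illustrative, not the observational data)
stationary = [8.0 + 0.7 * math.sin(1.7 * k) for k in range(60)]
drifting = [8.0 + 0.5 * k for k in range(60)]

def saturated(series, start=15, tol=0.1):
    """Saturation in the sense above: as years are added, the running
    standard deviation must not grow by more than `tol` per added year."""
    stds = [statistics.stdev(series[:n]) for n in range(start, len(series) + 1)]
    return all(b - a <= tol for a, b in zip(stds, stds[1:]))

print(saturated(stationary), saturated(drifting))  # True False
```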
Multidimensional Sensitivity Analysis of an Air Pollution Model Based on Modifications of the van der Corput Sequence

An important issue when large-scale mathematical models are used to support decision makers is their reliability. Sensitivity analysis has a crucial role during the process of validating computational models to ensure their accuracy and reliability. The focus of the present work is to perform global sensitivity analysis of a large-scale mathematical model describing remote transport of air pollutants. The plain Monte Carlo approach using the well-known van der Corput sequence with various bases ($b = 3, 5, 6, 10$) has been applied for multidimensional integration to provide the sensitivity studies under consideration. Sensitivity studies of the model output were performed in two directions: the sensitivity of the ammonia mean monthly concentrations with respect to the anthropogenic emissions variation, and the sensitivity of the ozone concentration values with respect to the rate variation of several chemical reactions. The numerical results show that increasing the base leads to a higher accuracy of the estimated quantities in most of the case studies, but the results are comparable with those achieved using the standard van der Corput sequence with base 2.

Venelin Todorov, Ivan Dimov, Rayna Georgieva, Tzvetan Ostromsky, Zahari Zlatev, Stoyan Poryazov
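The van der Corput sequence used above is straightforward to generate: the n-th element is obtained by reflecting the base-b digits of n about the radix point. A minimal sketch for an arbitrary base:

```python
def van_der_corput(n, base=2):
    """n-th element of the van der Corput sequence in the given base:
    reflect the base-b digit expansion of n about the radix point."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)
        q += digit * bk     # digit contributes at the mirrored position
        bk /= base
    return q

# First elements in base 2 and base 3
print([van_der_corput(n, 2) for n in range(1, 5)])  # [0.5, 0.25, 0.75, 0.125]
print([van_der_corput(n, 3) for n in range(1, 5)])
```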
Running an Atmospheric Chemistry Scheme from a Large Air Pollution Model by Using Advanced Versions of the Richardson Extrapolation

Atmospheric chemistry schemes, which are described mathematically by non-linear systems of ordinary differential equations (ODEs), are used in many large-scale air pollution models. These systems of ODEs are badly scaled and extremely stiff, and some components of their solution vectors vary quickly, forming very sharp gradients. Therefore, it is necessary to handle the atmospheric chemical schemes by applying accurate numerical methods combined with reliable error estimators. Three well-known numerical methods that are suitable for the treatment of stiff systems of ODEs were selected and used: (a) EULERB (the classical Backward Differentiation Formula), (b) DIRK23 (a two-stage third-order Diagonally Implicit Runge-Kutta Method) and (c) FIRK35 (a three-stage fifth-order Fully Implicit Runge-Kutta Method). Each of these three numerical methods was applied in combination with nine advanced versions of the Richardson Extrapolation in order to get more accurate results when necessary and to evaluate in a reliable way the error made at the end of each step of the computations. At every step the code tries (A) to determine a good stepsize and (B) to apply it with a suitable version of the Richardson Extrapolation so that the error made at the end of the step is less than an error tolerance TOL, which is prescribed by the user in advance. The numerical experiments indicate that numerical stability can be preserved and sufficiently accurate results can be obtained when each of the three underlying numerical methods is correctly combined with the advanced versions of the Richardson Extrapolation.

Zahari Zlatev, Ivan Dimov, István Faragó, Krassimir Georgiev, Ágnes Havasi

Application of Metaheuristics to Large-Scale Problems

Frontmatter
New Clustering Techniques of Node Embeddings Based on Metaheuristic Optimization Algorithms

Node embeddings present a powerful method of embedding graph-structured data into a low dimensional space while preserving local node information. Clustering is a common preprocessing task on unsupervised data utilized to get the best insight into the input dataset. The most prominent clustering algorithm is the K-Means algorithm. In this paper, we formulate clustering as an optimization problem using different objective functions following the idea of searching for the best-fit centroid-based cluster exemplars. We also apply several nature-inspired optimization algorithms, since the K-Means algorithm can become trapped in local optima during its execution. We demonstrate our cluster frameworks' capability on several graph clustering datasets used in node embeddings and node clustering tasks. Performance evaluation and comparison of our frameworks with the K-Means algorithm are demonstrated and discussed in detail. We end this paper with a discussion on the impact of the objective function's choice on the clustering results.

Adis Alihodžić, Malek Chahin, Fikret Čunjalo
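The centroid-based formulation can be made concrete: the objective a metaheuristic would minimize is the K-Means sum of squared errors over candidate centroid sets. A toy sketch with synthetic points and hand-picked candidate centroids (all values are illustrative assumptions, not the paper's datasets):

```python
import random

random.seed(3)

# Toy 2-D points in two well-separated groups
points = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(30)] + \
         [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(30)]

def sse(centroids, pts):
    """K-Means objective: sum of squared distances of each point to its
    nearest centroid. A metaheuristic searches centroid sets minimizing it."""
    total = 0.0
    for x, y in pts:
        total += min((x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centroids)
    return total

good = [(0.0, 0.0), (5.0, 5.0)]   # centroids on the true group centres
bad = [(2.0, 2.0), (3.0, 3.0)]    # centroids between the groups
print(sse(good, points) < sse(bad, points))  # True
```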
A Comparison of Machine Learning Methods for Forecasting Dow Jones Stock Index

Stock market forecasting is a challenging and attractive topic for researchers and investors, helping them test their new methods and improve stock returns. Especially in times of financial crisis, these methods gain popularity. Algorithmic solutions based on machine learning are used widely among investors, from amateurs up to leading hedge funds, improving their investment strategies. This paper makes an extensive analysis and comparison of several machine learning algorithms to predict the Dow Jones stock index movement. The input features for the algorithms are other financial indices, commodity prices and technical indicators. Algorithms such as decision tree, logistic regression, neural networks, support vector machine, random forest, and AdaBoost have been exploited for comparison purposes. In the data preprocessing step, a few normalization and data transformation techniques were used to investigate their influence on the predictions. Finally, we present a few ways of tuning hyperparameters by metaheuristics such as the genetic algorithm, differential evolution, and an immunological algorithm.

Adis Alihodžić, Enes Zvorničanin, Fikret Čunjalo
Optimal Knockout Tournaments: Definition and Computation

We study competitions structured as hierarchically shaped single elimination tournaments. We define optimal tournaments by maximizing attractiveness such that topmost players will have the chance to meet in higher stages of the tournament. We propose a dynamic programming algorithm for computing optimal tournaments and we provide its sound complexity analysis.

Amelia Bădică, Costin Bădică, Ion Buligiu, Liviu Ion Ciora, Doina Logofătu
Risk Registry Platform for Optimizations in Cases of CBRN and Critical Infrastructure Attacks

Nowadays the world faces a wide range of complex challenges and threats to its security. Modern threats originate, among other sources, from the proliferation of weapons of mass destruction (WMD) and their delivery systems. Rapid advances in science and technology have proven open to misuse by terrorist groups, who develop the necessary knowledge and capacity to turn them into chemical, biological, radiological and nuclear (CBRN) threats against the civil population. Adequate response and cross-sectoral cooperation in case of an emergency situation are the basis for a low number of casualties and fast localization of the threat sources. Risk Registry platforms used to optimize the response to CBRN and critical infrastructure attacks are important crisis management capacity tools which need to be further developed using present-day information and communication technologies (ICT). Identifying and formulating registry algorithms containing documented cases of executed threats in these thematic areas, including the technological side of attacks, is very important for tactical planning and optimization. In our paper we describe a system implemented as a Risk Registry platform used for tabletop or field test exercises in cases of CBRN response by teams in Bulgaria, Greece, and Cyprus.

Nina Dobrinkova, Evangelos Katsaros, Ilias Gkotsis
Influence of the ACO Evaporation Parameter for Unstructured Workforce Planning Problem

Optimization of the production process is important for every factory or organization. Better organization can be achieved by optimizing the workforce planning. The main goal is to decrease the assignment cost of the workers with whose help the work will be done. The problem is NP-hard, therefore it can be approached with algorithms coming from artificial intelligence. The problem is to select employees and to assign them to the jobs to be performed. The constraints of this problem are very strong, and it is difficult for the algorithms to find feasible solutions, especially when the problem is unstructured. We apply an Ant Colony Optimization algorithm to solve the problem. We investigate the algorithm's performance with respect to the evaporation parameter. The aim is to find the best parameter setting.

Stefka Fidanova, Olympia Roeva
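
For readers unfamiliar with the role of evaporation in ACO, the mechanism can be sketched on a toy assignment problem. The code below is a generic illustration, not the authors' algorithm; the parameter names (`rho` for the evaporation rate, `alpha`, `beta`) follow common ACO conventions and the problem instance is hypothetical.

```python
import random

def aco_assign(cost, n_ants=20, n_iter=50, rho=0.5, alpha=1.0, beta=2.0, seed=0):
    """Toy ant colony optimization for a square assignment problem.

    cost[i][j] is the cost of assigning worker i to job j; rho is the
    evaporation parameter: a high rho forgets old pheromone quickly,
    a low rho slows down exploration.
    """
    rng = random.Random(seed)
    n = len(cost)
    tau = [[1.0] * n for _ in range(n)]          # pheromone levels
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            jobs = list(range(n))
            assign, total = [], 0.0
            for i in range(n):
                # desirability combines pheromone and heuristic (1/cost)
                w = [tau[i][j] ** alpha * (1.0 / (cost[i][j] + 1e-9)) ** beta
                     for j in jobs]
                j = rng.choices(jobs, weights=w)[0]
                jobs.remove(j)
                assign.append(j)
                total += cost[i][j]
            if total < best_cost:
                best, best_cost = assign, total
        # evaporation, followed by reinforcement of the best-so-far solution
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for i, j in enumerate(best):
            tau[i][j] += 1.0 / best_cost

    return best, best_cost

cost = [[1.0, 10.0], [10.0, 1.0]]
best, best_cost = aco_assign(cost)
```

Varying `rho` while keeping the other parameters fixed reproduces, in miniature, the kind of sensitivity study the paper performs.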
binMeta: A New Java Package for Meta-heuristic Searches

We present a new Java package, named binMeta, for the development and study of meta-heuristic searches for global optimization. The solution space for our optimization problems is based on a discrete representation, but this does not restrict us to combinatorial problems, since every representation on a computer ultimately reduces to a sequence of bits. We focus on general-purpose meta-heuristics, which are not tailored to any specific subclass of problems. Although we are aware that this is not the first attempt to develop a single tool implementing more than one meta-heuristic search, we are motivated by the following three main research lines on meta-heuristics. First, we plan to collect several implementations of meta-heuristic searches, developed by several programmers under the common interface of the package, where particular attention is given to the common components of the various meta-heuristics. Second, the discrete representation of the solutions allows the user to perform a preliminary study on the degrees of freedom, which is likely to have a positive impact on the performance of the meta-heuristic searches. Third, the choice of Java as a programming language is motivated by its flexibility and its high-level object-oriented paradigm. Finally, an important point in the development of binMeta is that a meta-heuristic search implemented in the package can itself be seen as an optimization problem, where its parameters play the role of decision variables.

Antonio Mucherino
Synergy Between Convergence and Divergence—Review of Concepts and Methods

Modern Industry 4.0 technologies face the challenge of dealing with billions of connected devices, petabytes of generated data, and exponentially growing internet traffic. Artificial intelligence and evolutionary algorithms can solve a variety of large optimisation problems. Many methods employed in the search for solutions often stagnate or produce unacceptable results, which recalls the classical dilemma of exploration versus exploitation, closely related to the convergence and diversity of the explored solutions. This article reviews convergence-centred and divergence-centred algorithms and discusses the synergy between convergence and divergence in adaptive heuristics.

Kalin Penev
Advanced Stochastic Approaches Based on Optimization of Lattice Sequences for Large-Scale Finance Problems

In this work we study advanced stochastic methods for solving a specific multidimensional problem related to the computation of European style options in computational finance. Recently, stochastic methods have become a very important tool for high-performance computing of very high-dimensional problems in computational finance. Here, a new kind of optimal generating vector is applied for the first time to a specific problem in computational finance. Numerical tests show that it gives superior results to the stochastic approaches used up to now. The advantages and disadvantages of various highly efficient stochastic approaches for multidimensional integrals related to the evaluation of European style options are analyzed.

Venelin Todorov, Ivan Dimov, Rayna Georgieva, Stoyan Apostolov, Stoyan Poryazov
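
The generating-vector idea behind such lattice methods can be illustrated with a minimal rank-1 lattice rule. This is a generic sketch: the generating vector below is chosen for illustration only and is not one of the optimized vectors studied in the paper, and the integrand is a toy stand-in for an option-pricing integral.

```python
import math

def lattice_rule(f, gen, n):
    """Rank-1 lattice rule: average f over the points {k * z / n}, k = 0..n-1.

    gen is the generating vector z (one component per dimension); well-chosen
    vectors give much faster convergence than plain Monte Carlo for smooth
    integrands over the unit cube.
    """
    total = 0.0
    for k in range(n):
        x = [(k * z / n) % 1.0 for z in gen]
        total += f(x)
    return total / n

# toy smooth integrand over [0,1]^3 with exact integral equal to 1
f = lambda x: math.prod(1.0 + (xi - 0.5) for xi in x)
approx = lattice_rule(f, gen=(1, 433, 1811), n=4093)
```

In practice the quality of the rule depends entirely on the choice of `gen` (and `n`), which is exactly the object the paper optimizes.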
Intuitionistic Fuzzy Approach for Outsourcing Provider Selection in a Refinery

Outsourcing is the transfer of a business process that is traditionally carried out within the organization to an independent external service provider. It is a good strategy for companies that need to reduce operating costs and improve competitiveness, and it is important that companies select the most eligible outsourcing providers. In this study, an intuitionistic fuzzy multi-criteria decision making approach is used to select the most appropriate outsourcing service provider for an oil refining enterprise on the Balkan peninsula. An optimal outsourcing problem is formulated, and an algorithm for selecting the most eligible outsourcing service provider is proposed using the concept of index matrices (IMs), where the evaluations of outsourcing candidates against criteria formulated by several experts are intuitionistic fuzzy pairs. The proposed decision-making model takes into account the ratings of the experts and the weighting factors of the evaluation criteria according to their priorities for the outsourcing service. Due to the complexity of the outsourcing process, real numbers are not sufficient to characterize the evaluation objects. Fuzzy sets (FSs) of Zadeh use a single membership function to express the degree to which an element belongs to the fuzzy set; as a result, FSs are unable to express non-membership and hesitation degrees. Since intuitionistic fuzzy sets (IFSs) of Atanassov consider the membership and non-membership degrees simultaneously, they are more flexible than FSs for modeling under uncertainty. The originality of the paper comes from the proposed decision-making model and its application to an optimal intuitionistic fuzzy (IF) outsourcing problem of a refinery.
The presented approach for selecting the most suitable outsourcing service provider can be applied to problems with imprecise parameters and can be extended to obtain optimal solutions for other types of multidimensional outsourcing problems by using n-dimensional index matrices.

Velichka Traneva, Stoyan Tranev
Quantitative Relationship Between Particulate Matter and Morbidity

Air pollution is a major environmental health problem affecting everyone. According to the World Health Organization (WHO), there is a close relationship between small particles (PM10 and PM2.5) and increased morbidity or mortality, both daily and over time. We investigated this quantitative relationship in Sofia by comparing levels of particulate matter with baseline numbers of hospital and emergency department visits, asthma prevalence, and other morbidity outcomes from four local health sources. The methods for this comparison are linear correlation and non-parametric correlation analysis of a time series study conducted in Sofia from 1 January 2017 to 31 May 2019. We introduce in this study an optimized spatial and temporal coverage of air quality by including data from a network of citizen stations. These benefits are weighed against limitations, such as model performance and the precision of the data on days with high humidity, whose importance will depend on the epidemiological study design. The final results can be used for optimizing healthcare and pharmaceutical planning by identifying which acute morbidities are most affected by higher concentrations of PM10 and PM2.5.

Petar Zhivkov, Alexandar Simidchiev

Advanced Discretizations and Solvers for Coupled Systems of Partial Differential Equations

Frontmatter
Decoupling Methods for Systems of Parabolic Equations

We consider decoupling methods for the numerical solution of the Cauchy problem for a system of multidimensional parabolic equations. Of most interest for computational practice is the case when the equations are coupled to each other. Splitting schemes are constructed for such vector problems so that the transition to a new time level is provided by solving standard scalar problems for the individual components of the solution. Two main classes of decoupling methods are distinguished, based on extracting either the diagonal part of the problem's matrix operator or its lower and upper triangular parts. An increase in the accuracy of the approximate solution of explicit-implicit schemes is achieved by using three-level approximations in time. Special attention is paid to the case when the time derivatives of the solution components are coupled to each other.

Petr N. Vabishchevich

Optimal Control of ODEs, PDEs and Applications

Frontmatter
Random Lifting of Set-Valued Maps

In this paper we discuss the properties of particular set-valued maps in the space of probability measures on a finite-dimensional space that are constructed by means of a suitable lift of a set-valued map in the underlying space. In particular, we are interested in establishing under which conditions good regularity properties of the original set-valued map are inherited by the lifted one. The main motivation for this study comes from multi-agent systems, i.e., finite-dimensional systems where the number of (microscopic) agents is so large that only macroscopic descriptions are actually available. The macroscopic behaviour is thus expressed by the superposition of the behaviours of the microscopic agents. Using the common description of the state of a multi-agent system by means of a time-dependent probability measure, expressing the fraction of agents contained in a region at a given time, the results of this paper yield regularity results for the macroscopic behaviour of the system.

Rossana Capuani, Antonio Marigonda, Marta Mogentale
Hölder Regularity in Bang-Bang Type Affine Optimal Control Problems

This paper revisits the issue of Hölder Strong Metric sub-Regularity (HSMs-R) of the optimality system associated with ODE optimal control problems that are affine with respect to the control. The main contributions are as follows. First, the metric in the control space introduced in this paper differs from the ones used so far in the literature in that it allows one to take into account the bang-bang structure of the optimal control functions. This is especially important in the analysis of Model Predictive Control algorithms. Second, the obtained sufficient conditions for HSMs-R extend the known ones in a way which makes them applicable to some problems which are non-linear in the state variable and in which the Hölder exponent is smaller than one (that is, the regularity is not Lipschitz).

Alberto Domínguez Corella, Vladimir M. Veliov
Simultaneous Space-Time Finite Element Methods for Parabolic Optimal Control Problems

This work presents, analyzes and tests stabilized space-time finite element methods on fully unstructured simplicial space-time meshes for the numerical solution of space-time tracking parabolic optimal control problems with the standard $$L_2$$-regularization.

Ulrich Langer, Andreas Schafelner
A New Algorithm for the LQR Problem with Partially Unknown Dynamics

We consider an LQR optimal control problem with partially unknown dynamics. We propose a new model-based online algorithm to obtain an approximation of the dynamics and the control at the same time during a single simulation. The iterative algorithm is based on a mixture of Reinforcement Learning and optimal control techniques. In particular, we use Gaussian distributions to represent model uncertainty and the probabilistic model is updated at each iteration using Bayesian regression formulas. On the other hand, the control is obtained in feedback form via a Riccati differential equation. We present some numerical tests showing that the algorithm can efficiently bring the system towards the origin.

Agnese Pacifico, Andrea Pesare, Maurizio Falcone
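
The feedback-via-Riccati step mentioned above can be illustrated in the simplest possible setting: a scalar plant with fully known dynamics and the algebraic (steady-state) Riccati equation, rather than the differential one. This is a sketch of the underlying LQR machinery only, not of the authors' online learning algorithm; all names and values are illustrative.

```python
import math

def scalar_lqr_gain(a, b, q, r):
    """Solve the scalar continuous-time algebraic Riccati equation
    2*a*p + q - (b*p)**2 / r = 0 for its positive root p and return
    the optimal feedback gain K, so that u = -K*x."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r

# unstable plant xdot = x + u, regulated with weights q = r = 1
K = scalar_lqr_gain(1.0, 1.0, 1.0, 1.0)

# forward-Euler simulation of the closed loop: x should decay to 0
x, dt = 5.0, 0.01
for _ in range(2000):
    x += dt * (1.0 * x - 1.0 * K * x)
```

Here the closed-loop dynamics are xdot = (1 - K)x with K = 1 + sqrt(2), so the state is driven to the origin, which is the behaviour the paper's numerical tests verify in the partially unknown setting.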

Tensor and Matrix Factorization for Big-Data Analysis

Frontmatter
Solving Systems of Polynomial Equations—A Tensor Approach

Polynomial relations are at the heart of mathematics. The fundamental problem of solving polynomial equations shows up in a wide variety of (applied) mathematics, science and engineering problems. Although different approaches have been considered in the literature, the problem remains difficult and requires further study. We propose a solution based on tensor techniques. In particular, we build a partially symmetric tensor from the coefficients of the polynomials and compute its canonical polyadic decomposition. Due to the partial symmetry, a structured canonical polyadic decomposition is needed. The factors of the decomposition can then be used for building systems of linear equations, from which we find the solutions of the original system. This paper introduces our approach and illustrates it with a detailed example. Although it cannot solve arbitrary systems of polynomial equations, it is applicable to a large class of sub-problems. Future work includes comparisons with existing methods and extending the class of problems to which the method can be applied.

Mariya Ishteva, Philippe Dreesen
Nonnegative Tensor-Train Low-Rank Approximations of the Smoluchowski Coagulation Equation

We present a finite difference approximation of the nonnegative solutions of the two-dimensional Smoluchowski equation by a nonnegative low-rank tensor factorization. Two different implementations are compared: the first is based on a full tensor representation of the numerical solution and the coagulation kernel; the second is based on a tensor-train decomposition of the solution and the kernel. The convergence of the numerical solution to the analytical one is studied for the Smoluchowski problem with a constant kernel, and the influence of the nonnegative decomposition on the solution accuracy is investigated.

Gianmarco Manzini, Erik Skau, Duc P. Truong, Raviteja Vangara
Boolean Hierarchical Tucker Networks on Quantum Annealers

Quantum annealing is an emerging technology with the potential to solve some of the computational challenges that remain unresolved as we approach an era beyond Moore's Law. In this work, we investigate the capabilities of the quantum annealers of D-Wave Systems, Inc., for computing a certain type of Boolean tensor decomposition called a Boolean Hierarchical Tucker Network (BHTN). Boolean tensor decomposition problems ask for a decomposition of a high-dimensional tensor with categorical ({true, false}) values as a product of smaller Boolean core tensors. As BHTN decompositions are usually not exact, we aim to approximate an input high-dimensional tensor by a product of lower-dimensional tensors such that the difference between the two is minimized in some norm. We show that a BHTN can be calculated as a sequence of optimization problems suitable for the D-Wave 2000Q quantum annealer. Although current quantum annealers are still fairly restricted in the problems they can address, we show that a complex problem such as BHTN can be solved efficiently and accurately.

Elijah Pelofske, Georg Hahn, Daniel O’Malley, Hristo N. Djidjev, Boian S. Alexandrov
Topic Analysis of Superconductivity Literature by Semantic Non-negative Matrix Factorization

We analyze a corpus consisting of more than 17,000 abstracts in the general field of superconductivity, extracted from the arXiv, an online repository of scientific articles. We utilize a recently developed topic modeling method called SeNMFk, which extends standard Non-negative Matrix Factorization (NMF) methods by incorporating the semantic structure of the text and adding a robust system for determining the number of topics. With SeNMFk, we were able to extract coherent topics validated by human experts. Of these topics, a few are relatively general and cover broad concepts, while the majority can be precisely mapped to particular scientific effects or measurement techniques. The topics also differ in ubiquity: only three topics are prevalent in almost 40% of the abstracts, while each specific topic tends to dominate a small subset of the abstracts. These results demonstrate the ability of SeNMFk to produce a layered and nuanced analysis of large scientific corpora.

Valentin Stanev, Erik Skau, Ichiro Takeuchi, Boian S. Alexandrov
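
Plain NMF, which SeNMFk extends, can be sketched with the classical Lee-Seung multiplicative updates. The bare-bones version below minimizes the Frobenius error on a tiny matrix and has none of the semantic or model-selection machinery of SeNMFk; it is a generic illustration, not the paper's method.

```python
import random

def nmf(V, r, n_iter=500):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.

    V is an m x n nonnegative matrix (lists of lists), r the target rank.
    Multiplicative updates keep W and H nonnegative by construction.
    """
    rng = random.Random(0)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):
        return [list(row) for row in zip(*A)]

    for _ in range(n_iter):
        WH = matmul(W, H)
        num, den = matmul(T(W), V), matmul(T(W), WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + 1e-12)
              for j in range(n)] for i in range(r)]
        WH = matmul(W, H)
        num, den = matmul(V, T(H)), matmul(WH, T(H))
        W = [[W[i][j] * num[i][j] / (den[i][j] + 1e-12)
              for j in range(r)] for i in range(m)]
    return W, H

V = [[1.0, 2.0], [2.0, 4.0]]        # exactly rank-1 test matrix
W, H = nmf(V, r=1)
recon = [[sum(W[i][k] * H[k][j] for k in range(1)) for j in range(2)]
         for i in range(2)]
err = max(abs(V[i][j] - recon[i][j]) for i in range(2) for j in range(2))
```

In topic modeling, V would be a (huge, sparse) term-document matrix, with H rows interpreted as topics; the multiplicative form of the update is what guarantees nonnegativity of the factors.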

Machine Learning and Model Order Reduction for Large Scale Predictive Simulations

Frontmatter
Deep Neural Networks and Adaptive Quadrature for Solving Variational Problems

The great success of deep neural networks (DNNs) in areas such as image processing and natural language processing has also motivated their usage in many other areas. It has been shown that in particular cases they provide very good approximations to different classes of functions. The aim of this work is to explore the usage of deep learning methods for approximating functions which are solutions of boundary value problems for particular differential equations. More specifically, the class of methods known as physics-informed neural networks will be explored. Components of the DNN algorithms, such as the definition of the loss function and the choice of the minimization method, are discussed alongside results from the computational experiments.

Daria Fokina, Oleg Iliev, Ivan Oseledets
A Full Order, Reduced Order and Machine Learning Model Pipeline for Efficient Prediction of Reactive Flows

We present an integrated approach for the use of simulated data from full order discretization as well as projection-based Reduced Basis reduced order models for the training of machine learning approaches, in particular Kernel Methods, in order to achieve fast, reliable predictive models for the chemical conversion rate in reactive flows with varying transport regimes.

Pavel Gavrilenko, Bernard Haasdonk, Oleg Iliev, Mario Ohlberger, Felix Schindler, Pavel Toktaliev, Tizian Wenzel, Maha Youssef
A Multiscale Fatigue Model for the Degradation of Fiber-Reinforced Materials

Short-fiber reinforced materials show material degradation under fatigue loading prior to failure. To investigate these effects, we model the constituents by an isotropic fatigue damage model for the matrix material and an isotropic linear-elastic material model for the fibers. On the microscale we compute the overall material response for cell problems with different fiber orientation states using FFT-based methods. We discuss a concept for model order reduction that enables us to apply the model efficiently on the component scale.

N. Magino, J. Köbler, H. Andrä, F. Welschinger, R. Müller, M. Schneider
A Classification Algorithm for Anomaly Detection in Terahertz Tomography

Terahertz tomography represents an emerging field in the area of nondestructive testing. Detecting outliers in measurements caused by defects is the main challenge in inline process monitoring. Efficient inline control makes it possible to intervene directly during the manufacturing process and, consequently, to reduce product discard. We focus on plastics and ceramics and propose a density-based technique to automatically detect anomalies in the measured radiation data. The algorithm relies on a classification method based on machine learning. For verification, supervised data are generated by a measuring system that approximates an inline process. The experimental results show that the use of terahertz radiation, combined with the classification algorithm, has great potential for a real inline manufacturing process.

Clemens Meiser, Thomas Schuster, Anne Wald
Reduced Basis Methods for Efficient Simulation of a Rigid Robot Hand Interacting with Soft Tissue

We present efficient reduced basis (RB) methods for the simulation of a coupled problem consisting of a rigid robot hand interacting with soft tissue material. The soft tissue is modeled by the linear elasticity equation and discretized with the Finite Element Method. We look at two different scenarios: (i) the forward simulation and (ii) a feedback control formulation of the model. In both cases, large-scale systems of equations appear, which need to be solved in real time. This is essential in practice for the implementation on a real robot. In the feedback scenario, we encounter a high-dimensional Algebraic Riccati Equation (ARE) in the context of the linear quadratic regulator. To meet the real-time constraint by significantly reducing the computational complexity, we use several structure-preserving and non-structure-preserving reduction methods, including reduced basis techniques based on the Proper Orthogonal Decomposition. For the ARE, we compute a low-rank factor and hence solve a low-dimensional ARE instead of the full-dimensional problem. Numerical examples for both cases (i) and (ii) are provided. They illustrate the approximation quality of the reduced solution and the speedup factors of the different reduction approaches.

Shahnewaz Shuva, Patrick Buchfink, Oliver Röhrle, Bernard Haasdonk
Structured Deep Kernel Networks for Data-Driven Closure Terms of Turbulent Flows

Standard kernel methods for machine learning usually struggle when dealing with large datasets. We review a recently introduced Structured Deep Kernel Network (SDKN) approach that is capable of dealing with high-dimensional and huge datasets and enjoys typical machine learning approximation properties. We extend the SDKN to combine it with standard machine learning modules and compare it with neural networks on the scientific challenge of data-driven prediction of closure terms of turbulent flows. We show experimentally that SDKNs are capable of dealing with large datasets and achieve near-perfect accuracy on the given application.

Tizian Wenzel, Marius Kurz, Andrea Beck, Gabriele Santin, Bernard Haasdonk

HPC and Big Data: Algorithms and Applications

Frontmatter
On the Use of Low-discrepancy Sequences in the Training of Neural Networks

Quasi-Monte Carlo methods use specially designed deterministic sequences with improved uniformity properties compared with random numbers in order to achieve higher rates of convergence. Usually, measures like the discrepancy are used to quantify these uniformity properties. The usefulness of certain families of low-discrepancy sequences, like the Sobol and Halton sequences, has been established in problems of high practical value, such as mathematical finance. Multiple studies have also applied these sequences in the domains of optimisation and machine learning. Currently, many types of neural networks are used extensively to achieve breakthrough results in machine learning and artificial intelligence. Training these networks requires substantial computational resources, usually provided by powerful GPUs or specially designed hardware. In this work we study different approaches to employing low-discrepancy sequences efficiently at various places in the training process where their uniformity properties can speed up or improve training. We demonstrate the advantage of using Sobol low-discrepancy sequences on benchmark problems and discuss various practical issues that arise in achieving acceptable performance on real-life problems.

E. Atanassov, T. Gurov, D. Georgiev, S. Ivanovska
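
The construction of a classic low-discrepancy sequence is short enough to sketch. Below is the Halton sequence, built from van der Corput sequences in coprime bases; the paper works with Sobol sequences, which are constructed differently, so this is a stand-in for the general idea of replacing random draws with more uniform deterministic points.

```python
def van_der_corput(n, base=2):
    """n-th element of the van der Corput sequence: reflect the base-b
    digits of n about the radix point, e.g. n=3=0b11 -> 0.11_2 = 0.75."""
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def halton(n, bases=(2, 3)):
    """First n points of the Halton sequence (one coprime base per axis),
    usable e.g. for sampling hyperparameters or initial weights more
    evenly than independent random draws."""
    return [tuple(van_der_corput(i, b) for b in bases) for i in range(1, n + 1)]

pts = halton(8)
```

Successive points fill gaps left by earlier ones, which is exactly the uniformity property the training-time applications in the paper exploit.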
A PGAS-Based Implementation for the Parallel Minimum Spanning Tree Algorithm

The minimum spanning tree is a critical problem for many applications in network analysis, communication network design, and computer science. Parallel implementations of minimum spanning tree algorithms increase the simulation performance on large graph problems using high-performance computational resources. Minimum spanning tree algorithms generally use traditional parallel programming models for distributed and shared memory systems, like the Message Passing Interface or OpenMP. The partitioned global address space model, by contrast, offers new capabilities in the form of asynchronous computations on distributed shared memory, positively affecting the performance and scalability of the algorithms. This paper presents a new minimum spanning tree algorithm implemented in a partitioned global address space model. Experiments with diverse parameters have been conducted to study the efficiency of the asynchronous implementation of the algorithm.

Vahag Bejanyan, Hrachya Astsatryan
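
As a sequential baseline for what the parallel PGAS implementation computes, a minimum spanning tree can be found with Kruskal's algorithm and a union-find structure. The sketch below is generic and unrelated to the paper's code; the graph instance is hypothetical.

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: scan edges in order of increasing weight and
    keep each edge that joins two different components, tracked with a
    union-find structure (with path halving)."""
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    mst, total = [], 0
    for w, u, v in sorted(edges):           # edges as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# 4 vertices, 5 weighted edges; the MST has weight 1 + 2 + 3 = 6
edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (3, 1, 3), (5, 2, 3)]
mst, total = kruskal_mst(4, edges)
```

The parallel variants in the paper distribute exactly this kind of component-merging work across asynchronous PGAS tasks.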
Comparison of Different Methods for Multiple Imputation by Chain Equation

Missing data is a common problem when analysing real-world data from many research fields such as biostatistics, sociology, and economics. Three types of missing data are typically defined: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). Ignoring observations with missingness can lead to serious bias and inefficiency, especially when the number of such cases is large compared to the sample size. One popular technique for addressing missing data is multiple imputation (MI). There are two general approaches to MI. One is joint modelling, which draws missing values simultaneously for all incomplete variables from a multivariate distribution. The other is the fully conditional specification (FCS, also known as MICE), which imputes variables one at a time from a series of univariate conditional distributions: for each incomplete variable, FCS draws from a univariate density conditional on the other variables included in the imputation model. In this work we define a computationally efficient numerical simulation framework for data generation and evaluation of different imputation methods. We consider different FCS imputation methods along with traditional ones under different scenarios for the parameters of the models: percentage of missingness, data dimensionality, different combinations of categorical and numerical predictors, and different correlations between the covariates. Our results are based on synthetic data generated on an HPC cluster and show the optimal imputation methods in the different cases according to two scoring techniques.

Denitsa Grigorova, Demir Tonchev, Dean Palejev
Monte Carlo Method for Estimating Eigenvalues Using Error Balancing

Monte Carlo (MC) power iterations are successfully applied for estimating extremal eigenvalues, especially those of large sparse matrices. They use truncated Markov chain simulations for estimating matrix-vector products. Iterative MC methods contain two types of errors: a systematic (truncation) error and a stochastic (probable) error. The systematic error depends on the number of iterations, and the stochastic error stems from the probabilistic nature of the MC method. In this paper we propose a new version of the MC power iteration that balances both errors to determine the optimal length of the chain. Numerical results for estimating the largest eigenvalue are also presented and discussed.

Silvi-Maria Gurova, Aneta Karaivanova
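
The interplay between the two error sources can be seen in a toy MC power iteration in which the matrix-vector product is estimated by sampling. Plain uniform column sampling below stands in for the truncated Markov chain estimators of the paper, and the error-balancing rule itself is not reproduced; the matrix and sample counts are illustrative.

```python
import random

def mc_matvec(A, x, n_samples, rng):
    """Estimate y = A x stochastically: for each row i, draw column
    indices j uniformly and average n * A[i][j] * x[j] (plain MC; real
    schemes use importance-sampled Markov chains)."""
    n = len(A)
    y = []
    for i in range(n):
        s = 0.0
        for _ in range(n_samples):
            j = rng.randrange(n)
            s += A[i][j] * x[j]
        y.append(n * s / n_samples)
    return y

def mc_power_iteration(A, n_iter=30, n_samples=2000, seed=1):
    """Power iteration with stochastic matrix-vector products; the ratio
    of successive iterate norms estimates the dominant eigenvalue.
    n_iter controls the systematic error, n_samples the stochastic one."""
    rng = random.Random(seed)
    x = [1.0] * len(A)
    lam = 0.0
    for _ in range(n_iter):
        y = mc_matvec(A, x, n_samples, rng)
        lam = max(abs(v) for v in y) / max(abs(v) for v in x)
        norm = max(abs(v) for v in y)
        x = [v / norm for v in y]
    return lam

A = [[2.0, 1.0], [1.0, 2.0]]       # eigenvalues 3 and 1
lam = mc_power_iteration(A)
```

Increasing `n_iter` shrinks only the truncation error while increasing `n_samples` shrinks only the stochastic one, which is precisely the trade-off the paper's balancing rule optimizes.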
Multi-lingual Emotion Classification Using Convolutional Neural Networks

Emotions play a central role in human interaction. Interestingly, different cultures have different ways of expressing the same emotion, which motivates studying how emotions are expressed in different parts of the world in order to understand these differences better. This paper compares four emotions, namely anger, happiness, sadness, and a neutral state, as expressed by speakers of four different languages (Canadian French, Italian, North American English and German - Berlin Deutsche) using modern digital signal processing methods and convolutional neural networks.

Alexander Iliev, Ameya Mote, Arjun Manoharan
On Parallel MLMC for Stationary Single Phase Flow Problem

Many problems that incorporate uncertainty require solving a stochastic partial differential equation. Fast and efficient methods for solving such equations are of particular interest for computational fluid dynamics. Efficient methods for uncertainty quantification in porous media flow simulations are Multilevel Monte Carlo (MLMC) sampling-based algorithms. They rely on drawing samples from a probability space, and the error is quantified by the root mean square error. Although they are significantly faster than classical Monte Carlo, parallel implementation is a necessity for realistic simulations, and the problem of finding an optimal processor distribution is NP-complete. In this paper, stationary single-phase flow through a random porous medium is studied as a model problem. Although simple, it is a well-established problem in the field that illustrates the computational challenges of MLMC simulation. For this problem, different dynamic scheduling strategies exploiting three-layer parallelism are examined. The considered schedulers consolidate the sample-to-sample time differences; in this way, more efficient use of computational resources is achieved.

Oleg Iliev, N. Shegunov, P. Armyanov, A. Semerdzhiev, I. Christov
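
The telescoping structure of the MLMC estimator can be sketched on a toy SDE (geometric Brownian motion integrated with Euler steps) instead of a random porous medium: fine and coarse paths on each level share the same Brownian increments, so the level corrections have small variance. The sample counts below are arbitrary, not chosen by the optimal MLMC allocation, and the whole example is illustrative rather than the paper's setup.

```python
import math
import random

def euler_pair(level, rng):
    """One sample of (P_fine - P_coarse) for dS = S dW on [0,1],
    discretized with 2**level fine and 2**(level-1) coarse Euler steps
    driven by the SAME Brownian increments; the quantity of interest is
    P = S(1), whose exact mean is S(0) = 1."""
    nf = 2 ** level
    dt = 1.0 / nf
    sf = sc = 1.0
    dw_pair = 0.0
    for k in range(nf):
        dw = rng.gauss(0.0, math.sqrt(dt))
        sf += sf * dw
        dw_pair += dw
        if k % 2 == 1:              # every two fine steps = one coarse step
            sc += sc * dw_pair
            dw_pair = 0.0
    if level == 0:
        return sf                   # coarsest level: no correction term
    return sf - sc

def mlmc(max_level, n_per_level, seed=0):
    """Telescoping MLMC estimator: E[P_L] = sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        n = n_per_level[level]
        total += sum(euler_pair(level, rng) for _ in range(n)) / n
    return total

est = mlmc(4, [4000, 2000, 1000, 500, 250])
```

Because the level corrections shrink with refinement, most samples can be taken on the cheap coarse levels; the sample-time variability across levels is exactly what the paper's dynamic schedulers balance across processors.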
Numerical Parameter Estimates of Beta-Uniform Mixture Models

When analyzing biomedical data, researchers often need to apply the same statistical test or procedure to many variables, resulting in a multiple comparisons setup. A portion of the tests are statistically significant; their unadjusted p-values form a spike near the origin and as such can be modeled by a suitable Beta distribution. The unadjusted p-values of the non-significant tests are drawn from a uniform distribution on the unit interval. Therefore the set of all unadjusted p-values can be represented by a beta-uniform mixture model. Finding the parameters of that model plays an important role in estimating the statistical power of the subsequent Benjamini-Hochberg correction for multiple comparisons. To empirically investigate the properties of several parameter estimation procedures, we carried out a series of computationally intensive numerical simulations on a high-performance computing facility. As a result of these simulations, we identify the overall optimal method for estimating the mixture parameters. We also show an asymptotic property of one of the parameter estimates.

Dean Palejev
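
The beta-uniform mixture density f(p) = lam + (1 - lam) * a * p**(a-1), with a Beta(a, 1) spike near zero, can be fitted by a crude grid-search maximum likelihood. This is only a stand-in for the numerical estimators the paper compares; the data, grid resolution, and parameter values are illustrative.

```python
import math
import random

def bum_loglik(pvals, lam, a):
    """Log-likelihood of the beta-uniform mixture
    f(p) = lam * 1 + (1 - lam) * a * p**(a - 1), with 0 < a < 1."""
    ll = 0.0
    for p in pvals:
        p = max(p, 1e-300)          # guard against p == 0
        ll += math.log(lam + (1.0 - lam) * a * p ** (a - 1.0))
    return ll

def fit_bum(pvals, grid=40):
    """Maximize the log-likelihood over a (lam, a) grid."""
    best = (None, None, -float("inf"))
    for i in range(1, grid):
        lam = i / grid
        for j in range(1, grid):
            a = j / grid
            ll = bum_loglik(pvals, lam, a)
            if ll > best[2]:
                best = (lam, a, ll)
    return best[0], best[1]

# synthetic p-values: 70% uniform nulls, 30% Beta(0.1, 1) signals,
# using the inverse-CDF draw U**(1/a) for Beta(a, 1)
rng = random.Random(42)
pv = [rng.random() if rng.random() < 0.7 else rng.random() ** (1.0 / 0.1)
      for _ in range(1000)]
lam_hat, a_hat = fit_bum(pv)
```

The estimated uniform weight `lam_hat` is the quantity that feeds into power calculations for the Benjamini-Hochberg step, since it bounds the fraction of true null hypotheses.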
Large-Scale Computer Simulation of the Performance of the Generalized Nets Model of the LPF-algorithm

A large-scale simulation of the throughput (TP) of an existing LPF (Longest Port First) algorithm for a crossbar switch is presented. The throughput of the Generalized Nets (GN) model of the algorithm is studied for uniform, independent and identically distributed Bernoulli traffic. The presented simulations are executed on the AVITOHOL supercomputer located at IICT, Bulgaria. The modeling of the TP utilizes LPF for a switch with commutation field size $$N\in [2,60]$$. Problems arise due to the time complexity of the implementation of the LPF algorithm ($$O(N^{4.7})$$). It is necessary to reduce the time complexity without introducing distortions into the simulation results. One variant of the LPF with simplified random selection of a starting cell of the weighting matrix is discussed.

Tasho D. Tashev, Alexander K. Alexandrov, Dimitar D. Arnaudov, Radostina P. Tasheva

Contributed Papers

Frontmatter
A New Error Estimate for a Primal-Dual Crank-Nicolson Mixed Finite Element Using Lowest Degree Raviart-Thomas Spaces for Parabolic Equations

We consider the heat equation as a model for parabolic equations. We establish a fully discrete scheme based on the Primal-Dual Lowest Order Raviart-Thomas Mixed method combined with the Crank-Nicolson method. We prove a new convergence result with a convergence rate towards the "velocity" $$P(t)=-\nabla u(t)$$ in the norm of $$L^2(H_\mathrm{div})$$, under the assumption that the solution is smooth. The order is proved to be two in time and one in space. This result is obtained thanks to a newly developed discrete a priori estimate. The convergence result obtained in this work improves the existing one for the PDMFEM (Primal-Dual Mixed Finite Element Method) for parabolic equations, which states convergence towards the velocity only in the norm of $$L^\infty((L^2)^d)$$, see [6, Theorem 2.1, p. 54]. This work is an extension of [1], which dealt with new error estimates of an MFE scheme of order one in time. It is also motivated by the work [9], in which a fully discrete Crank-Nicolson scheme based on another MFE approach, different from the one we use here, is established in two space dimensions.

Fayssal Benkhaldoun, Abdallah Bradji
A Finite Volume Scheme for a Wave Equation with Several Time Independent Delays

We establish a new finite volume scheme for a second order hyperbolic equation with several time independent delays in any space dimension. This model is considered in [7], where some exponential stability estimates are proved, and in [8], which dealt with the oscillation of the solutions. The delays are included in both the exact solution and its time derivative. The scheme uses, as space discretization, SUSHI (Scheme Using Stabilization and Hybrid Interfaces), developed in [5]. We first prove the existence and uniqueness of the discrete solution and subsequently develop a new discrete a priori estimate. Thanks to this a priori estimate, we prove error estimates in several discrete seminorms. This work is an extension and improvement of our recent work [2], which dealt with the finite volume approximation of the wave equation with only one delay, included in the time derivative of the exact solution.

Fayssal Benkhaldoun, Abdallah Bradji, Tarek Ghoudi
Recovering the Time-Dependent Volatility in Jump-Diffusion Models from Nonlocal Price Observations

This paper is devoted to the recovery of a time-dependent volatility under the assumption of jump-diffusion processes. The problem is formulated as an inverse problem: given nonlocal observations of European option prices, find a time-dependent volatility function such that the theoretical option prices match the observed ones optimally with respect to a prescribed cost functional. We propose a variational adjoint-equation approach to derive the gradients of the functionals. A finite difference formulation of the 1D inverse problem is discussed.

Slavi G. Georgiev, Lubin G. Vulkov
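The paper's setting — a time-dependent volatility under jump-diffusion, calibrated via an adjoint approach — is beyond a short snippet, but the core idea of matching model prices to observations can be illustrated on the far simpler problem of recovering a constant Black-Scholes volatility from a single observed call price. All function names and parameters below are illustrative, not from the paper:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def calibrate_sigma(S, K, r, T, observed, lo=0.01, hi=1.0, iters=60):
    """Bisection: the call price is strictly increasing in sigma,
    so a single observed price pins down the implied volatility."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, mid, T) < observed:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# synthetic "observation" generated with sigma = 0.2, then recovered
target = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
sigma_hat = calibrate_sigma(100.0, 100.0, 0.05, 1.0, target)
```

With a whole volatility *function* and a cost functional over many observations, as in the paper, scalar bisection no longer applies; that is where the adjoint equation comes in, supplying the gradient of the functional for a descent method.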
On the Solution of Contact Problems with Tresca Friction by the Semismooth* Newton Method

The equilibrium of a linear elastic body subject to loading and satisfying the friction and contact conditions can be described by a variational inequality of the second kind, and the corresponding discrete model attains the form of a generalized equation. For its numerical solution we apply the semismooth* Newton method of Gfrerer and Outrata (2019), in which, in contrast to most available Newton-type methods for inclusions, one approximates not only the single-valued but also the multivalued part. This is done on the basis of the limiting (Mordukhovich) coderivative. In our case of Tresca friction, the multivalued part amounts to the subdifferential of a convex function generated by the friction and contact conditions. The full 3D discrete problem is then reduced to the contact boundary. Implementation details of the semismooth* Newton method are provided, and numerical tests demonstrate its superlinear convergence and mesh independence.

Helmut Gfrerer, Jiří V. Outrata, Jan Valdman
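In the notation suggested by the abstract, the discrete model takes the form of a generalized equation (a sketch; the symbols are ours, not the authors'):

```latex
\text{Find } u \text{ such that} \quad 0 \in A u - f + \partial q(u),
```

where $A$ is the stiffness matrix, $f$ the load vector, and $q$ the convex nonsmooth functional generated by the friction and contact conditions. The distinctive feature of the semismooth* Newton method is that it linearizes not only the single-valued part $Au - f$ but also the multivalued subdifferential $\partial q$, using its limiting coderivative.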
Fitted Finite Volume Method for Unsaturated Flow Parabolic Problems with Space Degeneration

In the present work, we discuss the question of correct boundary conditions and of an adequate approximation for parabolic problems with space degeneration in porous media. To the Richards equation, as a typical such problem, we apply a time discretization, linearize the resulting nonlinear problem and introduce correct boundary conditions. We then develop a fitted finite volume method for the space discretization of the model problem. A graded space mesh is also derived. We illustrate experimentally that the proposed method is efficient in the case of degenerate permeability.

Miglena N. Koleva, Lubin G. Vulkov
Minimization of p-Laplacian via the Finite Element Method in MATLAB

The minimization of energy functionals is based on a discretization by the finite element method and optimization by the trust-region method. A key tool for an efficient implementation is the local evaluation of the approximated gradients together with the sparsity of the resulting Hessian matrix. Vectorization concepts are explained for the p-Laplace problem in one and two space dimensions.

Ctirad Matonoha, Alexej Moskovka, Jan Valdman
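A minimal 1D sketch of the energy-minimization approach: the p-Laplace energy is discretized with P1 finite elements and handed, with its analytic gradient, to a trust-region optimizer. The paper works in vectorized MATLAB; here we use Python with SciPy's `trust-constr` solver as a stand-in, and all names and parameters are ours:

```python
import numpy as np
from scipy.optimize import minimize

def minimize_p_laplace_1d(p=3.0, n=40, f=1.0):
    """Minimize J(u) = int_0^1 (1/p)|u'|^p - f*u dx with u(0) = u(1) = 0,
    using P1 finite elements and a trust-region optimizer."""
    h = 1.0 / n

    def energy(v):                               # v = interior nodal values
        u = np.concatenate(([0.0], v, [0.0]))    # enforce Dirichlet BCs
        du = np.diff(u) / h                      # constant gradient per element
        return h * np.sum(np.abs(du) ** p) / p - h * f * np.sum(v)

    def gradient(v):
        u = np.concatenate(([0.0], v, [0.0]))
        du = np.diff(u) / h
        g = np.abs(du) ** (p - 1) * np.sign(du)  # derivative of |s|^p / p
        return (g[:-1] - g[1:]) - h * f

    res = minimize(energy, np.zeros(n - 1), jac=gradient, method="trust-constr")
    x = np.linspace(0.0, 1.0, n + 1)
    return x, np.concatenate(([0.0], res.x, [0.0]))

# Sanity check: for p = 2 the problem reduces to -u'' = f, solved by x(1-x)/2.
x, u = minimize_p_laplace_1d(p=2.0, n=20, f=1.0)
```

Supplying the exact local gradient (and, in the paper, exploiting the sparse Hessian) is what makes the optimization efficient; a finite-difference gradient would cost one energy evaluation per unknown at every iteration.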
Quality Optimization of Seismic-Derived Surface Meshes of Geological Bodies

The availability of 3D datasets of geological formations presents a number of opportunities for various numerical simulations, provided quality meshes can be extracted for the features of interest. We present a technique designed to generate an initial level-set-based triangulation of geological formations such as salt volumes, turbidites, faults and certain types of shallow horizons. We then work directly with the underlying voxel data to improve the mesh quality, so that the resulting triangulation is suitable for numerical simulations involving PDEs while still approximating the underlying (implicit) level set well enough. We apply our algorithm to typical Gulf of Mexico formations, including turbidite reservoirs and multiple salt domes. We demonstrate that the resulting meshes are of high quality and can be used directly in coupled poroelastic reservoir simulations.

P. Popov, V. Iliev, G. Fitnev
Backmatter
Metadata
Title
Large-Scale Scientific Computing
Edited by
Dr. Ivan Lirkov
Prof. Svetozar Margenov
Copyright Year
2022
Electronic ISBN
978-3-030-97549-4
Print ISBN
978-3-030-97548-7
DOI
https://doi.org/10.1007/978-3-030-97549-4